Linguistic Inquiry Monograph 50
The MIT Press
Massachusetts Institute of Technology
Cambridge, Massachusetts 02142
http://mitpress.mit.edu
978-0-262-51271-8
"This long-awaited book by David Lebeaux is highly recommended to those who pursue tight, albeit indirect, connections between empirical paradigms and theorizing at the most foundational level. His proposal on the theta subtree and the Case frame points to a new direction of research on cross-linguistic variations."
—Hajime Hoji, Department of Linguistics, University of Southern California

"David Lebeaux's proposals on the binding theory over the last ten years have had profound effects on the development of syntax. In this monograph, he examines the architecture of syntactic theory, based on those proposals, and argues for separate structures for thematic and Case representations, presenting a variety of independent evidence ranging from very early child grammar to idiom interpretation. This monograph should be among the standard references for syntax as it presents an original framework that forms a sound basis for any future research on the binding theory."
—Mamoru Saito, Professor of Linguistics, Nanzan University, and Distinguished Visitor, University of Connecticut
This concise but wide-ranging monograph examines where the conditions of binding theory apply and in doing so considers the nature of phrase structure (in particular how case and theta roles apply) and the nature of the lexical/functional split. David Lebeaux begins with a revised formulation of binding theory. He reexamines Chomsky’s conjecture that all conditions apply at the interfaces, in particular LF (or Logical Form), and argues instead that all negative conditions, in particular Condition C, apply continuously throughout the derivation. Lebeaux draws a distinction between positive and negative conditions, which have different privileges of occurrence according to the architecture of the grammar. Negative conditions, he finds, apply homogeneously throughout the derivation; positive conditions apply solely at LF. A hole in Condition C then forces a reconsideration of the whole architecture of the grammar. He finds that case and theta representations are split apart and are only fused at later points in the derivation, after movement has applied. Lebeaux’s exploration of the relationship between case and theta theory reveals a relationship of greater subtlety and importance than is generally assumed. His arguments should interest syntacticians and those curious about the foundations of grammar.
David Lebeaux is an independent researcher who specializes in syntax and the syntactic elements of language acquisition. He has held positions at Princeton University, the NEC Research Institute, and the University of Maryland, among other institutions, and is the author of Language Acquisition and the Form of the Grammar.
Linguistic Inquiry Monograph Fifty
Where Does Binding Theory Apply? David Lebeaux
Linguistic Inquiry Monographs Samuel Jay Keyser, general editor A complete list of books published in the Linguistic Inquiry Monographs series appears at the back of this book.
Where Does Binding Theory Apply?
David Lebeaux
The MIT Press Cambridge, Massachusetts London, England
© 2009 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

For information about special quantity discounts, please e-mail special_sales@mitpress.mit.edu

This book was set in Times New Roman and Syntax on 3B2 by Asco Typesetters, Hong Kong. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Lebeaux, David.
Where does binding theory apply? / David Lebeaux.
p. cm.—(Linguistic Inquiry Monograph)
Includes bibliographical references.
ISBN 978-0-262-01290-4 (hard cover : alk. paper)—ISBN 978-0-262-51271-8 (pbk. : alk. paper)
1. Government-binding theory (Linguistics) I. Title.
P158.2.L43 2009
415—dc22
2008031027

10 9 8 7 6 5 4 3 2 1
For my late father, Charles, and my mother, Lillian

For Father

He is one of the level headed, and I have been led past an old barn, an old shed, into an equilibrium. In the middle of this pasture, he is one sure man, in the center aging faster and faster, in the sun; This is the grass he grew, these his childhood homes, the green and blue of his earth and sky come together here. Father: this is a strange settling for you, in the water of newer homes. This land is yours to balance under your feet, to spread and steady; it pours, it leaks, it runs.

—Deborah Lebeaux
Contents

Series Foreword ix
Preface xi
Acknowledgments xxv
1 Introduction 1
2 Reconstruction Down A-Chains, and the Single Tree Condition 5
3 More on the Single Tree Condition 15
4 Condition C Applies Everywhere 23
5 The Structure of the Reconstruction Data (A Hole in Condition C) 29
6 Two Interesting Constructions 43
7 The Architecture of the Derivation 51
8 Another Negative Condition 79
9 Conclusion 85
Notes 87
References 91
Index 97
Series Foreword
We are pleased to present the fiftieth in the series Linguistic Inquiry Monographs. These monographs present new and original research beyond the scope of the article. We hope they will benefit our field by bringing to it perspectives that will stimulate further research and insight. Originally published in limited edition, the Linguistic Inquiry Monographs are now more widely available. This change is due to the great interest engendered by the series and by the needs of a growing readership. The editors thank the readers for their support and welcome suggestions about future directions for the series. Samuel Jay Keyser for the Editorial Board
Preface
This book presents some very simple sentences—with the potential to reframe our understanding of the entire linguistic system. The argument is slightly complex, and for this reason I include this preface, to summarize and highlight the major points. The argument starts with a revised formulation of binding theory, which is used in turn to investigate the architecture of the grammar, including phrase structure. The book reexamines Chomsky's (1995) conjecture:

(1) Chomsky's conjecture [interface conjecture]
All conditions apply at the interfaces, in particular LF.

Instead I argue for the following:

(2) Homogeneity conjecture
All negative conditions, in particular Condition C, apply continuously throughout the derivation.

The architecture in (2) is quite different from (1) since it implies that the derivation itself has homogeneous constraints holding over it. These constraints rule out any derivation at the point that it violates Condition C: the derivation crashes and it cannot be saved by further operations. Positive conditions (Condition A, quantificational binding, and quantificational scope read-off) apply at LF, though they apply over structures over which the information is collected in the course of the derivation, by the copy-and-erasure operation or by leaving a trace. (In a sense, then, they apply over the derivation, but more exactly they apply at LF.) This proposal is close to Chomsky's own; it diverges from earlier work by myself and others (Lebeaux 1988, 1991; Burzio 1986; Belletti and Rizzi 1988; Kayne 1984), in which I held that the entire binding theory, including the positive conditions, applied throughout the derivation. The reason for this change is that it appears there is a single tree (the Single Tree
Condition, discussed below) that encodes all quantificational and binding information in a coherent fashion (see also Chomsky 1977a). If the positive binding conditions could apply throughout the derivation, then, when an element is moving, one set of conditions could apply at one point in its path, and another set of conditions could apply at another point in its path, not giving rise to a coherent tree. Since the tree appears to be coherent (see the discussion in chapters 1 and 2), I argue that all positive conditions apply in a single structure. This contention is not trivial: most formulations of binding and interpretation have violated it.

Why this difference between positive and negative conditions? Negative conditions apply everywhere: what I take to be the default or null case. However, positive conditions must lead to an interpretation, and hence must lead to a single, coherent tree: what we call the LF representation. Negative conditions do not lead to an interpretation—they just throw out certain trees—and so need not apply to a single, LF structure.

Let us consider in more detail how negative conditions apply everywhere. The relevant sentences are very simple ones, as noted above:

(3) *Hei seems to John'si mother t to be expected t to win. (Condition C)
(4) *Hei seems to himi t to be expected t to win. (Condition B)

Both sentences are ungrammatical, violating Conditions C and B, respectively. But why? If reconstruction is possible, the matrix subject should optionally reconstruct at LF (or its equivalent in terms of copy-and-erasure), giving rise to the following sentences, which do not violate any conditions:

(5) e seems to John'si mother e to be expected hei to win.
(6) e seems to himi e to be expected hei to win.

The bulk of the argumentation in the first part of this book is thus to show that reconstruction is in general possible, so that sentences like (3) and (4) could give rise to sentences like (5) and (6) at LF. Since these are grammatical, the derivation of (3) and (4) must be ruled out at some pre-LF point. This in turn means that Condition C (and Condition B), the negative conditions, do not apply solely at LF, but must apply throughout the derivation.

The argument that the whole architecture of the grammar has the negative conditions applying throughout the derivation therefore rests on a small and seemingly insignificant part of the grammar: reconstruction. In
fact, this minor module holds within it a key to the whole grammar. The following generalization holds:

(7) A-reconstruction applies (optionally to any trace site) → Condition C applies throughout the derivation

Since the sentences in (3) and (4) can give rise to LFs in (5) and (6), and since (5) and (6) are not ungrammatical at LF, this means that they must be ruled out by Condition C applying at some pre-LF point. In the text, I give six arguments for (A-) reconstruction applying, optionally, to any trace site. One argument turns on the existence of what I call "trapping effects." The sentences are the following:

(8) a. Two women seem t to be expected t to dance with every senator.
b. Two women seem to each other t to be expected t to dance with every senator.

(9) a. For (8a): 2x∀y; ∀y2x
b. For (8b): 2x∀y; not ∀y2x

(8a) and (8b) differ for quantifier scopes. The readings for (8a) and (8b) are shown in (9a) and (9b). Both orderings of quantifiers exist for (8a), 2x∀y and ∀y2x. These correspond to the readings: (i) the same two women dance with each senator, and (ii) for each senator, two women dance with each. For (8b), only the first of these holds, as in (9b): 2x∀y, the same two women dance with every senator. There is no reading in which different women dance with each senator. Why should this divergence occur? The answer must be the following. For (8a), the moved DP two women optionally reconstructs to the base trace site (or its equivalent in terms of copy-and-erasure). This is shown in (10a):

(10) a. LF of (8a): e seem e to be expected two women to dance with every senator.
b. LF of (8b): Two women seem to each other t to be expected t to dance with every senator.

From there, either quantificational ordering can be obtained. Thus there are the two readings for (8a), shown in (9a). In contradistinction, in (8b), shown in (10b), two women must stay in the matrix clause, in order to bind each other at LF. Therefore, it cannot reconstruct (or the equivalent
in terms of copy-and-erasure) to the lowest trace site as the subject of dance. It must stay as the subject of the matrix clause. So only one quantifier ordering is possible, with two women taking scope over every senator (two women being two clauses up). This shows, then, the optional (A-) reconstruction of two women in (8a). See the text for five more arguments.

These sentences also provide an argument for what I call the Single Tree Condition. In the Single Tree Condition, the LF representation must encode the quantificational force of a moved phrase at one point, and not be spread out over several trace sites. This is shown in (8b) and (10b). If the quantificational force of two women could be registered not only at the Spell-Out (surface) position, but at the various trace sites too, then one could use the Spell-Out position to bind each other, and the base (originally merged) position to take both quantifier scopes (2x∀y and ∀y2x). This would predict that the sentence (8b) would be ambiguous, like (8a). Since it is not ambiguous, this argues that the quantificational force of a moved element must be in one particular spot. Other arguments in the text show that this position can be either the base position, the Spell-Out position, or an intermediate spot in the chain.

The above arguments for reconstruction are relevant because they lead to the conclusion that Condition C must apply elsewhere than LF, given the ungrammaticality of sentences like (3), repeated below:

(11) *Hei seemed to John'si mother t to be expected t to win.

To summarize, the first section of the book concludes that Condition C applies throughout the derivation. The remainder of the book takes up a second type of very simple sentence, which, however, proves problematic for the conclusion just given. The sentence is the following:

(12) Johni seems to himselfi t to like cheese.

If Condition C applies throughout the derivation, then the above sentence should be ruled out in its premovement structure, which is (13):

(13) *e seems to himselfi Johni to like cheese.

How, then, can (12) turn out to be grammatical? The answer given here is somewhat complex. The first proposal is the following:1

(14) All DPs begin as schematic elements that are little pro's. Movement of these elements takes place. These are then overlaid in the course of the derivation by pronouns, anaphors, and names, at different points.
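Before turning to how the overlay in (14) is executed, the core logic of the first half of the argument can be made concrete with a small computational sketch. The following Python fragment is an idealization added here for illustration, not the monograph's formalism: a derivation is modeled as a sequence of snapshots, each recording which pronoun indices c-command which coindexed name indices (the configuration Condition C bars), and the interface conjecture (1) is contrasted with the homogeneity conjecture (2).

    # A minimal sketch of the two conjectures. A derivation is a list of
    # snapshots; each snapshot is the set of (pronoun, name) index pairs
    # in which the pronoun c-commands the name.

    def condition_c_ok(snapshot):
        # Condition C bars a pronoun from c-commanding a coindexed name.
        return all(p != n for (p, n) in snapshot)

    def interface_check(derivation):
        # Interface conjecture (1): only the final (LF) snapshot is inspected.
        return condition_c_ok(derivation[-1])

    def homogeneous_check(derivation):
        # Homogeneity conjecture (2): every snapshot must satisfy Condition C;
        # a violation anywhere stars the derivation, and no later operation
        # can remove the star.
        return all(condition_c_ok(s) for s in derivation)

    # (3) *He_i seems to John_i's mother t to be expected t to win.
    # Before reconstruction, he_i c-commands John_i; after optional
    # A-reconstruction at LF, as in (5), it no longer does.
    derivation = [
        {(1, 1)},  # pre-LF structure: he_i c-commands John_i
        set(),     # LF after reconstruction: no offending pair
    ]

    print(interface_check(derivation))    # True: wrongly lets (3) through
    print(homogeneous_check(derivation))  # False: correctly stars (3)

On this toy rendering, (3) comes out grammatical under the interface conjecture precisely because its reconstructed LF (5) is clean; only the homogeneous check records the pre-LF violation.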
This notion of lexical overlay will be grounded in a way that will become clearer below. A sample derivation (of (13)) is given in (15). (For clarity, I have not overlaid the to-phrase.)

(15) Derivation
e seems to himself proi to like cheese. (A nonname pro is present instead of John, hence Condition C is not triggered)
Move → proi seems to himself ti to like cheese
Overlay → Johni seems to himself ti to like cheese
Bind → Johni seems to himselfi ti to like cheese.

At no point is Condition C triggered, because the schematic element pro is present in the premovement position. The general idea, then, is that lexical insertion is staggered throughout the derivation. Lexical insertion is free to apply prior to movement as well, producing structures like the following:

(16) Final sentence: Each other's ties seem to the boys t to be quite colorful.
e seem to the boys pro to be quite colorful
Overlay → e seem to the boys each other's ties to be quite colorful
Bind → e seem to the boysi each other'si ties to be quite colorful
Move → Each other'si ties seem to the boysi t to be quite colorful.

For clarity, I have had Bind apply throughout the derivation. We could have it, and other positive conditions, apply on the LF representation instead, as above, if an element could reconstruct to any overlaid position. Thus each other in (16) could reconstruct to the base position in (16) because it is overlaid there. A negative condition, such as Condition C, continues to apply throughout the derivation. John does not trigger Condition C in (15), because a pro is in its base position—it is overlaid later. A problem arises, however, with the following sentence:

(17) *Which pictures of Johni does hei like t?

If lexical insertion can apply postmovement, and if Condition C applies throughout the derivation, then (17) should be grammatical, since Which pictures of John can escape a Condition C violation by being inserted after movement, where John is no longer in the c-command domain of he. This suggests that there is a deep difference between A- and A′-chains. While A′-chains trigger a Condition C violation from their base position (17), A-chains do not trigger such a violation, as in (12), repeated below:
(18) Johni seems to himselfi t to like cheese.

The difference between (17) and (18) poses a conundrum, which, I claim, goes to the heart of the grammar. In terms of lexical overlay, (17) and (18) would mean that lexical overlay may occur anywhere in the movement of an A-chain, while lexical overlay must occur prior to A′-movement. The only derivation of an A′-moved phrase is shown below, with overlay occurring prior to movement. (I have not shown the overlay for the nonmoved phrases.)

(19)
Only derivation
he saw pro
Overlay → hei saw which pictures of Johni
Condition C → *hei saw which pictures of Johni
Move → *Which pictures of Johni did hei see t?
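The asymmetry between the derivations in (15) and (19) can also be rendered schematically. This is a hedged sketch of my own, not the text's machinery: each derivation is a list of steps, and, following the homogeneity conjecture, the derivation is starred at the first step that puts a name in the c-command domain of a coindexed pronoun.

    # Overlay timing: in an A-chain the name may be overlaid after movement,
    # so Condition C never sees it in the base position; in an A'-chain,
    # overlay must precede movement, so the violation arises mid-derivation
    # and the resulting star cannot be removed.

    def run(steps):
        # Each step is (description, violates_condition_c). The derivation
        # is starred as soon as any step violates Condition C.
        for desc, violates in steps:
            if violates:
                return "* (Condition C violated at: " + desc + ")"
        return "grammatical"

    a_chain = [  # (15): John_i seems to himself_i t to like cheese
        ("merge pro in base position", False),
        ("A-move pro to matrix subject", False),
        ("overlay pro with 'John' after movement", False),
    ]

    a_bar_chain = [  # (19): *Which pictures of John_i does he_i like t?
        ("merge pro as object of 'like'", False),
        ("overlay pro with 'which pictures of John' before movement", True),
        ("A'-move the wh-phrase", False),
    ]

    print(run(a_chain))      # grammatical
    print(run(a_bar_chain))  # starred at the pre-movement overlay step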
This is in contradistinction to the derivation of Johni seemed to himselfi t to like cheese, where overlay occurs after movement to escape a Condition C violation. It is obvious, then, that A-chains and A′-chains operate differently with respect to reconstruction. The question is, is this just a superficial difference in chains, or does this have a deeper aspect bearing on the organization of the grammar? In particular, I argue that the A/A′ distinction should be traced to a theory of Case, A-chains being formed before the application of Case, and A′-chains being formed after the application of Case.

In this book, I give two possible explanations. The more thoroughgoing explanation is based on my work throughout the 1980s and 1990s (Lebeaux 1988, 1991, 1997, 1998, 2000, 2001). Case and theta theory are held to apply to separate phrase markers instead of a single one. The two then fuse (by an operation called Project-α) into a single phrase marker (see also Williams 2003). The operation looks like this:

(20) theta subtree + Case frame → full tree

What does this mean in terms of the schematic structure discussed earlier? Simply put, the Case frame corresponds to the schematic structure, and the theta subtree corresponds to the lexical elements that must be projected into (fused into) the open slots of the Case frame. The full organization therefore looks like (21), with movement taking place on the Case frame/schematic structure.
(21) [diagram: the theta subtree and the Case frame as separate substructures, with A-movement applying on the Case frame/schematic structure before the two are fused by Project-α into the full tree]
An example of the Case frame and theta subtree that become fused is given for the simple sentence The man saw a woman (Lebeaux 1988, 1991, 1997, 2000, 2001). These are fused as follows.

(22) [tree diagrams of the Case frame and the theta subtree for The man saw a woman, and the full tree that results from fusing them]
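To make the Fusion operation concrete, here is a deliberately simplified sketch (mine, and flattened from trees to strings): the Case frame contributes the closed-class material with open slots, and the theta subtree contributes the open-class material that is projected into those slots. The particular slot encoding is an assumption of the illustration only.

    # Project-alpha as slot filling, flattened to strings for simplicity.
    # None marks an open slot in the Case frame; TNS stands in for the
    # Case-assigning tense element.

    case_frame = ["the", None, "TNS", None, "a", None]   # closed-class elements
    theta_subtree = ["man", "saw", "woman"]              # open-class elements

    def project_alpha(case_frame, theta_subtree):
        # Fuse the two representations: open-class items fill the open
        # slots of the Case frame, left to right.
        theta = iter(theta_subtree)
        return [next(theta) if slot is None else slot for slot in case_frame]

    print(project_alpha(case_frame, theta_subtree))
    # ['the', 'man', 'TNS', 'saw', 'a', 'woman']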
Note that the Case frame essentially consists of closed-class elements, while the theta subtree is open-class elements. I have placed simple V here, rather than big V and little v. If one wanted to include the latter, little v would be part of the Case frame. This would strengthen the analysis, in particular the claim that the Case frame consists only of closed-class elements.

A-movement takes place on the Case frame optionally prior to the Fusion or Project-α operation. This was shown in (21) above. Thus the schematic elements (pro's) move. Condition C and all negative conditions apply throughout the derivation. The Case frame contains all Case-receiving and Case-assigning elements. Thus, in this system, determiners receive Case and pass it on to their associated NP. Since they receive Case, they are part of the Case frame (also a closed-class element). Prepositions, on the other hand, are a paramount case of a Case-assigning element. Therefore, they are part of the Case frame, assigning Case. The notion of schematic structure used for binding theory, and movement on it, is an argument for a separate Case frame, since only with a separate Case frame/schematic structure can Condition C violations be avoided in examples like (12), repeated here:

(23) Johni seems to himselfi t to like cheese.

Since John is initially a pro, and movement takes place on the Case frame, where the A-moved element is a pro, Condition C is not triggered. I provide several more arguments in the text for the division into Case frame and theta subtree (see Lebeaux 1988, 1991, 1998, 2000, 2001). I will sketch two of these arguments here.

The first argument traces itself back to the language acquisition sequence. If the Case frame and the theta subtree exist as separate entities, evidence should crop up, perhaps in unexpected places, for each. One such place is in the very early stages of language acquisition (see also Radford 1990). Consider sentences such as the following:

(24) a. See ball
b. Give toy Mommy
c. Allgone shoe

These structures are, I would argue, solely part of the theta subtree. They contain a verb or a verblike element—see, give, allgone—and a thematic element. In each case, the Case-receiving or Case-assigning element, for example, the determiner or preposition, is missing:
(25)
    Element   Full element            Case-receiving or Case-assigning element
    ball      the ball                the: Case-receiving
    Mommy     to Mommy                to: Case-assigning
    shoe      unaccusative position   in theta position, not Case position
    TNS       element missing         no Case, therefore no subject
Ball, in such a structure, does not have the determiner the next to it; thus the Case-receiving element, the, is missing. This is expected, since, according to the proposal, the Case frame is not present in very early speech. Mommy does not have the preposition to assigning it Case; thus the Case-assigning element is missing. The situation is slightly more complex with allgone shoe. Here the question is not of a missing element, but rather with the placement of shoe in allgone shoe. Note that allgone means essentially disappeared: it is an unaccusative predicate. However, shoe is on the wrong side of allgone (given that allgone means disappeared): it is in the direct object position instead of the subject position. That is, for the adult grammar it should be shoe allgone, not allgone shoe, as in the shoe has disappeared. This points to the following crucial fact: the child is generating the shoe in the theta position, not the Case position. This in turn suggests that the Case frame is missing: the theta subtree is regulating the order of the elements. Finally, the Case-assigning TNS is missing; there are thus missing subjects (Hyams 1992). These are all arguments that the theta subtree and not the Case frame is relevant for the description of very early speech. In short, what I call a subgrammar of the full grammar regulates early speech (Lebeaux 2000, 2001).

A second argument for a separate Case frame and theta subtree, with movement initially taking place on the Case frame (prior to their fusion), can be found in idioms. Idioms are fixed pieces of structure. It has been known for a long time that some idioms allow passivization, and others do not. This is shown in (26) and (27):

(26) take advantage of: passivizes
Advantage was taken of John.

(27) kick the bucket: does not passivize
*The bucket was kicked.
The question is, what determines whether an idiom passivizes? It is a contention of Lebeaux (1988, 2001, 2008) that what determines the passivizability of idioms is the freedom of the determiner of the object. If the
determiner is free to vary, then the idiom may passivize; if the determiner is fixed, as part of the idiom, then passivization is impossible. An example is given here:

(28) Determiner fixed
a. kick the bucket
*kick all the bucket
*Some men kicked some buckets.
b. hit the road
*hit all the road
*hit some roads
c. Passivization
*The bucket was kicked.
*The road was hit.

(29) Determiner free
a. take advantage of
take some advantage of
take a lot of advantage of
b. make tracks
make some tracks
make a lot of tracks
c. Passivization
Advantage was taken of John.
Tracks were made by Mary.

In (28), the in kick the bucket is fixed (part of the idiom). Hence passivization is barred. The same consideration applies to the in hit the road. The determiner is part of the idiom, hence passivization is barred. In (29), the determiner is free in take advantage of, hence passivization is possible. The same is true for make tracks (go fast). Note that in some cases, the determiner is part of the citation form, but it actually is freed up in the use of the idiom. In such cases, to the extent to which the determiner has freed up, passivization is possible.

(30) Break the ice: Determiner relatively free even though the is part of the citation form.
a. Determiner free
break a lot of ice
break some ice
b. Passivization (possible)
A lot of ice was broken.
Some ice was broken by that remark.
While the is part of the citation form in break the ice, the determiner is actually somewhat free, as shown by the grammaticality, perhaps slightly degraded, of break a lot of ice and break some ice. To the extent that the determiner has freed up, it undergoes passivization (A lot of ice was broken, Some ice was broken). The Determiner Generalization is therefore the following (see the main text for further discussion):

(31) Determiner Generalization
In a V DP idiom: The determiner is free. ↔ The idiom passivizes.

This leaves us with a large question: why should the possibility of passivization depend on the freedom of the determiner? The answer is found in the division into the theta subtree and the Case frame. While a full explanation is found in the text, idioms like take advantage (of) and make tracks are found in the theta subtree: they are what I will call Level I idioms. Idioms like kick the bucket are found at the post-Fusion point: they are Level II idioms.

(32) [diagram: Level I idioms generated on the theta subtree, prior to Fusion; Level II idioms generated on the post-Fusion tree]
Level I idioms are generated at their deepest level on the theta subtree. Level II idioms are generated at their deepest level on the post-Fusion tree. Now recall that A-movement occurs on the free Case frame prior to the Fusion operation. Thus Case frames are free for Level I idioms, but not for Level II idioms. That is, the following holds:

(33) A-movement applies in general on the free Case frame. The Case frame is free for Level I idioms, but does not exist as a separate unit for Level II idioms (the post-Fusion structure is the earliest point for these idioms).
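The pattern in (31) and (33) lends itself to a small schematic encoding. The sketch below is illustrative only (the classification of each idiom follows (28)–(30), but the data structure is my own): an idiom whose determiner is free is a Level I idiom, generated on the theta subtree while the Case frame is still free, so A-movement (passive) can apply; an idiom with a fixed determiner is Level II, generated post-Fusion, so passivization is barred.

    # Level I vs. Level II idioms and the Determiner Generalization (31).

    IDIOMS = {
        "take advantage of": {"determiner_free": True},   # (29a)
        "make tracks":       {"determiner_free": True},   # (29b)
        "break the ice":     {"determiner_free": True},   # (30): freed up in use
        "kick the bucket":   {"determiner_free": False},  # (28a)
        "hit the road":      {"determiner_free": False},  # (28b)
    }

    def level(idiom):
        # Free determiner -> theta subtree (Level I); fixed -> post-Fusion (Level II).
        return "I" if IDIOMS[idiom]["determiner_free"] else "II"

    def passivizes(idiom):
        # (33): A-movement applies on the free Case frame, which exists
        # as a separate unit only for Level I idioms.
        return level(idiom) == "I"

    for name in IDIOMS:
        print(name, "-> Level", level(name), "| passivizes:", passivizes(name))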
This concludes another argument for the free Case frame. Recall that the argument from binding theory was the existence of a schematic structure/Case frame, underlying the derivation for sentences like (34):

(34) John seems to himself t to like cheese.

In conclusion, the arguments in this book are twofold. The first part of the book proposes that the negative binding conditions, and perhaps all negative conditions, apply throughout the derivation. The key sentences are examples like (35), which, if reconstruction applies optionally at LF, should be grammatical at LF:

(35) a. *Hei seems to John'si mother t to be expected t to win.
b. LF: e seems to John'si mother e to be expected hei to win.

Six arguments are given in the text that (A-) reconstruction does (optionally) apply for (35a). Thus the sentence should be grammatical at LF, since the output of reconstruction would be (35b), and this violates no conditions. Since (35a) is ungrammatical, this must mean that Condition C is violated before reconstruction applies, leading to the conclusion that Condition C applies throughout the derivation.

The first part of this book concerns negative conditions applying throughout the derivation; the second part takes up a particular case that is problematic for this statement. This is given in (36):

(36) Johni seems to himselfi t to like cheese.

This sentence is unexpectedly grammatical. Yet if Condition C applies throughout the derivation, it should trigger Condition C at its premovement site.

(37) Premovement structure
e seems to himselfi Johni to like cheese.

The solution to this was to propose a schematic structure that underlies (37). Lexical insertion (at least of names) is distributed throughout the derivation, and an overlay operation (lexical overlay) overlays little pro's that have moved with full names. Thus John in (36) escapes a Condition C violation in its premovement structure, by starting out as little pro: John later overlays this after A-movement. This schematic structure of little pro's is then identified with what I call the Case frame. The Case frame and the theta subtree are substructures that go into the making of the full structure, by the operation of Fusion or Project-α (they are fused). Each is a pure instantiation of the primitive it encodes, theta roles and Case—that is, each is a pure instantiation of a
licensing relation. A-movement takes place on the free Case frame. This in turn gives rise to the startling pattern of grammaticality and ungrammaticality of Level I and Level II idioms, where Level I idioms (without the determiner) may passivize, and Level II idioms (post-Fusion) may not. An argument was also given for the free theta subtree from very early stages of acquisition. The identification of the schematic structure with the Case frame allows independent arguments to be made for binding theory from phrase structure itself.
Acknowledgments
I would like to acknowledge a large number of people, especially Alan Munn, Cristina Schmitt, Sandiway Fong, Juan Uriagereka, Piroska Csuri, Christiane Fellbaum, Bob Krovetz, Ken Safir, Bob Freidin, Susan Powers, Joseph Aoun, Dominique Sportiche, Hajime Hoji, Mamoru Saito, Robert Frank, Dan Seely, Ray Jackendoff, Tom Roeper, Danny Fox, Ada Brunstein, Sandra Minkkinen, and Noam Chomsky. Special thanks to my family, Pam, Mark, and Theo; my late father, Charles; my mother, Lillian; my sister Debbie; and my father-in-law and brothers-in-law.
1 Introduction
In this book, I would like to argue against the contention of Chomsky (1995) that all conditions apply at the interface levels, and, in particular, at LF.1 I will argue instead that Conditions B and C apply at all points in the derivation, over all intermediate structures. In general, we may differentiate types of theories in which constraints are stated specifically at levels (for example, "LF"), from those in which the constraints are stated at all points. I will argue that natural language is, at least with respect to the negative conditions of binding theory, of the second type and not the first. This leads to questions of what other modules are stated homogeneously over the derivation, and this book suggests that at least one other module, namely the Stray Affix Filter of Lasnik ([1981] 1990), should be added to this list.

The particular form that binding theory takes, however, is slightly complex, and the disagreement with Chomsky's proposal is by no means complete. Binding theory is broken into positive conditions (Condition A), and negative conditions (Conditions B and C). Condition A is a "positive" condition in that it allows certain structures in. Condition A requires that somewhere in its path up the tree, an anaphor find its antecedent (Burzio 1986; Belletti and Rizzi 1988; Kayne 1984; Lebeaux 1988, 1991; Epstein et al. 1998). In a sense, Condition A applies throughout the derivation, if we permit the full power of reconstruction (or its copy-and-erasure equivalent) to allow data from the derivation back in; more exactly, however, it applies at LF, by the Single Tree Condition. However, Conditions B and C are "negative" conditions in that they disallow certain structures. I will argue that Conditions B and C—in particular, the negative conditions—require that at no point in the derivation may they be violated: if a negative condition is violated anywhere, a star is assigned that cannot be removed. That is, the negative conditions cannot in any case be put in the form they hold at LF: rather, they must hold throughout the
derivation. This bifurcation in the positive and the negative conditions corresponds to the points at which the theory outlined below agrees with and disagrees with Chomsky's: the negative conditions apply at every point in the derivation, and at no point in the derivation may they be violated. (This may also be true of other negative conditions in the grammar.) It is this statement that directly contravenes Chomsky's contention that binding theory applies at LF. On the other hand, the positive condition, Condition A, does hold at LF, if the full power of reconstruction or the copy-and-erasure device is used to preserve information from the derivation. In this part, the theory outlined here is in accord with Chomsky's own (Chomsky 1995).

Why is there this difference between positive and negative conditions? I believe this is because positive conditions—interpretive conditions—must give rise to a single coherent representation that encodes them. This is LF. Negative conditions, on the other hand, not being so constrained, may take the other possibility, of applying throughout the derivation.

What does it mean to say that the "positive binding conditions hold at LF"? This is what I call the Single Tree Condition, which holds at LF. The Single Tree Condition states the following two properties:

(1) Any specific piece of lexical material in an element moved many times must be viewed as occupying a particular position in the chain, rather than occupying several positions at once, either directly through the statement of multiple domination relations (Engdahl 1986), or through derivative definitions of c-command, which have the effect of allowing an element to be read as if it were both in its direct position and in its trace site, from the point of view of c-command.2

(2) The positive aspects of the binding conditions (Condition A, Quantifier Raising, idiom interpretation, and bound variable interpretation) all apply to one specific representation, the LF representation.

I believe that the Single Tree Condition is no different than the sort of constraint argued for and envisaged by Chomsky (1995), though I will argue for it in more detail here. For example, rather than having an element "read" as if it were in its trace site, through a derivative definition of c-command, an element must specifically be left (copy-and-erasure) or lowered (reconstruction) to the site. While this might seem intuitively obvious, most classical discussions of binding theory have violated the Single Tree Condition.
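As a schematic illustration of why property (2) matters (again, a sketch of my own rather than anything in the text), suppose each positive condition were free to pick its own representation. Evaluating Condition A at one candidate and quantifier scope at another then overgenerates readings for the trapping sentences of chapter 2, whereas reading both off a single tree does not. The level names and the encoding are assumptions of the illustration; in the strings, "Ay" abbreviates the universal quantifier over y.

    # One representation for all positive conditions vs. a split across
    # representations. Each candidate records whether the anaphor is bound
    # there and which scope orders it supports.

    def split_level_readings(reps):
        # Split theory: Condition A may be satisfied at any representation,
        # and scope may be read off any (possibly different) representation.
        anaphor_ok = any(bound for bound, _ in reps.values())
        all_orders = set().union(*(orders for _, orders in reps.values()))
        return all_orders if anaphor_ok else set()

    def single_tree_readings(reps):
        # Single Tree Condition, property (2): binding and scope must hold
        # of the same representation; a candidate LF with an unbound
        # anaphor contributes nothing.
        out = set()
        for bound, orders in reps.values():
            if bound:
                out |= orders
        return out

    # Two women seem to each other t to be expected t to dance with every senator.
    lf_candidates = {
        "DP kept upstairs": (True,  {"2x > Ay"}),
        "DP reconstructed": (False, {"2x > Ay", "Ay > 2x"}),  # each other unbound
    }

    print(split_level_readings(lf_candidates))   # both orders: overgeneration
    print(single_tree_readings(lf_candidates))   # only '2x > Ay', as attested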
While the statement of the Single Tree Condition mimics Chomsky's own, the statement of the negative Binding Conditions, Conditions B and C, forms a direct alternative to Chomsky's (1995) theory, in the sense that they apply homogeneously throughout the derivation.

The logic of the argument that negative conditions apply throughout the derivation proceeds as follows. First, I argue that reconstruction occurs down A-chains as well as A′-chains. Second, I propose that the Single Tree Condition holds at LF. Third, I argue that if reconstruction down A-chains is possible, then a set of structures exist that would be grammatical at LF, given reconstruction. However, factually, these structures are ruled out. This means that either (a) reconstruction down A-chains does not exist, or (b) the negative part of binding theory must apply elsewhere than just at LF—for example, homogeneously throughout the derivation. Since I have just given arguments that A-reconstruction does exist, this means that the negative part of binding theory must be stated homogeneously throughout the derivation. Fourth, in the second half of the book, I take up a problematic case for my position that the negative conditions apply homogeneously throughout the derivation. This leads to the conclusion that there is initially a separate thematic representation (subtree) and Case frame, and that these two are fused. Separate conditions may apply over either. Fifth, an additional case where a negative condition applies throughout the derivation is taken up.

The important work of Vainikka (1989), Saito (1989, 1992), Fox (2000), Sportiche (2005), Fong (1991), Sauerland (1998), Jackendoff (1992), Chametzky (2003), and Hoji (2003) has also been influential in developing my general approach, although not used directly here.
2 Reconstruction Down A-Chains, and the Single Tree Condition
The necessity of a single coherent representation encoding quantificational binding and Condition A can be seen in the following data set, originally discovered by May (1979, 1985), Aoun (1982), and later independently by myself (Lebeaux 1995). Consider the following representation with twice-moved quantifiers. The moved quantifier (2x) and the in-place quantifier (∀y) can have either scope ordering. (For convenience, I have used the notation of indices.)

(1) a. Two womeni seem ti to be expected ti to dance with every senator. (Ambiguous)
b. 2x∀y (There are two women who dance with every senator)
c. ∀y2x (Every senator has two women who dance with them—not necessarily the same two women)

(2) a. Two senatorsi seem ti to be expected ti to be caught ti in every sting operation. (Ambiguous)
b. 2x∀y (Same two senators caught again and again)
c. ∀y2x (Each sting operation will net two senators)

(3) Two system administratorsi seem ti to be expected ti to crash every system. (Ambiguous—same readings)

How can the 2x quantifier be found in the scope of the universal?1 This can only be done by the lowering of the quantifier to its premovement position (or its equivalent in terms of copy-and-erasure, Chomsky 1995)—rather than, for instance, scoping the downstairs quantifier upstairs. This is shown by two facts. First, placing an anaphor in the top clause freezes the scope ordering of the quantifiers. Now only the Spell-Out c-commanding quantifier may have wide scope.

(4) Two womeni seem to each other ti to be expected ti to dance with every senator. 2x∀y; not ∀y2x
(5) Two senatorsi seem to each other ti to be expected ti to be caught ti in every sting operation. 2x∀y; not ∀y2x

(6) Two system administratorsi seem to each other ti to be expected ti to crash every system. 2x∀y; not ∀y2x

This effect, which we might call a "trapping effect," exhibits precisely the coherence effect that we are talking about. To get the lower scope reading of two women, two senators, and two system administrators in (1)–(3), we would assume some sort of lowering process occurs (or copy-and-erasure), so that the LFs of (1)–(3) are (7)–(9).2

(7) e seem e to be expected two women to dance with every senator.
(8) e seem e to be expected e to be caught two senators in every sting operation.
(9) e seem e to be expected two system administrators to crash every system.

From here, the quantifiers can be scoped: since they are each in the same clause, it would be expected that either ordering would be possible, as it in fact is. The sentences in (4)–(6), on the other hand, "trap" the quantifier upstairs. If the quantifiers in (4)–(6) were lowered, the putative LFs would be (10)–(12).

(10) e seem to each other e to be expected two women to dance with every senator.
(11) e seem to each other e to be expected e to be caught two senators in every sting operation.
(12) e seem to each other e to be expected two system administrators to crash every system.

If these putative LFs were real ones, we would indeed get two scope orderings of the quantifiers. However, these LFs are ill-formed, because the anaphor is unbound: there is not a single level (LF) to read off the quantificational binding and the anaphoric binding. So the fact that there is only one scope ordering for the quantifiers in (4)–(6) shows that the quantifier is trapped upstairs (it has to bind the anaphor) at LF, and thus shows that there is one coherent representation for quantifier scope and anaphoric binding. It also shows that the lower-clause quantifier is
not scoped upward in (1)–(3), given matrix scope, because then it would be expected that the two scope orderings would still be possible with the addition of the anaphor in (4)–(6). A second argument that the lower-clause quantifier is not scoped upward in (1)–(3) is that if the matrix quantifier is in a to-phrase, only one scope ordering is possible.

(13) Maryi seems to two women ti to be expected ti to dance with every senator. 2x∀y; not ∀y2x (seems to the same two women to be expected to dance with every senator)

This shows clearly that it is the element in the chain scoping downward—presumably by virtue of reconstruction in the chain, or binding and erasure—rather than the downstairs element scoping upward, which is crucial.

The argument above shows that, in terms of reconstruction, A-chains operate identically to A′-chains (see also May 1985; Hornstein 1995; Lebeaux 1998). I give five more arguments to show the possibility of A-reconstruction.

First, the above argument has shown, from quantifier scoping, that A-reconstruction occurs. That is, quantifiers can scope downstairs, through the A-chain (or May's procedure, or Chomsky's copy-and-erasure), and the possibility of this scoping is blocked if the quantifier is "trapped" upstairs by an anaphor.

The second argument is that anaphors inside moved noun phrases may be bound to elements lower in the tree, if they are part of an A-chain. While there is a slight complexity in the examples, they are essentially grammatical.

(14) Each other's mothers seem t to please the two boys. (Each other bound by two boys)
(15) Each other's presents are expected t to please the two children. (Each other bound by two children)
(16) Each other's presents are expected t to make the two children happy. (Each other bound by two children)

Two crucial comments should be made about these examples. First, there is a class of examples often given in the literature that is supposed to show that A-reconstruction does not apply, but that are simply irrelevant. These are examples where the anaphor is not embedded in a noun
phrase, but is itself a full noun phrase as in (17) and (18). I agree with the judgment that these are fully ungrammatical.

(17) *Himselfi seems t to please t John.
(18) *Each otheri seem t to please t the two boys.

The reason that examples like (17) and (18) are irrelevant is that while it is true that after lowering, the anaphor would be in place to be bound by the antecedent noun phrase, before reconstruction applies, the structure has been ruled out by Condition C, where the anaphor is coindexed with and c-commands a name (i.e., himself and John in (17)). Therefore, as long as Binding Condition C applies throughout the derivation, examples like those in (17) and (18) are irrelevant, because they are ruled out by Condition C. That is, while reconstruction would allow the structures in (17) and (18) to take a grammatical form, they are already ruled ungrammatical by Condition C applying throughout the derivation. To see the possibility of reconstruction, one must instead embed the anaphor further, as was done in (14)–(16), for the true case.

The second crucial comment about the examples in (14)–(16) is that the alternative analysis, that the basic structure in (14)–(16) allows these anaphors to be bound in place by the quantifier in the lower clause scoping upward at LF, must be ruled out. That is because in this case, these sentences would not constitute evidence that lowering down A-chains has occurred at all. This analysis can be ruled out again by careful analysis. All that needs to be done is to place the anaphor in a to- or a by-phrase, as in (19)–(21).

(19) ?*John seemed to each other's mothers t to please the two boys.
(20) ?*The present is expected by each other's parents t to please the two boys.
(21) *The present is expected by each other's parents t to make the two boys happy.

Since these are ungrammatical, it must be that the phrase containing the anaphor is lowering in examples (14)–(16). In other words, A-reconstruction is occurring.

Another argument for A-reconstruction is similar to the second, but instead turns on quantificational binding. Consider the following sentences:

(22) His first book tends t to please every man.
(23) Her first performance seems t to be expected t to please every composer.
(24) His Nobel Prize–winning speech seems t to be expected t to please every Nobel Prize winner.
(25) Her inaugural address seems t to be anticipated t to backfire on every elected president.

In each case, the binding can be gotten. This again shows A-reconstruction. How does one again rule out the possibility that the quantifier is scoping upward instead? By placing the DP containing the bound pronoun in a to- or by-phrase. Contrast (22)–(25) with (26)–(28).

(26) *The president seems to his first wife t to be expected t to please every man. (No binding of his by every man)
(27) *The president is expected by his mother t to please every man. (No binding of his by every man)
(28) *Princess Diana is expected by his mother t to seem t to be mourned by every man. (No binding of his by every man)

The bindings of the pronoun by the lower quantifier here are impossible. This contrasts with examples (22)–(25). Therefore, this is a third argument that A-reconstruction in fact occurs.

In examples (24) and (25), some researchers may have a somewhat hard time getting the binding of the pronoun, because of weak crossover, though I believe that in these psychological predicates, like please, the weak crossover effect is abrogated because the constituent containing the bound pronoun (for example, his) has come from within the VP (Belletti and Rizzi 1988), and thus at its deepest level the pronoun is c-commanded by the quantifier. I will momentarily guard against that using the PRO-gate phenomenon. However, even for these researchers there should be a contrast between examples where the pronoun in the raised subject position of a moved element is bound by a quantifier in a lower clause, versus those in which the pronoun is in a to- or by-phrase. The latter should be disallowed as ungrammatical. That is, the sentences in which the bound pronoun is in a moved phrase should pattern with those in which it is in an unmoved phrase (as slightly odd, perhaps), and these should both be decisively better than those in which the bound pronoun is in a to- or by-phrase, several clauses up the tree. In other words, the sentences in (29a) and (29b) should be comparable in grammaticality, while that in (30) should be much worse. I believe this to be the case, even for researchers who find (29b) somewhat difficult—that is, it is comparable to (29a), not (30). This precisely shows that lowering or its equivalent has occurred.
(29) (a) and (b) are comparable with respect to binding.
a. His mother pleases every man. (Bound reading)
b. His mother seems t to please every man. (Bound reading)

(30) Not comparable with respect to binding (Binding impossible)
Mary seems to his mother t to please every man.

A fourth argument for A-reconstruction can be constructed for those who find the possibility of a weak crossover effect in the sentences above too intrusive. This is using the fact of the PRO-gate (Safir 1984; Higginbotham 1983)—that PRO subjects can be bound in weak crossover structures with perfect grammaticality—to construct examples comparable to (22)–(25). The examples are given here:

(31) PROi seeing Claire seems t to be expected t to make Marki happy.
(32) PROi forgetting his lines seems t to have been anticipated t by Jeffi.
(33) PROi knowing Bill seems t to be a point of great pride to Davei.

In each of these cases, the way that the PRO gets bound is by reconstruction and then binding, since in each case the antecedent is several clauses removed. The LF for (32) would be (34):

(34) e seems e to have been anticipated (PROi forgetting his lines) by Jeffi.

From here, binding may take place.

A fifth argument for A-reconstruction has to do with a set of double binding constructions discussed in Lebeaux 1984, Rizzi 1986, and Browning 1987. A certain class of copularlike predicates allow PRO constructions on each side of them. The interesting thing is that the two arbitrary PROs must range over the same arbitrary elements at the same time (Lebeaux 1984). Examples are given in (35) and (36):

(35) PRO to know him is PRO to love him.
Meaning: For an arbitrary person to know someone is for the same arbitrary person to love that someone.

(36) PRO to get a good apartment requires PRO knowing the landlord.
Meaning: For an arbitrary person to get a good apartment requires for the same arbitrary person to know the landlord.

These sentences, with the linking of the PROs, may be accounted for by positing a single topiclike operator that binds both.

(37) OPi ((PROi to know him) is (PROi to love him))
Thus the two PROs are linked in reference, by the operator. This means that all PROs, including so-called arbitrary PROs, are bound elements. See Lebeaux 1984 for further discussion. The generalization is that for these copularlike predicates, the arbitrarily varying element varies simultaneously over the same entity in the two clauses. We may call this the "linked" reading. The necessity for this linked reading disappears if the second clause is embedded. Consider (38a) and (38b):

(38) a. PRO having a just society requires that PRO having to serve in the military be abolished. (Two arbitrary PROs, but unlinked: one operator for each of the PROs, hence not linked)
b. PRO winning the West by violence requires that PRO settling the East Coast be done first. (Two arbitrary PROs, but unlinked: one operator for each of the PROs, hence not linked)

In these examples, the one who is "having a just society" is not the one who is "serving in the military"; also, the one "winning the West" is not the one "settling the East Coast." There are two unlinked operators in (38a) and (38b), because the binding of PRO is a local phenomenon. The difference in these examples is that in (38a) and (38b) the second clause containing the arbitrary PRO is further embedded. The generalization is that if the clauses are simply on opposite sides of a copularlike predicate, they are linked. Now consider (39):

(39) PRO to get a good apartment tends t to seem t to require PRO knowing the landlord.

Sentence (39), and examples like it, are obligatorily linked: the two PROs arbitrarily range over the same entities, as in (36). Yet the second clause is, on the surface, many times embedded. The only solution is that A-reconstruction of the beginning clause, "PRO to get a good apartment," must have occurred. This constitutes a fifth argument for A-reconstruction.

In this section I have presented five arguments for A-reconstruction: the ambiguity of quantifiers in (40a) together with the lack of ambiguity in (40b) (this also shows the Single Tree Condition), the possibility of anaphor binding in (41), the possibility of pronoun binding in (42), the possibility of PRO-gate binding in (43), and the linked reading in (44).

(40) a. Two system administrators seem t to be expected t to crash every system.
b. Two system administrators seem to each other t to be expected t to crash every system.

(41) Each other's mothers seem t to please the two boys.
(42) His Nobel Prize–winning speech seems t to be expected t to please every Nobel Prize winner.
(43) PRO forgetting his lines seems t to have been anticipated t by Jeff.
(44) PRO to get a good apartment tends t to seem t to require PRO knowing the landlord.

Because the arguments for A-reconstruction are so powerful, the interesting question is why the reconstruction analysis has not generally been accepted. I believe there are three reasons for this. The first is the fact that sentences like (45) are totally ungrammatical.

(45) *Himselfi seems t to please t Johni.

Given that reconstruction does in fact occur, and given that John c-commands the base position of himself (assuming Belletti and Rizzi 1988), if one states binding theory at LF, then (45) should be expected to be good, if reconstruction does occur (i.e., himself should be bound by John at LF). My response to this is that (45) is ruled ungrammatical not by the lack of A-reconstruction at LF, but rather because Condition C, which applies throughout the derivation, applies to rule out (45), before it ever has been submitted to LF—that is, at Spell-Out (s-structure) or at the construction moment. In other words, Condition C rules out the structure prior to the LF reconstruction, so the fact that the structure allows binding at LF is irrelevant. The truth of this analysis can be seen by embedding the anaphor further in the noun phrase. In that case, the possibility of a Condition C violation has been excluded, and the sentence is indeed good.

(46) Each other's parents seem t to please t the two girls.

The second reason that the A-reconstruction analysis (i.e., the possibility of A-reconstruction) has not been unequivocally adopted in the literature is that there is a certain equivocation in the statement "Reconstruction applies down A-chains." Does this mean that reconstruction must occur, or that it may occur? If it means that it must occur, then there is a simple counterexample to A-reconstruction, namely, the sentence in (47):

(47) John seems to himself t to like Mary.
If A-reconstruction must occur, then the DP John would obligatorily reconstruct, leaving the anaphor unbound at LF, and the sentence would be expected to be ruled out—of course, it is good. Therefore, the true statement of A-reconstruction must be that it may occur to any of its trace sites. The comparable statement may also be made in terms of copy-and-erasure. I now immediately state this, with the conditional:

(48) A-reconstruction
A-reconstruction may occur (at LF) to any of the trace sites of a DP that has been moved many times (but it need not occur). If it does not occur, what is in the trace sites are the bound φ-features.3

To make this clearer, and to adopt terminology somewhat closer to that of Chomsky (1995), let us state this in terms of candidate sets. Suppose that an element copied many times produces a candidate set of copied elements as it moves up the tree.

(49) DP . . . DP . . . DP . . . DP (a many-times-moved element; the set of its copies constitutes the candidate set)

The candidate set here would thus consist of several instances of John or each other's mother. So (49) would look like (50), with tree material in between.

(50) Each other's mother . . . each other's mother . . . each other's mother

The rule for a candidate set at LF is the following:

(51) Rule for a candidate set at LF (A-chains)
Erase all members of a candidate set, except one.4

What constitutes the criterion that allows a member of the candidate set to survive? It is important to note that any trace site (copied element) will do. Any of the members of the candidate set may survive, just as any point may be a point of A-reconstruction.

I have given directly above two erroneous reasons why the A-reconstruction analysis has not been generally adopted in the literature, even though the arguments for it seem compelling. These reasons do not hold. In chapter 3, I give six positive reasons for A-reconstruction.
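Rule (51) and its interaction with the positive conditions can be summarized in one more sketch, an idealization of mine rather than the book's definition: each choice of a surviving copy yields one candidate LF, and a reading is available only if some single survivor supports every positive condition at once. Sites are encoded as clause indices, the c-command test is reduced to a comparison of those indices, and "Ay" abbreviates the universal quantifier over y; all of these encodings are assumptions of the illustration.

    # Rule (51): erase all members of the candidate set except one; then
    # read the positive conditions off the resulting single tree.
    # Sites are clause indices: 0 = matrix (Spell-Out position), 2 = base.

    def readings(copy_sites, anaphor_clause=None, universal_clause=2):
        orders = set()
        for survivor in copy_sites:          # each choice of surviving copy
            if anaphor_clause is not None and survivor > anaphor_clause:
                continue                     # anaphor would be left unbound
            if survivor == universal_clause:
                orders |= {"2x > Ay", "Ay > 2x"}   # clausemate quantifiers
            else:
                orders.add("2x > Ay")        # the DP scopes from a higher clause
        return orders

    # (1) Two women seem t to be expected t to dance with every senator.
    print(readings([0, 1, 2]))                    # both orders: ambiguous

    # (4) Two women seem to each other t ... : the anaphor traps the DP.
    print(readings([0, 1, 2], anaphor_clause=0))  # only '2x > Ay'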
3 More on the Single Tree Condition
In this chapter, I would like to discuss in more detail the notion of the Single Tree Condition. This is in accord with Chomsky's (1995) notion, and further explicates and gives substance to his idea. Thus my position on the Single Tree Condition mimics Chomsky's own, unlike the preceding work on A-reconstruction, and the work to follow on the negative Binding Conditions. The basic idea behind the Single Tree Condition is twofold.

Single Tree Condition

Part 1. First, lexical material must only be read as if it were in one position on a chain. This excludes dual domination, a rare position in the literature (see Engdahl 1986 for a forthright example of this approach), but also bars a type of binding approach that derivatively states c-command in terms of traces as well as direct c-command (an example follows). In the copy-and-erasure theory, it bars all the copied elements simultaneously from fulfilling the binding conditions. A subtlety with this position is discussed in chapter 6, with relative clauses.1

Part 2. The second idea behind the Single Tree Condition is that a large number of positive interpretive conditions—in particular, idiom interpretation (Chomsky 1995), Binding Condition A, Quantifier Raising and Interpretation, and Pronoun Binding—are all stated uniformly at a single level, LF. That is, one level, to which we give the name LF, must coherently represent this information.

Part 1 of the Single Tree Condition is a direct alternative to Barss's (1986) "Chain-Binding" approach (see also Cichetti 1997). To make the Single Tree Condition clearer, let us start with two examples that would violate it.

Example 1 (violates the Single Tree Condition, part 2) The following would violate the Single Tree Condition, part 2: "Binding theory applies
at s-structure; Quantifier Interpretation (QR) applies at LF."2 This was of course the standard GB view. The above would violate the Single Tree Condition because binding theory would be stated at a different level than Quantifier Interpretation.

What constitutes empirical evidence against the statement in example 1? Precisely example (4) in chapter 2.

(1) a. Two senators seem t to be expected t to be caught t in every sting operation.
    b. 2x∀y or ∀y2x
(2) a. Two senators seem to each other t to be expected t to be caught t in every sting operation.
    b. 2x∀y, not ∀y2x

The additional reading disappears in (2) because the quantifier must bind the anaphor in the main clause. However, this is only because the two constraints are stated at exactly the same level. If one were stated at one level and the other at another level, the following would be possible:

(3) a. s-structure, read off anaphor binding relation
       Two senators seem to each other t to be expected t to be caught t in every sting operation.
    b. Lower quantifier
    c. LF, read off both quantifier scope possibilities
       e seem to each other e to be expected e to be caught two senators in every sting operation.

In the copy-and-erasure framework, this would disallow one copy being used for the binding relation, and the other being used for quantifier interpretation.

Example 1′ (violates the Single Tree Condition, part 1)
The second way of violating the Single Tree Condition would be by allowing derivative notions of c-command to create "tangled trees." Suppose that one assumed the following:

(4) a. A trace-commands B iff A c-commands B or A c-commands a trace of B.
    b. A may bind B if A trace-commands B.

Such definitions are common in the literature. In effect, they create a tree in which a whole set of derivative relations—for example, trace-command relations—are defined that have precisely the character of dual domination. Suppose, for example, that we have the tree in (5), where D is the trace of A:
(5) [tree diagram not reproduced: D is the trace of A; B c-commands D but not A]
B c-commands D, but does not c-command A, as expected. However, what about trace-command? In this case, B and C trace-command A, even though they do not c-command it. What "trace-command" does is give rise to the following "tangled tree," where B and C trace-command A (through D), as well as trace-commanding D. In fact, this trace-command tree is precisely the same as Engdahl's c-command tree involving dual domination.

(6) Trace-command tree (tangled tree)
    [diagram not reproduced]
A trace-commands B iff the first branching node dominating A dominates B, in the trace-command tree. It is this structure that I also wish to rule out, by the Single Tree Condition, part 1.

What literal examples does this rule out? Precisely the same ones that we wanted to rule out in (1) and (2), while allowing in the readings that we wanted ruled in. Recall the examples in (1) and (2), repeated below: the first allows the quantifier to be lowered, while the second does not.

(7) a. Two senators seem t to be expected t to be caught t in every sting operation.
    b. 2x∀y or ∀y2x
(8) a. Two senators seem to each other t to be expected t to be caught t in every sting operation.
    b. 2x∀y, not ∀y2x

The problem with the "trace-command" predicate is that, by its clever use, it allows one to get both readings for (8a), just as for (7a)—and this outcome is incorrect: (8a) does not have both readings. In short, it allows too much into the grammar. Suppose one proposed, for example, the following:

(9) Two quantifiers are in mutual scope relations iff they are in a trace-command relation with each other.

Then (8) would, wrongly, be predicted to have two readings: two senators could still bind each other from its surface position, but because the trace-command stipulation in (9) allowed the two quantifiers to be in a mutual scope relationship to each other via the trace-command relation, two readings would falsely be expected. This whole line of overgeneration is barred if predicates like the derived predicate "trace-command" are not allowed, and elements must be literally in a place in a tree to be read as such. This is what is said in the Single Tree Condition, part 1.

Example 2 (violates the Single Tree Condition, part 2)
This example is taken from Chomsky (1995). The result that Chomsky derives could not be derived if the Single Tree Condition, part 2, were violated. The following violates the Single Tree Condition, part 2: "Binding Theory A applies at s-structure or Spell-Out; idiom interpretation applies at LF." This statement would violate the Single Tree Condition because Binding Theory A (a positive condition) would apply at a different level than idiom interpretation. What structures would one wrongly rule in by violating the Single Tree Condition, part 2 (thus arguing that the Single Tree Condition is correct)? Exactly those discussed in Chomsky 1995:

(10) John wonders which pictures of himself Bill took t?
     For the idiomatic interpretation of take, pictures of himself must be reconstructed to the lower clause, and thus himself must take Bill as an antecedent. (Chomsky)

Suppose that one said instead that binding theory applied at s-structure or Spell-Out, and idiom interpretation at LF (violating the Single Tree Condition, part 2). Then one could bind the anaphor himself by John at s-structure, lower the wh-phrase restrictor between s-structure and LF, and
get the correct idiom interpretation at LF. That is, by allowing binding theory (a positive condition) and idiom interpretation to apply at two different levels, one wrongly allows the structure to be grammatical.

Example 2′ (violates the Single Tree Condition, part 1)
One also allows the structures in (10) with the wrong binding, if one allows in derivative theories of c-command such as trace-command, which have the effect of allowing the element to appear in two places in the trace-command tree. Suppose, for example, one assumed the following:

(11) A is a trace-sister of B iff A is a sister of B or a sister of the trace of B.
(12) Idioms are formed from trace-sisters.

Then precisely the wrong result will follow. That is, the fronted element, which pictures of himself, can get its binding from its s-structure or Spell-Out position (and direct LF position). On the other hand, the idiom is interpreted as well formed through the trace-sister relation. The result is precisely the wrong one: one would expect himself to be potentially bound by Bill—which it cannot be.

The conclusion of the set of examples 1 and 2 is that one does not want to allow oneself the power of relations like "trace-command" and "trace-sister," which in effect create a tangled tree, because this allows in structures in the grammar that should in fact be ruled out as ungrammatical. This is the Single Tree Condition, part 1. The other conclusion is that one wants all of the following positive conditions to apply uniformly at one level, presumably LF: Binding Condition A, Quantifier Interpretation, Idiom Interpretation.

I have now concluded the argument for the Single Tree Condition. Assuming that some conditions are "bundled" at LF, what are they? I have so far argued for the following:

(13) "Bundled" at LF
     Quantifier Interpretation
     Binding Condition A
     Idiom Interpretation

There is at least one other condition that we can add to this bundle, namely, bound pronoun interpretation. Consider the following contrast:

(14) a. Two women seem t to be attracted t to every man.
     b. 2x∀y or ∀y2x
(15) a. Two women seem to their mother t to be attracted t to every man.
     b. 2x∀y, not ∀y2x (when their is bound to two)
In this example, two women as a quantifier permutes with every man in (14). This is presumably due to the lowering of the quantifier to get it into the same clause as every man at LF. However, to get the bound reading of their in (15), the quantifier must stay in the matrix clause. If it stays in the matrix clause, then it cannot permute with every man, so there is only one scope ordering, as expected. This means that the following positive conditions are bundled at LF:

(16) Bundled at LF
     Quantifier Interpretation
     Binding Condition A
     Idiom Interpretation
     Bound Pronoun Interpretation

Note that the sentences in (14) and (15) also constitute a sixth argument that reconstruction down A-chains occurs, similar to the first one given in this book. Both quantifier readings are dependent on the lack of a bound reading of the coclausal pronoun. When there is a bound pronoun, one of the quantifier readings disappears. This can only mean that the quantifier is lowered in the nonbound-pronoun case.

A final argument for the Single Tree Condition, and for the proposal that there is exactly one place from which a moved constituent may be interpreted, is the following. Consider the sentences in (17) and (18):

(17) a. Which pictures of each other by him did John think that the boys liked t?
     b. Which pictures of him by each other did the boys think that John liked t?
     c. ?*Which pictures of each other by each other did the boys think that the girls liked t?

(18) a. Which pictures of himself by her did Mary say that John liked t?
     b. Which pictures of him by herself did Mary say that John liked t?
     c. ?*Which pictures of himself by herself did Mary say that John liked t?
The above sentences are somewhat hard to process. Nonetheless, it appears that the judgments are fairly secure. The first member of each triplet, the (a) sentence, is grammatical because the wh-phrase (or restrictor) reconstructs to the base site (or the copy-and-erasure equivalent). The second member of each triplet, the (b) sentence, is grammatical because the wh-phrase
reconstructs to the intermediate site. The third example is ungrammatical because the two conflicting sites would both have to be used to get the two bindings—that is, there is no one place where the picture–noun phrase could be placed such that it would get both bindings. This supports the Single Tree Condition.3 This case is different from the adjunct case later, because it involves a sequence of two complements. However, the strongest reason for the Single Tree Condition is not this last argument but the convergence of the four bundled interpretive activities at LF.

Note that this also suggests that the previous position that all binding conditions, including positive conditions, apply throughout the derivation (Burzio 1986; Kayne 1984; Lebeaux 1988, 1991; Epstein et al. 1998) cannot quite be upheld, because of the Single Tree Condition. Rather, it is the negative binding conditions that apply throughout the derivation. As for Condition A, while information is collected throughout the derivation, it applies at LF, with moved elements at a particular site in the tree.

This chapter has presented a number of arguments for the bundling of several conditions at a single level, and for the reconstruction of material to a particular spot on the tree, rather than the use of multiple active copies or a derivative notion of c-command. Evidence like the "trapping-effect" argument supports A-reconstruction. This A-reconstruction, or copy-and-erasure, was used to place the element in the tree. Six arguments were given for A-reconstruction:

Trapping-effect arguments
(19) Two womeni seem to each otheri t to be expected t to dance with every senator. (Anaphor)
(20) Two womeni seem to theiri mother t to be attracted t to every man. (Bound pronoun)

Bound-element-within-DP arguments
(21) Each other'si mother seems t to please the two boysi. (Anaphor)
(22) Hisi first book tends t to please every mani. (Bound pronoun)
(23) PROi seeing Claire seems t to be expected t to make Marki happy. (PRO)

Double-binding construction argument
(24) PROi to get a good apartment tends t to seem t to require PROi knowing the landlord.

These arguments show A-reconstruction effects in six areas.
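The trapping logic that runs through these arguments can also be put in miniature computational form. The sketch below is a toy encoding of my own, not part of the theory: binding by a matrix-clause element restricts the surviving copy to the matrix position, which in turn removes the lowered scope reading.

```python
def available_readings(binder_in_matrix):
    """Toy model of the trapping effect. Site 0 is the matrix (surface)
    position; site 1 is the embedded trace site. The quantifier 'two N'
    may survive at either site (A-reconstruction), but binding an
    anaphor or pronoun in the matrix clause forces site 0."""
    readings = set()
    for site in (0, 1):
        if binder_in_matrix and site != 0:
            continue  # the binder must stay in the matrix clause
        # Scope permutation with 'every N' requires clausemate status.
        readings.add("every > two" if site == 1 else "two > every")
    return sorted(readings)

print(available_readings(False))  # ['every > two', 'two > every']
print(available_readings(True))   # ['two > every'] -- reading trapped
```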
4
Condition C Applies Everywhere
Given that A-reconstruction applies, for which six arguments have been given earlier, the fact that Condition C applies everywhere can easily be proven. The arguments for A-reconstruction serve as a lemma leading up to the proof of the universal application of Condition C. The fact that Condition C applies everywhere is an important result, because it suggests a particular architecture: one where the negative conditions apply homogeneously throughout the derivation. Three types of examples show that Condition C applies everywhere: first, strong instances of Condition C violations (example (1)); second, weak instances of Condition C violations (example (2)); and third, instances of anaphor binding that show a difference depending on whether the anaphor is embedded or not (example (3)).

(1) a. *Hei seems to John'si mother t to be expected t to win.
    b. *Hei seems to Johni t to be expected t to win.
(2) ??John seems to John's mother t to be expected t to win. (Weak Condition C violation, because it involves two names, not a name and a pronoun)
(3) a. *Himselfi pleases t Johni.
    b. Each other'si parents please t the two boysi.
    c. Hei seems to every mani t to be quite wonderful. (No binding)
    d. Hisi mother seems to every mani t to be quite wonderful. (Binding)

These examples are simple, but crucial. First, consider (1a). This sentence is obviously ungrammatical. However, if there is the possibility of A-reconstruction, which I have just extensively argued for, then an LF of (1a) would be (4):
(4) LF of (1a)
    e seems to John'si mother e to be expected hei to win.

This violates no conditions at LF; in particular, it does not violate Condition C. Therefore, if (a) Condition C applies only at LF, and (b) A-reconstruction freely applies, then (1) should be grammatical. It is not grammatical. Therefore, either A-reconstruction does not freely apply—but I have just given six arguments that it does—or Condition C is not exclusive to LF. I take the second conclusion, suggesting a change in the architecture of the grammar.

The second example, (2), is similar. Lasnik (1990) points out that there is a marked difference in Condition C depending on whether it holds between a pronoun and a name, or between two names. I call the second type of violation a "weak" Condition C violation. Consider now the LF of (2). It would be (5):

(5) LF of (2)
    e seems to John'si mother e to be expected Johni to win.

This LF again violates no conditions: in particular, it does not violate Condition C. Therefore, if A-reconstruction (or its copy-and-erasure equivalent) is a possibility, it should "save" the structure (2) at LF. The fact that the structure cannot be saved suggests that it is ruled out before A-reconstruction has had a chance to apply. That is, Condition C has applied throughout the derivation.

The third example stands on the contrast between (3a) and (3b). While I have assumed the Belletti and Rizzi analysis, this is not crucial. Why should there be a contrast between (3a) and (3b)? A common assumption might be that (3a) fails Binding Condition A at LF, because the anaphor is not c-commanded by an antecedent. However, the fact that (3b) is grammatical makes this argumentation impossible. For if the anaphor in (3a) could not be bound at LF, why can the anaphor in (3b) be bound at LF? Therefore, the only possibility is that (3a) is ruled out by something else. This must be Condition C: while both structures in (3) can have their anaphors bound by lowering at LF, the structure in (3a) has been previously ruled out by Condition C applying throughout the derivation. It is only in this way that the contrast can be explained. Note crucially that applying Condition C at LF will not explain the difference, because by then the lowering will have already taken place.

The contrast between (3c) and (3d) is similarly explained. Why should (3c) not allow binding while (3d) does? The fact that (3d) does shows that
it is possible to lower the subject of the matrix clause to get into the scope of the quantifier—something we already knew as A-reconstruction. But then why can't the DP he in (3c) similarly lower? It can, at LF; however, the sentence has already been ruled out by Condition C applying before LF.

The above arguments for Condition C applying everywhere are deceptively simple. This is because the bulk of the argumentation has already been taken up by the lemma: that A-reconstruction optionally applies at LF. There is one possible hole in this argument. This is that while A-reconstruction does in general apply freely, it does not apply to definite descriptions: this was suggested as a possibility by a reviewer of this manuscript, citing Diesing 1992. Then he in (1) could not lower at LF, and the sentence would be ruled out by Condition C at LF, with the he not lowered. However, it appears that we already have the bulk of the data necessary to exclude this possibility. That is, it appears that definite descriptions can lower, at least in raising constructions. The basic examples have already been given in other contexts: note that a DP containing an anaphor subject lowers, a DP containing a pronoun subject lowers, and a gerund containing PRO lowers.

(6) a. Each other'si presents are expected t to seem to the boysi t to be better than their own.
    b. Each other'si pieces are expected t to be seen t to please the two artistsi.
    c. Hisi mother tends t to seem t to every mani t to be the best woman on earth.
    d. Heri first piece tends t to seem t to every composeri t to be quite wonderful.
    e. PROi forgetting his lines seems t to have been anticipated t by Jeffi.

Thus in (6a–d), a DP of the form [DP NP] lowers, where the DP corresponds to the genitive. These DPs are certainly definite descriptions. Similarly, in (6e) a gerund lowers. Thus definite descriptions can lower down A-chains. Therefore sentences like (1) cannot be excluded on the grounds that he has not lowered at LF; they must be excluded by Condition C at some pre-LF point.

These results can easily be reconciled with Diesing's. Diesing (1992) dealt with the lowering of definites into the VP, in one-clause structures. The structures here deal with the lowering of a definite element down a
subject-of-IP to subject-of-IP chain. We may assume that the latter sort of lowering is possible while the former is not. In fact, since the above arguments for definite descriptions lowering down IP-subject chains are so powerful, given (6) and the array of arguments in chapter 2 for the general possibility of A-chain lowering, we are forced to adopt this type of position. So we will adopt the following, accommodating Diesing's claim within our own: subject-of-IP to subject-of-IP lowering is possible, but a definite description cannot be lowered into the terminal VP.

Even clearer, given the cases in (6), is the case in (2). If definite descriptions of the type [DP NP] can lower, as in (6a–d), there appears to be no reason why a simple name of the type in (22) in chapter 3 could not lower as well at LF. But if the name John can lower in (2), how could the sentence be excluded, if Binding Condition C applied solely at LF? The conclusion again is that Binding Condition C must apply more than solely at LF.

In general, it appears that the set of categories that lower down A-chains is not restricted by definiteness, and so no general restriction along these lines holds. For example, it cannot be said that the set of strong quantifiers or strong DPs (the man, every man, each man, his mother, and so on; Milsark 1974) do not lower down A-chains. But this means that the examples in (1)–(3) hold, as examples where Condition C has applied throughout the derivation.

I conclude, then, that Condition C applies throughout the derivation. This may appear to be a small conclusion to draw from such a large amount of argumentation, but the point is crucial precisely because it pertains to the question of the architecture of the grammar. If some part of binding theory applies throughout the derivation, rather than at the interfaces, then what are we to think about the general architecture?

Let us review for a moment. I have argued that the construction of the candidate set—equivalently, the marking of the set of positions to which a noun phrase may lower—takes place throughout the derivation. From these, one is chosen at LF, for interpretation. In a sense, then, even the positive conditions of binding theory apply throughout the derivation—in the sense that the candidate set is constructed there. However, more strictly, Condition A applies at LF; the Single Tree Condition at LF forces each constituent (or each part of a constituent) to be in one position; and it requires that there be one coherent interpretation of this representation. This takes place at LF. However, Condition C, a negative condition, applies throughout the derivation, including at LF. The architecture of the grammar is as in (7):
(7)  (DS)
       |    (1) Construction of the candidate set (the set of reconstruction positions)
       |    (2) Condition C
     SS or Spell-Out
       |
     LF: Single Tree Condition on positive conditions, such as Condition A
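The division of labor in (7) can be pictured procedurally. The sketch below is a procedural stand-in of my own (the predicates passed in are expository placeholders for the grammar's actual conditions): negative conditions are checked at every derivational snapshot, the candidate set is accumulated as the derivation proceeds, and the positive conditions are checked once, at LF, on a single surviving position.

```python
def evaluate(snapshots, violates_neg_condition, sites_of, satisfied_at_lf):
    """Illustrative rendering of the architecture in (7).
    snapshots: the successive structures of the derivation (DS ... LF).
    violates_neg_condition: e.g., a Condition C check on one snapshot.
    sites_of: the positions a snapshot contributes to the candidate set.
    satisfied_at_lf: a positive condition (e.g., Condition A), checked
    on the LF structure at one chosen position."""
    candidate_set = set()
    for snapshot in snapshots:
        if violates_neg_condition(snapshot):
            return "*"  # a star assigned anywhere can never be removed
        candidate_set |= sites_of(snapshot)  # built throughout
    lf = snapshots[-1]
    # Single Tree Condition: exactly one candidate position survives,
    # and the positive conditions are read off that position alone.
    return "ok" if any(satisfied_at_lf(lf, pos)
                       for pos in candidate_set) else "*"
```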
This in fact looks more like a type of grammar in which conditions apply homogeneously throughout the derivation than one in which they apply exclusively at the interfaces. Indeed, one may assume that the positive conditions do hold simply at LF, as is stated specifically by the Single Tree Condition. But the negative binding conditions do not: they hold throughout the derivation (as does the construction of the candidate set).

Another perspective on these issues can be found by comparing Chomsky's conjecture, with respect to interface conditions, and Barbara Partee's (1979) notion of the Well-Formedness Constraint.

(8) a. Chomsky's (1995) conjecture: all constraints apply at the interfaces.
    b. Partee (1979), "Montague Grammar and the Well-Formedness Constraint": "The well-formedness constraint: Each syntactic rule operates on well-formed expressions of specified categories to produce a well-formed expression of a specified category."

These positions occupy opposite poles of the theoretical map. Which position, if indeed either, is correct? While Chomsky's position is well known, Partee's position is of interest here since it places well-formedness conditions throughout the derivation. See also Fong (1991). In the preceding argumentation, I have maintained that Chomsky's conjecture cannot be upheld in its strong form because of the interaction of A-reconstruction and Condition C. At least a single negative condition, Condition C, must be stated throughout the derivation. This in turn suggests an additional question: Do other negative conditions, perhaps all negative conditions, apply throughout the derivation? In the following paragraphs, I argue that Condition B can be added to the list.1

Like Condition C, Condition B is a negative condition, and it operates similarly. Let us consider the two sentences (9) and (10):

(9) *Johni believed himi to be expected t to win.
(10) *Hei seems to himi t to be believed t to be expected t to win.
In (9), him has been moved from the lower clause to the upper-clause ECM position. If there is reconstruction down A-chains, as has been argued for extensively in this book, then the LF of (9) would be (11), where him has been lowered:

(11) John believed e to be expected him to win.

This LF output would be expected not to trigger a Condition B violation. Therefore, if there is A-reconstruction, the ungrammaticality of (9) is a puzzle, unless Condition B is stated at some time before reconstruction has applied. This would be solved if Condition B is stated throughout the derivation. Then (9) would be marked as ungrammatical at some previous structure within the derivation (for example, at Spell-Out), and reconstruction could not save the output.

The second sentence, (10), operates similarly. In this case, the movement is into the subject position. Sentence (10) is repeated here:

(12) *Hei seems to himi t to be believed t to be expected t to win.

In this case, assuming the possibility of A-reconstruction, the subject may again be lowered down the tree, past the point where it would trigger a Condition B violation.

(13) e seems to himi e to be believed e to be expected hei to win.

Sentence (13) would be expected to be grammatical. Hence if this were the LF of sentence (12), and if Condition B held only at LF, the ungrammaticality of (12) would be left unexplained. Therefore, the ungrammaticality of sentence (12) provides an argument that Condition B applies elsewhere than just at LF.

The reader might point out that the ungrammaticality of (9) and (10) has been used to argue that, if Condition B applies only at LF, then A-reconstruction does not apply (Chomsky 1995; Lasnik 1999). However, this book has given six arguments to the contrary, showing that in general, A-reconstruction does apply optionally down A-chains. The only conclusion is that the negative binding conditions, Binding Conditions B and C, apply elsewhere than solely at LF.
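The shape of this argument can again be put in miniature. The snapshot encoding below is mine and purely illustrative: it registers, for sentence (9), whether him is locally c-commanded by a coindexed DP at each stage, and compares a Condition B applied everywhere with one applied only at LF.

```python
# Stages of (9): at Spell-Out, 'John believed him ...' has 'him' locally
# bound; at LF, A-reconstruction (11) has lowered 'him' out of danger.
stages_of_9 = [
    {"stage": "Spell-Out", "him_locally_bound": True},
    {"stage": "LF",        "him_locally_bound": False},
]

def condition_b_everywhere(stages):
    # Negative condition: no stage of the derivation may violate it.
    return "*" if any(s["him_locally_bound"] for s in stages) else "ok"

def condition_b_at_lf_only(stages):
    return "*" if stages[-1]["him_locally_bound"] else "ok"

print(condition_b_everywhere(stages_of_9))  # '*'  -- matches the facts
print(condition_b_at_lf_only(stages_of_9))  # 'ok' -- wrongly rules (9) in
```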
5
The Structure of the Reconstruction Data (A Hole in Condition C)
I have so far considered reconstruction only in terms of A-reconstruction, but the set of data traditionally considered under the rubric of reconstruction reflects A′-reconstruction, and it is these sentences, indeed, that are generally acknowledged to undergo reconstruction. Let us therefore step back a little, and consider the A′-data. First, as is well known, an anaphor that is part of a DP may take its antecedent either in its surface position or in any of its trace sites, including the base one. The various positions where the anaphor may be bound are shown in (1).

(1) a. Johni wondered which picture of himselfi Bill said t that Steve liked t?
    b. John wondered which picture of himselfi Billi said t that Steve liked t?
    c. John wondered which picture of himselfi Bill said t that Stevei liked t?

Given the data in (1), the anaphor could either be bound throughout the derivation, or reconstructed to a single spot at LF (or its copy-and-erasure equivalent) and bound from there. Given my reliance on the Single Tree Condition, I choose the second of these possibilities (as does Chomsky 1995). For example, for (1b), this would involve reconstruction, of an element or part of one, to the intermediate COMP position:

(2) Binding structure of (1b)
    John wondered [which ___] Billi said [picture of himselfi] that Steve liked t
    (the restrictor picture of himselfi is placed in the intermediate COMP)
The bound pronoun data operate similarly to the anaphoric binding data. For example, if the moved element contains a bound pronoun, it may be bound to a quantifier that it has "jumped over" at s-structure/Spell-Out.
This suggests that it is being interpreted from a lower position, as in (3):

(3) Which of hisi parents did Freud say t that every mani loved t best? (The one of the opposite sex)

Here, the pronoun is being interpreted as if it were in the trace site. Therefore, for these two positive conditions, for A′-binding, it appears that any of the trace sites may act as a possible place of reconstruction.

(4) Condition on reconstruction
    An element may freely reconstruct to any position from which it was moved.

Consider now the negative Binding Condition C. A name may trigger a binding theory violation from its surface position in an A′-chain, but it may also trigger a violation if the constituent containing it is "placed back" in its trace position and from there would trigger a Condition C violation, as long as it is not part of an adjunct (Lebeaux 1988, 1991).

(5) a. *Hei wondered which pictures of Johni Steve liked t.
    b. ?*Which picture of Johni did hei like t?
    c. ?*Whose destruction of Johni did hei fear t?
    d. ?*Which claim that Johni liked Mary did hei deny?
What do these data mean? The simplest explanation is the following. The positive conditions, Condition A and bound pronoun binding (as well as idiom interpretation from above), can existentially take any of their trace sites—and exactly one such site, according to the Single Tree Condition—as their site at LF. From there, they may be bound. They may be satisfied anywhere in the chain. The negative conditions, on the other hand, may not be violated anywhere—that is, no point on the chain violating a negative condition can exist. This is what was shown earlier, and it is the simplest encoding of the data. The negative condition, which may be violated nowhere, is in essence a negative existential over the derivation, ¬∃. If it is violated anywhere, then a star is assigned that cannot be removed. It is in this sense that it is a homogeneous condition on the derivation.

We have so far considered two positive conditions—anaphoric binding and bound pronouns—over A′-chains, as well as one negative condition. All may be stated as existentials or negative existentials (in the case of the negative condition) over the chain. Let us now consider A-chains. The positive Condition A and bound pronouns have already been considered. The DP may lower to any of the trace sites to pick up anaphoric
or quantificational binding. The examples for anaphoric binding are given in (6); the examples for quantificational binding, where the moved element contains a bound pronoun, are given in (7).

(6) a. Surface site
       The boys believed each other to seem t to be expected t to win.
    b. Trace sites
       Each other's parents seem to the boys t1 to be expected t2 to win. (Lowered to t1 at LF)
    c. Each other's parents are expected t1 to seem to the boys t2 to win. (Lowered to t2 at LF)

(7) a. Hisi mother seems t to please every man. (Lowered to t at LF)
    b. Hisi mother tends t to seem to every man t to be quite wonderful. (Lowered to t at LF)

The examples in (6) show that A-reconstruction may apply (optionally) to any of the A-sites, not simply to the head or tail of the chain. This point warrants emphasis.

(8) A-reconstruction
    A-reconstruction may apply to any of the A-trace sites.

A-chains therefore existentially provide for binding to take place at any point in the chain. It begins to look, then, as if the entire paradigm will be pure, and negative conditions for A-chains will be negative existentials—that is, they will not be allowed at any place in the A-chain either. In fact, however, the paradigm has a startling hole in it. The chart for binding and reconstruction, with respect to A-chains, does not fall into line: not every place in an A-chain triggers a Condition C violation. The general chart is given in (9).1

(9) Binding and reconstruction with respect to A- and A′-chains
    ("Yes" denotes: positive conditions existentially allow binding; negative conditions negative-existentially disallow coreference.)
                          Condition A    Bound Pronoun    Condition C
        Surface position  yes (i)        yes (ii)         yes (iii)
        Down A′-chain     yes (iv)       yes (v)          yes (vi)
        Down A-chain      yes (vii)      yes (viii)       no (ix)
The ninth quadrant is the hole in Binding Condition C. This is the only negative entry in the chart. It encompasses sentences like the following (again, obvious sentences, but ones that compromise the pattern):
(10) a. Johni seems to himselfi t to like cheese.
     b. John'si mother seems to himi t to be wonderful.

Such sentences are of course perfectly grammatical. The puzzle is: they do not follow the general pattern, which is to existentially allow any binding into trace sites for A′- and A-chains, and to existentially deny any referential terms that trigger a Condition C violation there. The base structures of the sentences in (10) would of course be those in (11), and these would be expected to be ungrammatical, as are the tensed sentences beneath them:

(11) a. *e seems to himselfi Johni to like cheese.
        *It seems to himi that Johni likes cheese.
     b. *e seems to himi John'si mother to be wonderful.
        *It seems to himi that John'si mother is wonderful.

I now discuss the sentences in (9), explicating the yes's and no's.

Row 1
(i) John believes himself to be expected t to win.
    Himself triggers Condition A from its surface position. Hence the positive existential; hence the yes.
(ii) Every man believes his mother to be expected t to win.
    His gets quantificationally bound from its surface position. Hence the positive existential; hence the yes.
(iii) *Hei believes Johni to win.
    John triggers Condition C from its surface position. Hence the existence of a negative existential, a star. Hence the yes.

Row 2
(iv) Which pictures of himselfi does Johni like t?
    Himself triggers Condition A from its base position. Hence the positive existential; hence the yes.
(v) Which one of his pictures do you think that every artist likes t best?
    His triggers quantificational binding from its base position. Hence the positive existential; hence the yes.
(vi) *Which pictures of Johni does hei like t?
    John triggers Condition C from its base position. Hence the negative existential; hence the yes.

Row 3
(vii) Each other's parents are expected t to seem to the boys t to be quite wonderful.
    Each other's triggers Condition A from its base position. Hence the positive existential; hence the yes.
(viii) Pictures of hisi father in his youth are known t to seem to every mani t to be quite wonderful.
    His triggers quantificational binding from its base position. Hence the positive existential; hence the yes.
(ix) Johni seems to himselfi t to like cheese.
    John unexpectedly does not trigger Condition C from its base position. Hence the negative existential (for negative conditions) does not hold. Hence the no in the chart.

Sentence (ix) here contradicts an otherwise systematic relationship. So, having shown the general possibility of A-reconstruction, and the general pattern of the data, there is a hole in Condition C when it comes to A-chains. That is, while the general pattern of the data has an anaphor bound from any of its trace sites (Condition A), and has a trace linked to a DP containing a name triggering Condition C from any of its trace sites, an important exception appears in simple sentences like (12), and in the chart in (9).

(12) a. Johni seems to himselfi t to like cheese.
        Premovement structure: e seems to himself John to like cheese.
     b. Pictures of Johni seem to himi t to be great.
        Premovement structure: e seem to him pictures of John to be great.

How then might one proceed? One way, which I reject, is the following. Instead of the maximally simple existential statements such as those given in (9), a complex set of statements of reconstruction would be given depending on the type of chain, what is contained in the chain, and so on. For example, for A′-chains one could give the following:

(13) a. Reconstruct an A′-element to any of the trace sites if it contains an anaphor.
     b. Reconstruct an A′-element to the lowest of the trace sites if it contains a name.

This will indeed get the right empirical result, but at the cost of great clumsiness, and, more important, an unmotivated distinction in chains depending on their content. So I reject this solution. Surely a more attractive solution would be welcome.

How, then, can we get the result—in particular, the hole in Condition C shown by the chart in (9)? Factually, we may describe the situation as
follows. The element that is moved up an A-chain may act, from the point of view of Condition C, as if it were not there. In other words, rather than restricting Condition C from acting on A-chains, let us allow Condition C to act unhampered throughout the derivation. We would allow the element itself—that is, the lexical content of the element—to be inserted at any point before, during, or immediately after A-to-A movement has occurred, but not past a point where A′-movement has occurred. This means that before full lexical insertion has occurred, a null category (let us call it pro, but without implying the full set of properties of the standard little pro) has moved up the tree and been coindexed. Lexical insertion covering this pro may take place at any time, with the proviso that it must occur before the element is assigned Case—that is, lexical insertion is staggered and intermeshed with movement. In particular, this allows a derivation in which, once the element is past the "danger" point in the derivation, when it would have triggered Condition C, the lexical material can be inserted, covering up the pro. The concept, then, is to escape Condition C by allowing a particular sort of late lexical insertion, as suggested in work by McCawley many years ago (McCawley 1968). This allows the binding conditions to be stated in their maximally simple form, as negative and positive existentials. The negative condition(s) apply throughout the derivation. Note that the pro and the material covering it up will have the same index—that is, reference—in the semantic sphere.

(14) Staggered lexical insertion
     (Recall that "pro" is not identical to the usual little pro, but is just a null nominal element, with φ-features and an index.)
     a. pro is initially generated in every DP structure.
     b. pro itself may be moved by A-movement, giving rise to coindexing.
     c. pro may at any time be overlaid with a full DP category.
     d. pro may not be moved by A′-movement.
     (Overlay is used here in a different sense than in Burzio 2000.)

We may call the initial structures with "pro" schematic structures, after the indexed structures of Montague (1974). Of course, many interesting questions arise about them and about the constraints holding over them. This view of lexical insertion, which I will defend shortly, allows the hole in Binding Condition C to be explained, without giving up the maximally simple existential statement of the binding conditions. I will demonstrate how (14) would work, together with the chart in (9) now all having yes's.
First consider A-chains. Lexical insertion, in other words, lexical overlay of the moved element, may take place at any point before, during, or at the point of closure of the movement (i.e., A-movement, not A′-movement). By an extension of Chomsky's (1995) use of the cycle, lexical insertion—the overlaying of a pro with a full nominal category—may take place only on the category just moved. It may not "swoop back" down into the tree to effect lexical insertion on pro elements generated earlier. (I will return to this below.) Therefore, pro elements must either be immediately covered over (by a full DP), or moved as pro. Other than that, all reconstruction/binding occurs as before, and the full existential range of options is open (for the positive conditions), and barred (for the negative ones). The range of reconstruction, for a DP, constitutes precisely the positions that the DP existed at: the set of overlay positions. To illustrate, let us turn to some derivations. The derivation of (15) is (16):

(15) Each other's parents seem to the boys t to be quite wonderful.
(16) a. pro seem to the boys pro to be quite wonderful. Insert DP →
     b. pro seem to the boys each other's parents to be quite wonderful. Move →
     c. each other's parents seem to the boys t to be quite wonderful.
        (places where reconstruction may occur)
     d. LF: e seem to the boys each other's parents to be quite wonderful.

The derivation of (15) is not much different from what would occur without the proposal on lexical insertion (14) just given. In (16), the construction of the candidate set involves the full set of positions that have been overlaid. In this case, this is the full set of trace positions, because lexical insertion has applied before any movement has occurred. Since the candidate set is the full set of positions, reconstruction may take place to any of those positions, and (16d) is generated. The situation does not change substantively if we use Chomsky's (1995) copy-and-erasure procedure instead. In this case the derivation would look like this:

(17) a. pro seem to the boys pro to be quite wonderful. Insert DP →
     b. pro seem to the boys each other's parents to be quite wonderful. Copy →
     c. each other's parents seem to the boys each other's parents to be quite wonderful. Choose to erase first copy (LF) →
     d. e seem to the boys each other's parents to be quite wonderful.

Let us now consider a second derivation with A-chains. In this case, unlike that directly above, the new overlay mechanism of lexical insertion into schematic structures does indeed play a central role. These are the crucial sentences:

(18) a. Johni seems to himselfi t to like cheese.
     b. The pictures of Johni seem to himi t to be quite wonderful.

The problem posed by sentences like (18a) and (18b) is that we would expect them to be ungrammatical if Condition C applies throughout the derivation. They are, however, grammatical—recall that this is the hole in Condition C. Consider, however, the following derivation, making use of late lexical insertion, or overlay.

(19) a. e seems to himself pro to like cheese. A-movement →
     b. proi seems to himself ti to like cheese. Lexical Insertion →
     c. Johni seems to himself ti to like cheese. Bind →
     d. Johni seems to himselfi ti to like cheese.
The derivation for (18b) proceeds similarly.

(20) a. e seem to him pro to be quite wonderful. A-movement →
     b. proi seem to him ti to be quite wonderful. Lexical Insertion →
     c. (The pictures of John)i seem to him ti to be quite wonderful.

What has happened is that the nominal element including the name—that is, the moved element—is only a pro in its base position. It may now optionally move up by A-movement, still remaining a pro. At any point, lexical insertion may take place, as long as it is before A′-movement. This implies that lexical insertion can take place after A-movement has occurred—and after the "danger zone" has been passed. Thus A-chains escape Condition C violations. Derivations (19) and (20) are thus the key to showing how sentences like (21) can avoid a Condition C violation:

(21) John seems to himself to like cheese.

The situation is logically therefore much like that discussed in Radford 1990 for language acquisition. Radford notes that in early stages, children produce nominals without Case. How then can their production be
grammatical, if the Case Filter or its equivalent exists? One possibility is that the Case Filter does not exist early on. But Radford proposes a more interesting alternative: the Case Filter exists, but the early child nominals are NPs and not DPs, and the Case Filter applies to DPs. Therefore, the Case Filter does apply in the early grammar, and throughout, but it applies vacuously (has no elements to apply to) in the early grammar, because only NPs, not DPs, exist at that point. The situation here, within the derivation, is logically equivalent to Radford's: a module applies continuously throughout, but vacuously over certain structures. The derivations in (19) and (20), then, provide a crucial basis for avoiding a Condition C violation in sentences like (21): A-chains start with pro in their premovement structure (and hence Condition C applies vacuously).

Let us now consider the derivations for A′-chains, beginning with those that are anaphorically bound in the trace site. These have lexical insertion prior to movement, allowing the trace site into the candidate set, so reconstruction (or its copy-and-erasure equivalent) may apply there.
(22) a. Bill said that Johni likes pro. Lexical insertion →
     b. Bill said that Johni likes which pictures of himself. Bind →
     c. Bill said that Johni likes which picture of himselfi. Move →
     d. Bill said that which picture of himselfi Johni likes t. Move →
     e. Which picture of himselfi did Bill say t that Johni likes t?
        (candidate set for reconstruction)

From (22), reconstruction can apply. Note that what has occurred here is early lexical insertion, so that the base position may be included in the candidate set for reconstruction.

What if the wh-phrase contains a name? Here the only derivation possible is that given in (24), where the lexical insertion has applied early, and the structure is ruled out by Condition C applying throughout the derivation.

(23) *Which pictures of Johni does hei like t?

(24) Derivation
     a. hei likes pro. Lexical Insertion →
     b. hei likes which pictures of Johni. Condition C →
     c. *hei likes which pictures of Johni. Move →
     d. *Which pictures of Johni does hei like t?

Here, Condition C, applying throughout the derivation, has assigned a star that cannot be removed.
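The derivations just given can be summarized by enumerating overlay schedules. The sketch below is a toy of my own devising: a sentence is ruled in iff some ordering of overlay and movement both respects (14d) and never exposes the name to Condition C.

```python
def some_derivation_converges(movements, danger_steps):
    """movements: the ordered movement steps ('a' or 'a-bar').
    danger_steps: steps whose premovement configuration would place an
    overt name under a coindexed c-commanding pronoun (Condition C).
    Expository encoding only."""
    for overlay_at in range(len(movements) + 1):  # overlay may precede
        ok = True                                 # any step, or follow all
        for step, kind in enumerate(movements):
            overt = overlay_at <= step
            if kind == "a-bar" and not overt:
                ok = False  # (14d): pro may not be A-bar-moved
            if overt and step in danger_steps:
                ok = False  # Condition C star, never removable
        if ok:
            return True
    return False

# (21) 'John seems to himself t to like cheese': the danger point
# precedes the single A-step, so overlay after movement survives.
print(some_derivation_converges(["a"], danger_steps={0}))      # True
# (23) '*Which pictures of John does he like t': the only step is
# A-bar movement, so overlay must precede it, inside the danger zone.
print(some_derivation_converges(["a-bar"], danger_steps={0}))  # False
```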
Note that there is no derivation of (23) comparable to the one in (19), which avoids the Condition C violation through late lexical insertion. This is because, as I have stated, lexical insertion must apply before A′-movement (14d). The derivation that is ruled out is the following:

(25) *Ruled-out derivation
     a. Hei likes pro. Move →
     b. pro does hei like t? Insert →
     c. Which pictures of Johni does hei like t?

The difference between A- and A′-chains is crucial, and I will return to it in the rest of the book. In all the sentences above, I have been focusing on the moved element, and on how lexical insertion or overlay applies to it. However, the question obviously arises of what occurs with the other noun phrases in the sentence. For example, all of these may start out as pro's, and only later have their full lexical content overlaid, according to (14). That is, the premovement structure of (26) could be (27). (Attempting to deal with one complexity at a time, I put aside for now the idea of building the tree upward; see Vainikka 1989, Chomsky 1993, 1995, and work in the tradition of Montague Grammar.) Consider a surface structure like (26), which could have a base or schematic structure as simple as (27):

(26) Which picture of John did he like?
(27) proi like prok.

Let us quickly investigate some syntactic ramifications of introducing a pro at all positions. First, let us assume that pro has approximately the properties of a pronoun—that is, φ-features and an index, but, at the beginning of a derivation, no Case. Let us also first make the assumption that the overlaying of pro by an element with lexical content may freely apply at any time in the derivation prior to A′-movement. Finally, let us assume provisionally that the entire structure is present at DS, as in classic Government-Binding Theory; this assumption will be revised later. Then a sentence like (28a) would have a DS2 like (28b), and note that this would not immediately trigger a Condition C violation at DS, but only when a name had been overlaid on the second pro, as in (28bii). (Assume that Mary is present throughout the entire derivation, because that is irrelevant for the moment.)
(28) a. Output sentence
        *Hei said that Mary liked Johni.
     b. Derivation
        i. proi said that Mary liked proi (this is the DS, and is grammatical) Overlay John →
        ii. *proi said that Mary liked Johni (the second pro now overlaid by Johni; ungrammatical after insertion, because it triggers Condition C)
        iii. *proi said that Mary likes Johni.

As mentioned earlier, sentences like (29a) would only have a derivation like that in (29b), where overlaying occurs before movement, because by assumption pro cannot be moved by A′-movement. (This assumption will soon be grounded.)

(29) a. Output sentence
        *Which pictures of Johni does hei like t?
     b. Derivation
        i. proi likes proj. Insert →
        ii. *proi likes (which pictures of Johni) (triggers Condition C, which movement cannot save) Move →
        iii. *Which pictures of Johni does hei like t?

Note that the pro object overlaid with which pictures of John still triggers a Condition C violation. Two complexities arise at this point. First, in a tricky complication, the little schematic pro suggested here must also be allowed to be an anaphor, in the sense of sharing an index with other elements in the tree. Otherwise, a sentence like (30a) would obligatorily start out as (30bi), and would immediately trigger a Condition B violation (assuming pro is treated exclusively as a pronoun), before the anaphor could overlay the second pro.

(30) a. Output sentence
        *Johni likes himselfi. (Wrongly marked ungrammatical)
     b. Derivation
        i. *proi likes proi (Violates Condition B)
        ii. Insert anaphor and name →
        iii. *Johni likes himselfi. (Wrongly marked ungrammatical)
So pro must be allowed to begin as an anaphor, as well as a pronoun. That is, it is neutral between pronounhood and anaphorhood. The following holds ((31c) is the stipulation about anaphorhood and pronounhood):

(31) a. pro has φ-features.
     b. pro has no Case (initially).
     c. pro may be either coindexed or contraindexed with any other element on the tree, subject to the binding conditions.

Second, some constraint on the order of lexical insertion must apparently be stated. This is shown by examples like (32):

(32) *Which pictures of Johni does hei like t?

Now consider the following derivation, in which the basic structure of the wh-noun phrase is inserted early, to allow for movement, but the particular word John is inserted late.

(33) Derivation
     a. proi likes proj. Insert both DPs, leaving John out →
     b. hei likes which pictures of proi. Move wh- →
     c. which pictures of proi does hei like t. Insert Johni →
     d. Which pictures of Johni does hei like t?

The sentence in (33d) has wrongly escaped a Condition C violation, through the late overlaying of John, after movement has taken place. That is, in this derivation, the overlaying of John, but not of the entire noun phrase, has been delayed until after movement. Instead, the position of John has been filled with pro. Because of this, it does not trigger a Condition C violation prior to A′-movement. This is despite the fact that wh-movement of a pure pro (in (25)) has already been barred—this constraint will not help, because here the pro is just part of a wh-phrase. I will briefly suggest a way that this may be solved. A solution would be to modulate the ordering of the overlaying of the pro's in the schematic structure, so that it is in some sense cyclic, or bottom-up (Chomsky 1965, 1977b, 1995). This is also reminiscent of recent work on phases. How could this be done? Here is a possible solution:

(34) Cyclic overlaying of pro's (building bottom up, as in Chomsky 1995)
     a. Insert full DPs, overlaying pro's, at the cyclic nodes CP, vP, and DP, with the proviso that the insertion site may not be dominated by another cyclic category at that point. That is, once a cyclic node has been passed, insertion into its contents is impossible.
     b. However, insertion of the full DP in the specifier of a category is possible, if one is currently on the projection dominating that specifier, as in phases.

The cyclic overlaying of pro's would allow for the following possibilities:

(35) a. When on the IP, building bottom up, one can insert into the SPEC of the IP node—that is, the subject. This is necessary to get the right result for structures discussed in detail earlier—for example, those of the form "John seems to himself t to like cheese."
     b. When on the DP node, one can insert into the SPEC of the DP node, or the full DP node itself, but not into a DP that is the object of a preposition within that DP.

It is important to note that the above strictures are purposely written so that they do not account for the contrast between A- and A′-movement—that is, for the fact that a pro may move by A-movement, but not A′-movement. I believe this to follow in a principled way. It will follow from the general theory of schematic structures, to be renamed Case frames. This will be discussed in chapter 7.

Recall, finally, the reason for the introduction of pro and the schematic or indexed structure. By allowing for the introduction of pro, movement of pro, and the late lexical insertion of nouns, I was able to state binding theory in the simplest possible terms, as a positive existential and a negative existential over the trace sites.

(36) a. Binding Condition A (positive condition): ∃x, from among the candidate set, where binding is possible.
     b. Binding Condition C (negative condition): ¬∃x, from the set of structures in the derivation, where Condition C holds.
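For concreteness, (36) can be written out in quasi-logical notation. The rendering below is mine, not the book's formal apparatus; C(DP) stands for the candidate set of a DP, and D for the set of structures arising in the derivation.

```latex
% Quasi-logical rendering of (36); my notation, assuming C(DP) and D
% as defined in the lead-in above.
\begin{align*}
\text{Condition A (positive):}\quad
  & \exists x \in C(\mathrm{DP}):\ \mathrm{bound}(\mathrm{anaphor}, x)
    \quad \text{(evaluated once, at LF)}\\
\text{Condition C (negative):}\quad
  & \neg \exists S \in D:\ S \ \text{violates Condition C}
\end{align*}
```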
6
Two Interesting Constructions
The foregoing showed that binding theory could be stated in the simplest form, as negative and positive existentials over the derivation. More exactly, the negative Condition C was a negative existential over the derivation, while the positive Condition A constructed a candidate set that was a positive existential over the derivation. One member of the candidate set was then chosen, at LF, by the Single Tree Condition. However, it was noted that a hole in Binding Condition C interfered with the maximally simple outward appearance of the binding facts ((9) in chapter 5); this hole was then accounted for by the introduction of pro, and by the late introduction of the name, in the case of A-chains but not A′-chains, to account for the contrast in (1):

(1) a. *Which pictures of Johni does hei like t?
    b. Johni seems to himselfi t to like cheese.

That is, the derivation in (1b) differed from the one in (1a): in (1a) the full DP which pictures of John had to be inserted before movement, while in (1b) it did not have to be—that is, a derivation existed for (1b) that did not exist for (1a). I return in the next section to attempt to motivate this stipulation. For the moment, however, I would like to look at two very interesting constructions, which show the interaction of A- and A′-movement, the Single Tree Condition, and the binding conditions.

The first of these constructions is discussed in Lebeaux 1991 with respect to a general discussion of adding adjuncts late (Lebeaux 1988, 1991). The facts are well known—the so-called antireconstruction effects of Van Riemsdijk and Williams 1981 as well as Freidin 1986. The observation is that while a name contained in the complement of a fronted noun phrase is ungrammatical with a coreferent pronoun, a name that is part of a fronted adjunct is grammatical. Thus the contrast in (2) and (3):
(2) a. ?*Whose claim that Johni liked Mary did hei deny t?
    b. Which picture that Johni took did hei like t?
(3) a. ?*Which picture of Johni did hei like t?
    b. Which picture near Johni did hei like t?

For full discussion, see Lebeaux 1991, where I argue that Condition C applies throughout the derivation, and that relative clauses and adjuncts in general may be added at any time. Therefore (2b) and (3b) escape a Condition C violation through the following derivation:

(4) Derivation
    a. hei likes which pictures.
    b. which pictures does hei like t?
    c. Which pictures near Johni does hei like t?

At no time is John c-commanded by he. A more complex set of constructions is even more interesting from the point of view of the Single Tree Condition and the interaction of the binding constraints. These are constructions in which the added relative clause or adjunct contains both a pronoun to be bound by a quantifier and a name to be coreferent with a pronoun. The following examples are from Lebeaux 1991:

(5) a. Which paper that hei gave to Bresnank did every studenti think t that shek would like t? (With he bound to every student; she coreferent with Bresnan)
    b. *Which paper that hei gave to Bresnank did shek think t that every studenti would like t? (With he bound to every student; she coreferent with Bresnan)
(6) a. Which part that hei played with Madonnak did every aspiring actori wish t that shek would support t? (With he bound by every aspiring actor; she coreferent with Madonna)
    b. *Which part that hei played with Madonnak did shek think t that every aspiring actori had failed at t? (With he bound by every aspiring actor; she coreferent with Madonna)
First, note that in the (a) sentences, a place of coherence exists for the relative clause, namely, the intermediate COMP, where readings are possible, so the sentences are ruled grammatical. No such place exists for the (b) sentences, so the sentences are ungrammatical. Looking at the structure of the data, we see that in (5a) there is a relative clause containing
two DPs: he, to be bound by every student, and Bresnan, to be coreferent with she. There is also a place at LF where the pronoun can "catch" the binding of the quantifier, namely, the intermediate COMP. Second, if the relative clause were added when which paper had just been moved to the intermediate COMP, then the name Bresnan would not trigger a Condition C violation with the conjoint she, because at that point it would not be c-commanded by she. Therefore, the following derivation exists that allows (5a) in:

(7) a. every student thinks that she would like which paper. Move →
    b. every student thinks which paper that she would like t. Adjoin →
    c. every student thinks which paper that he gave to Bresnank that shek would like. Move →
    d. Which paper that hei gave to Bresnank did every studenti think t that shek would like t? Reconstruct (or copy-and-erasure equivalent) →
    e. Which paper did every studenti think (that hei gave to Bresnank) that shek would like t? (LF representation; relative clause is in intermediate COMP)

In the structure in (7e), in the LF representation, the relative clause is in the intermediate COMP. This allows he to be bound by every student, while she is coreferent with Bresnan. For this derivation to go through, it was necessary that the relative clause be added late, to escape a Condition C violation (between she and Bresnan). It was also necessary that a structural place exist in the tree where the pronoun could be bound by the quantifier, without the name triggering a Condition C violation.

In the (5b) sentence, no such place exists. For the pronoun to be bound by the quantifier, the relative clause containing the pronoun would have to be in the original trace site at LF. However, this would mean that the name in the relative clause would trigger a Condition C violation with the pronoun in the matrix clause (at LF). Thus the intricate pattern of data in the (5) sentences, with every student and she allowed in one order but not the other, is predicted (Lebeaux 1991; Fox 2000).

The same holds for the sentences in (6). In (6a), a place exists in the intermediate COMP for the relative clause both to catch the binding from the quantifier higher up in the tree, and for the name in the relative clause (Madonna) to be coreferent with the pronoun lower down in the tree. Hence the sentence is grammatical. No such place exists in the (6b) sentence. To
catch the quantifier binding from the DP every aspiring actor, the relative clause has to be in the lowest trace site. However, this would trigger a Condition C violation. The sentence is ungrammatical. Given the complexity of these sentences, it is remarkable that their difference in grammaticality, under the readings noted, is predicted by a theory such as that of Lebeaux 1991. Note, finally, how delicately the Single Tree Condition operates in these constructions, and how it is supported. If one had, instead of the Single Tree Condition, a proviso like (8), then the contrast would not hold:
(8) A quantifier or pronoun trace-commands a pronoun or name respectively iff it c-commands it, or its trace.
Both the quantifier every student and the pronoun she in the main sentence (i.e., not the relative clause) equally trace-command the pronoun and name in the relative clause in the two examples (5a) and (5b). Therefore, no contrast would be expected. The fact that the contrast does occur once again buttresses the Single Tree Condition. The above constructions showed not only the Single Tree Condition operating, but also, given Condition C operating throughout the derivation, the late addition of adjuncts. For if there were no late adjunction of adjuncts, the structure in (5a), repeated here, would violate Condition C immediately at the premovement structure (between she and Bresnan).
(9) (5a), premovement structure, if no late insertion of adjuncts
*Every studenti thinks that shek would like which paper that hei gave to Bresnank.
The above structures, (5) and (6), show one very interesting class of interactions between Condition C, late adjunction of adjuncts, quantificational binding, and the Single Tree Condition. What is particularly interesting is that a similar class of examples can be constructed for A-chains. Recall that we are allowing for the late insertion (overlaying) of elements in A-chains to avoid Condition C violations in examples like (10):
(10) Johni seems to himselfi t to like cheese.
First recall a central argument of this book, that elements in A-chains may not only reconstruct to their base positions, but that they may existentially reconstruct to any one of the trace positions (optionally). This is shown by examples like (11):
(11) a. Each other's parents tend t1 to seem to the boys t2 to be expected t3 to win.
b. Each other's parents tend t1 to be expected by the boys t2 to be believed t3 to have been Underground Freedom Fighters in World War II.
In each case, the DP reconstructs to the intermediate t2 site. For many other examples of intermediate reconstruction in A-chains, see the earlier discussion. As noted earlier, we may summarize this as (12):
(12) Reconstruction down A-chains
Reconstruction may occur to any A-trace site.
We have, however, existentially stated Condition C over the derivation, applying at every point. But Condition C can be avoided by late insertion (overlaying) of the DP in the chain as in (10). We are now able to construct a set of examples with a striking similarity to (5) and (6). That is, if there is a name to be coreferent with, and a quantifier to be bound to in the rest of the clause, then the order in which these appear in the construction of the sentence matters. Examples are given here:
(13) a. (Hisi mother's)k bread seems to every mani t to be known by herk t to be the best there is.
b. ?*(Hisi mother's)k bread seems to herk t to be known by every mani t to be the best there is.
(14) a. Hisi picture of the presidentk seemed to every mani t to be seen by himk t to be a real intrusion.
b. ?*Hisi picture of the presidentk seemed to himk t to be seen by every mani t to be a real accomplishment.
(15) a. The president'si discussion with himk seemed to every mank t to be seen by himi t to be very beneficial.
b. ?*The president'si discussion with himk seemed to himi t to be seen by every mank t to be quite an achievement.
These sentences are quite complex, but the judgments seem fairly sharp. The order of the quantifier and the pronoun, in the unmoved part of the sentence, is significant. Consider why this would work, for (13). The relevant reconstruction site is the middle position, for (13a). The LF is given in (16):
(16) LF of (13a)
e seems to every mani (hisi mother's)k bread to be known by herk t to be the best there is.
His mother's bread has been lowered here to the intermediate site. Note that from here it can "catch" the quantificational binding of every man. The late insertion of DPs—if one assumes that Condition C applies throughout the derivation—is also necessary. For otherwise the premovement structure of (16) would be (17), and it would immediately trigger a Condition C violation, between her and his mother.
(17) *e seems to every mani e to be known by herk (hisi mother's)k bread to be the best there is.
The generalization is that (a) one may reconstruct the moved DP to any position where the DP has been overlaid, and (b) all positions where the overlay has occurred are subject to Condition C. The contrast between (13a) and (13b) is also explained. For (13b) to catch the quantificational binding at LF, the DP his mother's bread would have to be lowered to the original site (not the intermediate site), to be the subject of to be the best there is.
(18) LF of (13b)
*e seems to herk e to be known by every mani (hisi mother's)k bread to be the best there is.
However, while his correctly captures the quantificational binding of every man in this structure, his mother incorrectly triggers a Condition C violation with respect to her. Therefore this sentence is ruled ungrammatical (under the bound reading). The sentences in (14) and (15) operate identically. In each case, the (a) sentence allows reconstruction to the intermediate site, catching the quantificational binding and avoiding the Condition C violation because of late lexical insertion. In the (b) sentence, in order to catch the quantificational binding, the DP must be inserted early at the base site. However, this triggers a Condition C violation. This explains the ungrammaticality (under the bound reading) of these sentences. How does this type of sentence, where a DP is reconstructed to an intermediate trace site, fit in with the overlay proposal made earlier? It works as follows: the DP pro is moved up the tree successive-cyclically. At some intermediate point, it is lexically overlaid with a full DP.
(19) His mother's bread seems to every man t to be known by her t to be the best there is.
The premovement structure of this is the following:
(20) e1 seems to every man e2 to be known by her pro3 to be the best there is.
The DP pro is moved up the tree, and may be overlaid by his mother's bread at any point. In particular, it may be overlaid at the e2 spot. It will thus escape Condition C applying throughout the derivation.
(21) e1 seems to every man (his mother's bread)2 to be known by her t3 to be the best there is.
The overlay is then moved to the top node, leaving two overlay positions in the final tree. These are marked with the subscripts 1 and 2 (but not 3).
(22) His mother's bread1 seems to every man t2 to be known by her t3 to be the best there is.
Reconstruction of his mother's bread may apply at LF to any of the overlay positions. These are the surface position itself and the t2 position. Thus reconstruction may apply to t2, to give the correct structure in (23):
(23) e1 seems to every man (his mother's bread)2 to be known by her t3 to be the best there is.
(e1 and (his mother's bread)2 are overlay positions; t3 is not an overlay position)
Note that the resulting structure correctly catches the binding, because the overlay position t2 may be reconstructed to. On the other hand, the sentence never triggers Condition C, applying throughout the derivation, because the base position is instead pro, the preoverlay element, and only has his mother's bread inserted after it has moved past her in the tree. Examples (14) and (15) operate similarly. There are thus two rather remarkable sets of structures, which show the interaction of quantificational binding, Condition C, the Single Tree Condition, and the late adding of adjuncts and of DPs in DP-chains. These were shown in (5)–(6) and (13)–(15).
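The derivational logic just rehearsed can be summarized compactly. What follows is a minimal computational sketch, my own rendering for concreteness rather than a formalism proposed in this book: the negative condition (Condition C) is checked at every derivational step, while the positive requirement (catching the quantificational binding) only needs some overlay position to qualify at LF. All function names and position labels are illustrative assumptions.

    def condition_c_ok(step):
        # step: the set of (pronoun, name) pairs in which a pronoun
        # c-commands a coreferent name at this point of the derivation
        return len(step) == 0

    def derivation_ok(steps, overlay_positions, binding_positions):
        # Negative condition: Condition C must hold at every step.
        if not all(condition_c_ok(s) for s in steps):
            return False
        # Positive condition: at LF, some position where the full DP was
        # overlaid must also be one from which the quantifier binds the pronoun.
        return any(p in binding_positions for p in overlay_positions)

    # (13a): the DP is overlaid at the intermediate site t2, where the
    # binding of "every man" is caught; no step offends Condition C.
    print(derivation_ok([set(), set(), set()], {"t2", "surface"}, {"t2"}))  # True

    # (13b): the binding could only be caught at the base site t3, but
    # inserting the DP there puts "his mother" under "her" at that step.
    print(derivation_ok([{("her", "his mother")}], {"t3"}, {"t3"}))  # False

On this way of putting it, the (a)/(b) contrasts in (13)–(15) reduce to whether the set of overlay positions and the set of binding positions intersect, without any step incurring a Condition C offense.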
7
The Architecture of the Derivation
I have suggested earlier that binding theory be stated in maximally simple form. The negative conditions apply existentially over all levels of representation, marking as ungrammatical any derivations that violate them (a star is assigned that cannot be removed). The positive conditions, on the other hand, had the derivation existentially produce a set of candidate sites for reconstruction; lowering could then take place to any of those sites, leading to a single tree. The Single Tree Condition applied at LF. This maximally simple theory, however, could only be upheld given that two sorts of late insertions apply: (a) the optional late insertion of relative clauses (Lebeaux 1988, 1991; Fox 2000), and (b) the optional late overlaying of DPs in A-chains (see discussion in chapter 5). While the late insertion of relative clauses and adjuncts in general seems reasonable because of their nonargument status in the grammar, the late overlaying of DPs in A-chains requires additional justification.
(1) a. Johni seems to himselfi t to like cheese.
b. Those pictures of Johni seem to himi t to be quite nice.
In the second half of this book, I will try to motivate the optional late insertion of elements in A-chains, but not A′-chains. This difference is not superficial, as it may seem initially, but goes to the heart of the grammar. The topic has major implications for the theory of phrase structure. There are two basic solutions to the problem. One is to draw the solution from a theory of affixation, and in particular the Stray Affix Filter of Lasnik ([1981] 1990). The other solution, a more fundamental and central one, would be to trace the contrast to a precise rendition of how the theories of Case and theta are integrated. That is, the basic solutions are those in (2) and (3):
(2) Alternative 1
State constraints on affixation so that
a. Lexical insertion (overlay) is basically free,
b. Case is an affixal element, and
c. Affixal elements may not move by themselves up the tree.
This derives the basic difference between A- and A′-movement of pro elements, since once Case is assigned, a true lexical element must be present to support Case, a type of affix. This means that A′-movement cannot move a simple Case up the tree (since it is unsupported), while A-movement can move the simple pro, which does not have the affixal Case with it.
(3) Alternative 2
Generate a theta structure and a Case structure as separate structures or tiers (Lebeaux 1988, 1991, 1997, 2000, 2001). These are only fused at a postconstruction moment, and indeed a postmovement point. The Case tier contains a Case frame, but not actual DP elements; it contains mainly closed-class elements. The theta tier contains open-class lexical formatives, the missing nouns and NPs. They are fused by the point that Case is applied. Thus two separate information structures, each a pure representation of a primitive—Case versus theta—are unified in the course of a derivation. This solution is clearly more fundamental.
How would solution (2) work? Suppose we first adopt the idea of Travis and Lamontagne (1987) that there is not only an NP and a DP, but a KP as well—a Case phrase. This may either be linked with the DP itself, so that the DP is really a DP/KP, or it may be that the K (Case) is a projection above the DP, giving rise to the higher KP. For simplicity, let us assume that the second of these is true. The structure of the noun phrase is therefore as in (4):
(4) [KP K [DP D NP]] (tree diagram not reproduced)
Now when does the outermost layer, the K and KP layer, appear in the noun phrase? While somewhat unconventional, we may assume that the DP is generated simply as a DP, and that when Case assignment applies, an
extra K and KP layer is added to the DP. That is, a composition operation of the following form will take place at the point of Case assignment.
(5) Case assignment: Input and output
[DP D NP] → [KP K [DP D NP]] (diagram not reproduced)
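Purely as an aid to concreteness, the composition in (5) can be pictured as an operation that wraps an existing DP in a fresh K/KP layer at the point of Case assignment. The encoding below is a toy sketch under my own labeling conventions, not notation from Travis and Lamontagne (1987):

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)
        def __repr__(self):
            if not self.children:
                return self.label
            return "[%s %s]" % (self.label, " ".join(map(repr, self.children)))

    def assign_case(dp, case):
        # Input: a bare DP; output: a KP whose K head carries the Case affix.
        assert dp.label == "DP"
        return Node("KP", [Node("K:" + case), dp])

    dp = Node("DP", [Node("D:the"), Node("NP", [Node("N:man")])])
    print(assign_case(dp, "acc"))   # [KP K:acc [DP D:the [NP N:man]]]

Nothing hangs on the particular encoding; the point is only that the K layer is absent until Case assignment applies, so pre-Case A-movement never carries a Case affix.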
Now consider the difference between A- and A′-movement. A-movement occurs before the element is assigned Case; A′-movement occurs after the element is assigned Case. Suppose now that the DP starts off as pro, as discussed earlier. Then the following difference holds for A- and A′-chains:
(6) a. A-chains move up a DP, because Case has not been assigned yet. This DP may be little pro.
b. A′-chains move up a KP, because wh-movement moves Cased elements.
The interesting part of this is (6b). In particular, let us suppose that (7) holds:
(7) Movement may not move bare affixes.
Now if (7) holds, A′-movement of pro is barred, which is the result we wanted. The reason is that the noun phrase element, at the point that A′-movement takes place, will have a true Case element that crucially is an affix, but this affix will only have a "pro" to lean on. This pro is not lexical at all, so the affix will essentially be a stray affix. That is, given the assumption in (6), A′-movement would have to move a stray affix up the tree. This is because the Case affix, which was assigned before A′-movement started, would not be associated with any lexical material, but just the little pro. The structure of the noun phrase would be as in (8):
(8) [KP K (Case affix) [DP pro]] (tree diagram not reproduced)
If we assume that bare affixes may not move up the tree, then the difference in insertion between A- and A′-movement is explained. Rather than there being a simple difference between the insertion of pro in A- and A′-contexts, the difference is that A′-movement would have to move a bare affix up the tree, if pro were the moved element in A′-movement. Since the movement of stray affixes up the tree is conceptually undesirable, we would prefer to bar it. Finally, we note that while the movement of stray affixes is undesirable, it does not follow directly from Lasnik's ([1981] 1990) Stray Affix Filter. We return to this in the following section. This concludes the first possibility of explanation for the difference between insertion in A- and A′-chains.
Let us turn now to a second, more radical proposal. While the relationship between Case and theta theory is generally viewed as unproblematic by linguists, the relation between them is in fact far subtler than is supposed by the conventional literature. In the theory of Lebeaux (1988, 1991, 1997, 2001), Case and theta theory are held not simply to mutually describe a single phrase marker; rather, each applies to a separate phrase marker, which is a pure instantiation of it. That is, Case and theta are arranged in distinct tiered representations, each of which is a pure representation of Case and theta relations: these relations in essence constitute distinct structural units (Lebeaux 1988, 1991, 1997, 2000, 2001; see also Sportiche 2005, for a theory of a split DP, and Williams 2003). At a particular point in the derivation, these representations are fused, by a rule of Project-α. (For further discussion see Chametzky 2000.) One of the two representations is a pure representation of thematic relations. In the theory of Lebeaux (1988), the derivation begins with a lexical entry, which is a tree structure (Lebeaux 1988; Hale and Keyser 1993). This is shown in (9a). Into this lexical entry, lexical insertion of open-class elements occurs, giving rise to a "heavy" lexical entry, which we may call the thematic representation. This is shown in (9b):
(9) a. Stage IA: Lexical entry (tree diagram not reproduced)
b. Stage IB: Thematic representation ("heavy" lexical entry, where lexical insertion of open-class elements has applied) (tree diagram not reproduced)
Thus the first representation, (9a), is the lexical representation; insertion takes place into the lexical representation, giving (9b). There is thus no distinction of type between the lexicon and the syntax: the two form a graded whole. The second stage, or Stage IB, which I will call the thematic representation, is a pure representation of thematic relations. This is the level that children use in telegraphic speech (see Lebeaux 1988, 2000, 2001). Thus telegraphic speech is a pure instance of thematic representation. For example, the telegraphic speech representations of "see ball" and "me want cookie" are the following thematic structure trees.
(10) a. (thematic subtree for "see ball"; diagram not reproduced)
b. (thematic subtree for "me want cookie"; diagram not reproduced)
As will become clearer later, telegraphic speech is a subgrammar of the full grammar; it has only part of the full set of primitives holding over it:
the theta primitives but not the Case primitives. Thus the developmental sequence is primitive linking: at a certain point the Case primitive links into the system. At the initial point only theta primitives hold; at a later point, theta + Case. This is shown in (11):
(11) theta → theta + Case (diagram not reproduced)
See Lebeaux (2000) for detailed discussion of how a subgrammar instantiating the theta subtree becomes a supergrammar characterized by both theta + Case. The thematic representation or theta subtree is thus pure open-class and thematic speech. Note that it is constructed by doing lexical insertion directly into the lexical entry ((9a) above). Thus it is, in essence, a level in between what is conventionally called the divide between the lexicon and the syntax. Now how may the phrasal syntax be entered? According to the theory of Lebeaux (1988 and succeeding work), a separate representation exists, called the Case frame, into which the thematic representation is projected. The Case frame has a few central properties. First, it is a pure representation of Case, including both the elements that give Case and those that receive it. For example, it contains the Case-assigning properties of the verb, and not its theta-assigning aspects, which are in the thematic tier. The Case frame also contains prepositions—the Case-assigning elements par excellence. It also contains determiners, a Case-receiving element. Second, the Case frame is particularly associated with closed-class elements. This last idea is connected with the proposal (a) that closed-class elements such as prepositions generally assign Case, (b) that even the verb is bifurcated into Case and thematic properties, with the Case-assigning properties on the Case tier being closed class, and centrally (c) that Case is received in the DP, not by the noun head, but rather by the determiner, a closed-class element. It is the latter that imposes its Case on the head noun (Lebeaux 1988, 2001). Thus both the assigners of Case (the closed-class Case features on the verb and prepositions) and the recipients of Case (the determiner) are closed class. The rest of the noun phrase, the NP, then receives Case by the determiner reassigning it to the NP.
Thus the phrase marker is decomposable into separate representations, a thematic and a Case representation. The Project-α operation fuses them. This is shown in (12):
(12) (theta subtree and Case frame, fused by Project-α; diagram not reproduced)
Note that the determiners the and a, the Case-receiving elements, exist on the Case frame. The open-class NPs exist on the thematic structure. And the verb is broken up into Case and theta properties, and exists on both (prepositions would exist on the Case frame). The Case representation is a skeleton into which the thematic representation is projected. Little pro exists in the
NP slots in the Case representation, as in the schematic structure. The derivation would look like (13):
(13) (derivation: movement applies on the Case frame, followed by Project-α fusion with the theta subtree; diagram not reproduced)
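As an illustration only, the division of labor can be rendered as a closed-class Case frame with pro slots, an open-class theta tier, and a fusion step that projects the latter into the former. The data structures and names below are stand-ins of my own, not this book's formalism:

    # Case frame: closed-class skeleton (determiners, Case-assigning V slot, pro slots)
    case_frame = [("D", "the"), ("NP", "_pro_"), ("V", "+acc"), ("D", "a"), ("NP", "_pro_")]
    # Theta tier: open-class formatives only
    theta_tier = {"V": "see", "NPs": ["man", "woman"]}

    def project_alpha(frame, theta):
        nps = list(theta["NPs"])
        words = []
        for cat, item in frame:
            if cat == "NP":
                words.append(nps.pop(0))   # overlay an open-class NP into a pro slot
            elif cat == "V":
                words.append(theta["V"])   # fuse Case-assigning V with theta-assigning V
            else:
                words.append(item)         # closed-class material stays in place
        return " ".join(words)

    print(project_alpha(case_frame, theta_tier))   # the man see a woman (inflection ignored)

Because movement is stated over the frame before fusion, only the pro slots need to move; full DPs are overlaid later, which is the sense in which A-movement precedes lexical insertion.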
This structure is precisely the sort necessary for the movement-of-pro operation discussed earlier. Questions immediately arise concerning which representation, if not both, movement might apply on, if movement occurred before the fusion operation. The answer is that A-movement takes place on the right-hand tree, the Case frame, which contains pro-like elements, not full DPs. Before turning to movement, however, I would like to touch on the argumentation for the separation into Case and theta representations from Lebeaux (1988, 1991, 1997, 2000, 2001). Each primitive has its own tree. These arguments do not have to do with binding theory constraints, but rather with phrase structure. One consideration here has to do with the appropriate representation of telegraphic speech. In the acquisition literature, there are two main approaches to telegraphic speech. One approach, which I reject, claims that the early phrase marker is something like the full tree, and contains null categories for the elided elements. In this approach, a single element in early speech, like ball, might correspond to a quintuply binary branching tree. I reject this proposal, which we may call the Full Tree approach.
(14) Ball (meaning: "I see the ball")
Full Tree approach—for example, quintuply binary branching (tree diagram not reproduced)
While this approach is interesting, it provides no structural explanation for why children speaking very early speech (for example, ball) would not produce the full phrase marker, since they have the structural capacity to do so (the quintuply binary branching tree). In particular, the initial representation is just as complex as the adult representation. In this formulation, simplicity considerations play no role in the positing of the correct tree (contra proposals of Lebeaux 2000). That is, while according to the full tree hypothesis the child has the competence and computational resources to produce the full tree, she or he still does not do so. In the other, contending approach, the child is argued to have a subphrase marker of the full phrase marker, generated by a subgrammar of the full grammar (Vainikka 1985, 1986, 1993; Lebeaux 1988, 1997, 2000, 2001; Vainikka and Young-Scholten 1994, 1996; Guilfoyle and Noonan 1992; Frank 1992, 1998, 2002; Powers 1996a, 1996b; Radford 1990; Grimshaw 1994). The precise form of this subphrase marker varies according to the author. For example, Vainikka (1989) argues that the tree is only built up partially, starting bottom up, giving rise to successive VP, IP, and CP trees (see also Powers 1996a for a novel proposal). The
proposal of Lebeaux (1988), also followed here, is that there is a subphrase marker, but that it is divided into the two frames suggested here—Case and theta frames. The child initially has competence over the theta frame, and only later do the Case frame, and Case, enter the grammar. Thus the child has the upper-left-hand tree in (12) above, generated by a subgrammar of the full grammar (Lebeaux 1988, 2000, 2001). Since closed-class elements are exclusively on the Case frame, and the child initially utters the theta subtree, this explains why closed-class determiners are not part of early speech. All these proposals about subtrees have the property of explaining the structural simplicity of early speech. Lebeaux (1988, 2000, 2001) argues that very early speech is different from later speech, and is structurally simpler. It is a subgrammar of later adult speech. I propose that the theta tree is structurally embedded in the full tree by the Project-α operation in the adult derivation, and that the same structural embedding exists in the acquisition sequence. Thus the names in the left-hand side of (12) (theta subtree) are embedded into the slots in the right-hand side (Case frame). In acquisition, the theta subtree exists as an independent tree, in telegraphic speech. This provides one reason for allowing such an embedding in the adult grammar, and argues for the "schematic" or pro-type representation discussed previously, which is similar to the Case representation of Lebeaux (1988). Recall again that the fusion can take place after movement on the Case frame. How do the full tree hypothesis and the subtree hypothesis differ? For example, the determiner is missing in see ball, and all other early two-word utterances (hit ball, kick shelf). If one assumes the Full Tree hypothesis, then the lack of the determiner is not principled: since the slot for the determiner exists, why should it not be used? If one assumes instead that the child's representation is a theta subtree as in (15), then the reason for the lack of a determiner is principled. There is no slot in the subtree for the determiner, so it could not be used.
(15) (theta subtree for see ball; diagram not reproduced)
The situation with prepositions is also relevant. Note that while in general prepositions appear late in development, predicative prepositions (in small clauses) come in early. This is shown in (16):
(16) a. give ball Mommy
Case-marking preposition comes in late (no Case-marking preposition in early child speech, hence no to here)
b. boot off
predicative preposition comes in early (predicative off comes in early)
Precisely this pattern is expected if the Case frame is separate from the thematic representation, and the thematic representation comes in first. Then to, which is part of the Case frame, will not be available to the child, and so will be dropped in give ball Mommy, while the predicative off is part of the thematic structure.
The second argument for the separation into separate structures, very roughly open- and closed-class tiers, and the embedding or Fusion operation that unites them can be found in a remarkable paper by Garrett (1975). (For further developments of Garrett's work in the linguistics literature, see Lapointe 1985, Lapointe and Dell 1989, and Golston 1991.) Garrett, in this and accompanying work, noted two major types of syntactic speech errors. One of these had the following form: the closed-class elements stay rigidly in place, and the open-class elements permute between them. Examples from Garrett are given below. In the first speech error, ed and s stay in place and trunk and pack permute around them. That is, trunk and pack appear in each other's assigned slots. In the second speech error, to and ed stay in place, constituting the frame, while sound and start permute. In all the examples, the determiners and inflection stay in place, and the open-class elements permute around them (for example, his sink is shipping).
(17) Speech error                        Target
She's already trunked two packs          She's already packed two trunks
It just sounded to start                 It just started to sound
I'm not in the read for mooding          I'm not in the mood for reading
But the clean's twoer                    But the two's cleaner
My frozers are shoulden                  My shoulders are frozen
That's just a back trucking out          That's just a truck backing out
His sink is shipping                     His ship is sinking
The cancel has been practiced            The practice has been canceled
She's got her sets sight                 She's got her sights set
A puncture-tiring device                 A tire-puncturing device
These errors, which form one class of common speech errors, are examples from Garrett's MIT Corpus of 3,400 errors, collected by Garrett and Shattuck-Hufnagel. An analysis of these errors underscores the importance of functional elements, which have not always been sufficiently appreciated. The closed-class elements form a frame into which the open-class elements are projected, and regulate their insertion/linearization. Thus Shattuck-Hufnagel (1974) and Garrett (1975) suggest that the functional elements and the open-class lexical elements exist in separate representations, and that the open-class elements are inserted into the closed-class frame (see Lebeaux 1988, 2001, for a formalization of this procedure). Crucially, there will be permutations of the open-class elements, but not the closed-class elements, when the insertion fails: a speech error. The closed-class elements will stay in place. This is shown in (18) and (19):
(18) a. Normal insertion pattern
CC tier: __-ed two __-s
OC tier: pack, trunk
→ packed two trunks
b. Permuted insertion pattern
CC tier: __-ed two __-s
OC tier: pack, trunk (inserted in reversed order)
→ trunked two packs
(19) a. Normal insertion pattern
CC tier: it just __-ed to __
OC tier: start, sound
→ it just started to sound
b. Permuted insertion pattern
CC tier: it just __-ed to __
OC tier: start, sound (inserted in reversed order)
→ it just sounded to start
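Assuming, as a toy model only, that the closed-class frame is a fixed template whose open slots receive open-class stems, a speech error of this type is simply insertion in the wrong order. The frame strings and the function below are my own illustrations:

    def insert(cc_frame, oc_stems):
        # CC frame: words in place, with "_" marking slots for open-class stems
        stems = list(oc_stems)
        return " ".join(w.replace("_", stems.pop(0)) if "_" in w else w
                        for w in cc_frame)

    frame = ["she's", "already", "_ed", "two", "_s"]   # CC tier stays rigidly in place
    print(insert(frame, ["pack", "trunk"]))   # she's already packed two trunks
    print(insert(frame, ["trunk", "pack"]))   # she's already trunked two packs (the error)

The closed-class frame is untouched in both outputs; only the open-class stems trade places, which is exactly the profile of the errors in (17).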
Garrett (1975, 175) explicitly suggests a grammatical formalism for encoding this type of speech error:
It seems clear that a satisfactory reconstruction of the error data requires the postulation of two distinct levels of syntactic analysis, and that these levels will differ
in that one is sensitive to the functional grammatical relations among words and phrases [this is the thematic or open-class level, DL] while the other is precisely responsive to the integration of grammatical formatives with the serial ordering of content words.
A full tree—with the permuted elements in incorrect order—is shown below, for that's just a back trucking out.
(20) (tree diagram not reproduced)
The speech errors therefore form a dramatic demonstration of the separation of the phrase marker into thematic (open-class) and Case frame (closed-class) trees. To review the discussion up to this point, three arguments have been presented for the separation into Case and theta subtrees: (1) the binding facts requiring the existence of pro elements in A-chains; (2) the acquisition data suggesting the representation of purely open-class elements in early telegraphic speech, and the Case frame, its inverse; and (3) the speech error data showing a division into open-class (OC) and closed-class (CC) representations. A fourth argument may be found in considering the operation of the passive. It is generally accepted that the passive is almost completely free in English: every noun phrase marked with accusative case may passivize. A few verbs, however, do not passivize.
(21)
The book costs $35. *$35 is cost by the book.
What accounts for the restriction in passive? One possibility in the literature traces the restriction back to theta theory: a thematic hierarchy exists, and the subject must be higher on the thematic hierarchy than the object for passivization to occur (Jackendoff 1972). Alternatively, the restriction may be traceable back to Case: the direct objects in cases that do not passivize are of abstract genitive or dative Case, and hence do not
passivize. (Such case is of course visible in languages like German.) Since only (abstract) accusative-marked objects passivize, the lack of a passive in these cases follows, and is drawn back to Case. I consider this explanation preferable. What are the ramifications of the Case proposal—the proposal that passive is in general free, restricted to abstract accusative Case, and is not sensitive to thematic roles? The consequences of this can be most clearly seen in the traditional LFG representation. In this representation both theta and grammatical function information are present in the lexical entry, and passive is stated as a pair of grammatical function changing rules.
(22) hit (SUBJ, OBJ)
(agent, patient)
OBJ → SUBJ
SUBJ → BY OBJ
The weak point in this proposal is the following. Both grammatical functional and thematic role information is present in the lexical form. This means that in principle, passive could refer to thematic information. The fact that it does not is unexplained. That is, the fact that passive is sensitive only to grammatical functional information, I am arguing, and not to thematic information, is unaccounted for, since thematic information is present in the representation, and could be referred to. The general point that I am making is the following:
(23) If a representation has information to refer to, it can in principle, and will sometimes in fact, refer to it. If the information is not referred to, this information should be separated from the representation, so that it is not available in principle.
The contention in (23), if true, has significant consequences. This might be considered a sort of economy principle. It guides as well the separation into the Case frame and theta subtree. The Case frame has Case, but no thematic information in it: the two have been separated. This is predicted since the passive transformation is stated over the Case frame, not the theta subtree, nor the fusion of the two. Thus the sensitivity of passive to only Case information is explained. Note that while the example above has been from LFG, a similar example may be constructed in Minimalism. The large VP has theta information in it; later, elements pick up Case. This means that at the point that passive applies (move), theta information as well as Case is present on the element. The fact that passive is insensitive to theta information
(as I am assuming) is therefore unexplained, unless we separate Case and theta into separate representations, as is suggested in this book. This concludes the fourth argument for the separation of the representation into the Case frame and theta representation.
A fifth argument may be given on the basis of idioms, from Lebeaux (1988). The basic idea is the following. While phrase markers are built up, they are not built up by monotonically additive, concatenative operations (Merge) as in Chomsky 1995. Rather, the building-up operations may be considerably more complex, reminiscent of the original generalized transformations of Chomsky ([1955] 1975; 1957), as well as Tree Adjoining Grammar–like operations (Kroch and Joshi 1985, 1988; Kroch 1989; Frank 1992, 1998, 2002; and much other work). Indeed, the Project-α operation proposed above was precisely of this type: an additive operation, but not a concatenative one. How might one discover the existence of such operations? I would like to suggest an argument for Project-α below, on the basis of fixed structure, following Lebeaux 1988, 2000, 2001. Consider idioms, which are pieces of syntactic structure. They come in differing shapes. For example, there are the following idiom types, among others:
(24) a. VP idioms: kick the bucket, hit the nail on the head, make the grade, hit the bottle
b. IP idioms: the cat has got X's tongue, the shit hit the fan, the ax fell
c. CP idioms: what's up, what's cooking, when push comes to shove
d. Predicative idioms: in the bag, off the wall, in the pink, dead to rights, dead in the water
e. Idioms headed by a modal or aspectual: have had it, has gone to the dogs, will wet your whistle
f. Idioms headed by negation: not hold a candle to, not bat an eyelid, not believe X's eyes, not see the forest for the trees
The following generalization seems to hold (Radford 2004; Lebeaux 2008):
(25) Constituent Condition
Idioms form a constituent, at some level of representation.
Consider now internal-to-VP idioms. In particular, consider the following two classes. I will call the first "Level I" and the second "Level II," using
terminology from Lexical Phonology for reasons that should soon become clear.
(26) a. Level I: take advantage (of), make mincemeat (of), raise hopes, raise a smile, take pains
b. Level II: kick the bucket, hit the hay, take a powder, take the air, hit the road, take a bath (in the stock market)
A determiner is present and part of the idiom in Level II idioms, but not Level I idioms. It is striking that the idioms labeled Level I passivize, while those labeled Level II do not.
(27) Level I idioms
a. take advantage of: advantage was taken (of)
b. make mincemeat of: mincemeat was made (of)
c. raise hopes: hopes were raised
d. raise a smile: ?a smile was raised
e. take pains: pains were taken (to ...)
(28) Level II idioms
a. kick the bucket: *the bucket was kicked
b. hit the hay: *the hay was hit
c. take a powder: *a powder was taken
d. take the air: *the air was taken
e. hit the road: *the road was being hit
f. take a bath: *a bath was being taken (in the stock market)
Now a remarkable generalization holds over these contrasting idioms. Namely, while Level I idioms allow their determiner to vary freely, Level II idioms never do—that is, in Level II idioms the determiner is completely fixed. This is shown below.
(29) Determiner free
Level I idioms: take some advantage of, take a great deal of advantage of, make much mincemeat of, raise a lot of hopes, raise many smiles, take some pains, take quite a few pains
(30) Determiner fixed
Level II idioms: *(some men) kicked some buckets, *hit some hay, *take several powders, *take several airs, *hit many roads, *hit some roads, *take some baths, *take a lot of a bath
Thus the following generalization holds (Lebeaux 1988, 1996, 2000, 2001, 2008), as a two-way implication, in idioms.
(31) Determiner Generalization
A VP idiom containing just a V and DP object passivizes ↔ the determiner of the object may freely vary.
(It is necessary for these idioms to contain just V and DP—others have different properties.1)
The pairs were given above. A minimal pair would be, for example, kick the bucket vs. take advantage of.
(32) Idiom fixed: kick the bucket
Determiner not free: *some men kicked some buckets
Passivization not possible: *The bucket was kicked
(33) Idiom free: take advantage of
Determiner free: take some advantage of/take a lot of advantage of
Passivization possible: Advantage was taken of John
Now the interesting thing is that nothing like the Determiner Generalization is predicted by Government-Binding Theory, or Minimalism, or indeed any extant grammatical theory that I know of. It is a profoundly unusual constraint: why should the possibility of passivization depend on the freedom of the determiner? In fact, something like it can be captured by the earlier division into the thematic representation and the Case tier (Lebeaux 1988, 1991, 1997, 2001, 2008). Let us call idioms that are specified on the thematic representation, prior to the Fusion or Project-α operation, Level I idioms. Let us call idioms that are specified at the point after the Project-α operation, Level II idioms. Then there are two idiom types, corresponding to levels of fusion. Note that level, here, is used in the same sense as Kiparsky's (1982a, 1982b) notion of level-ordered phonology, rather than the usual syntactic notion of level from Minimalism/Government-Binding Theory.
(34) Level I type idioms (before Fusion or Project-α)
take advantage
make mincemeat
raise hopes
etc.
(35) Level II type idioms (after Fusion or Project-α)
kick the bucket
hit the hay
take the air
etc.
These constitute structurally different types of idioms, specified at different levels of the derivation. Level I idioms are specified as pieces of the thematic tier, while Level II idioms are specified at the level of the post-Fusion pattern. This is shown in (36).
(36) (Level I idioms specified on the theta subtree; Level II idioms specified on the post-Fusion tree; diagram not reproduced)
For a normal, nonidiomatic structure, this looks as follows, repeating the earlier example.
(37)
Post-Fusion point: (VP (DP the man) (V′ see (DP a woman))), with the appropriate Case and theta on the nodes (full diagram not reproduced).
Level I idioms are then formed in the upper-left quadrant, the thematic representation, as shown in (36) and (37). That is their deepest level of representation. Level II idioms, on the other hand, have as their deepest level the post-Project-α or Fusion representation. This is shown beneath in (36) and (37). It is central to note that in this representation both the determiner elements and the head are specified together. Therefore there is no free Case frame in Level II idioms, in contrast to Level I idioms. Level II idioms have as their deepest level of representation the post-Project-α representation. Level I idioms have the thematic tier as their deepest level of representation. Consider now how the Determiner Generalization (31) can be derived. An idiom allows passivization iff the determiner of its object is free (in a V-DP structure). Now suppose we assume that passive—that is, A-movement or NP-movement—is stated over the Case frame. This is precisely what was suggested earlier, when it was argued that pro's were moved up the tree in the schematic structure, which I identified with the Case frame. The structure with pro's would correspond to the Case frame. The free Case frame, however, only exists with Level I idioms. With Level II idioms, the Case frame has already been fused with the associated theta subtree. That is, the Case frame exists independently in Level I idioms, while in Level II idioms, it is already prefused with the theta subtree. Since this is so, in Level II idioms no independent operation, such as passive, is possible on the independent Level I Case frame. Therefore, Level II idioms, which have their determiners specified as part of the idiom, cannot passivize. What about Level I idioms? In this case, unlike Level II idioms, the Case and thematic representations exist separately. The thematic representation is just the idiom itself (for example, take advantage), composed of open-class elements, while the Case frame is any ordinary Case frame. Recall that the determiner in such idioms can vary freely, so that there is no need to specify the determiner as part of the idiom. With respect to passivization, the following derivation then exists for Level I idioms, where passive applies to the independent Case frame, and Project-α then occurs.
(38) (derivation: passive applies on the independent Case frame, then Project-α; diagram not reproduced)
For a passive idiom, the derivation would appear as follows.
(39) Some advantage was taken (of Bill) (derivation diagram not reproduced)
Such a derivation requires the existence of an independent Case frame. Therefore, in addition to offering arguments concerning binding theory, this book presents a hypothesis concerning A-movement:
(40) A-movement applies on the Case frame.
This concludes the defense of idioms and the Determiner Generalization. To summarize the above, the last section has introduced the theta subtree and the Case frame as separate representations which are fused in the course of the derivation. This fusion takes place after A-movement on the Case frame. I have given five arguments for this. The Case frame was introduced as the schematic structure. The first argument was that late
lexical insertion occurred to prevent a Condition C violation in examples like (41):
(41) John seems to himself to like cheese.
This overlay occurred over pro elements which moved in the Case frame. The second argument was the existence of pure open-class structure (the thematic structure) in examples in very early acquisition, such as (42a) and (42b):
(42) a. see ball (I see the ball)
b. give ball Mary (I gave the ball to Mary)
The third argument was from speech errors, and the existence of the closed-class frame into which open-class elements were projected, and permuted in speech errors.
(43) Permuted insertion pattern
CC tier: it just __-ed to __
OC tier: start, sound (inserted in reversed order)
→ it just sounded to start
The fourth argument was from passive. The insensitivity of passive to thematic roles, and its sensitivity to Case features, could be explained in principle by the lack of thematic roles in the representation over which passive took place. This in turn was explained by having Case information separated from theta information. These are the Case frame and theta subtree. As before, A-movement takes place on the Case frame, prior to fusion. The fifth argument was from idioms. Two types of idioms were found: Level I and Level II idioms. Level I idioms are specified in the theta subtree; Level II idioms are specified at the post-Fusion point. Because a free Case frame exists for Level I idioms, they can be passivized: take advantage of. Since the Case frame is prefused in Level II idioms, they cannot passivize: kick the bucket. This concludes the argument for separating theta and Case, with their respective representations, which are later fused. With respect to binding theory, movement of little pro's takes place on the Case frame; these are later overlaid. Two of the arguments above should be considered in tandem. The argument from idioms and the argument from telegraphic speech above fit together as follows. The subtree relevant for very early telegraphic speech, the thematic representation, contains only thematic elements (those that
either give or receive a theta role). Later, other primitives are added on, such as X′-theory, adjuncts, and so on, making a primitive-linking system, in which one subgrammar gives rise to a larger grammar, where each primitive is successively added in. A fuller description of the linking system is shown below. (For further extensive discussion, see Lebeaux 2000.)
(44) Subgrammar approach (diagram not reproduced)
This acquisition sequence is repeated in the adult derivation. Thus for idioms, Level I idioms correspond to the thematic representation, while Level II idioms correspond to the thematic representation + Case representation. These exist only at the post-Fusion point, where both theta and Case information are fused. The sequence in acquisition, theta → theta + Case, exists in the adult's derivation of the surface form as well. The two form an isomorphism. Thus "see ball" exists as a subunit (the thematic subtree) in the adult derivation of "I see the ball." "See ball" is retained in the adult derivation, prior to fusion with the Case frame. There are three relevant substructures for idioms: the thematic subtree, the Case frame, and the post-Fusion tree. Level I idioms correspond to the thematic subtree; Level II idioms correspond to the post-Fusion tree. A-movement corresponds to movement on the Case frame, before fusion has taken place. Small pro placeholders are moved. Later, after the movement, names are overlaid into place. This is then too late to trigger Condition C, so sentences like (45) can occur, with John merged in late after movement.
(45) John seems to himself t to like cheese.
Let us flesh out exactly how A-movement (or NP-movement) would take place on the Case frame. There is nothing remarkable about the single-clause case. In this instance, we will simply say that Move-α, in the traditional way, applies between the object and subject positions (accusative and potentially nominative marked positions). After the movement occurs, Project-α occurs, fusing the thematic representation and the Case representation.
The only complication here is the statement of the Project-α or Fusion operation. While the verb fuses with its correlate, the thematic object must project into the head of the chain with which this object position is associated, rather than into the corresponding object position itself. This is shown in (46).
(46) The man was seen (by a woman). (derivation diagram not reproduced)
In this instance of Project-α:
(i) Ordinary NP movement takes place on the Case frame.
(ii) The subject of the thematic representation is put in a by-phrase (I will not try to specify the details here).
(iii) In the Project-α operation, the theta-assigning see is fused with the Case-assigning see.
What is different here is that an internal object in the thematic representation, instead of being projected into the object position in the Case frame, which is a bound trace, is projected into the head of the chain corresponding to that position. Things get more intricate as the possibility of several successive NP movements (A-movement) occurs. In this case, what is needed is for a concatenation or substitution of Case frames to exist (in something like the different senses of Chomsky [1955] 1975, 1957, 1995; Kroch and Joshi 1985, 1988; Kroch 1989; Frank 1992, 1998, 2002; and other work on Tree-Adjoining Grammar). We may follow through one of these latter cases briefly. The frame for simple clausal passive has already been given. Assume now the following set of Case frames. These are close to the full set of structures over which NP movement takes place, connected to the verb. I give the Case frames in (47):
(47) a. T1 (tree diagram not reproduced)
b. T2 (tree diagram not reproduced)
c. T3 (tree diagram not reproduced)
d. T4 (tree diagram not reproduced)
Consider now the case of ECM passivization from a lower clause. The relevant sentence is given in (48a), the full tree in (48b). The full tree in (48b) will contain three movements: from lower-clause SPEC-VP to lower-clause SPEC-IP, from SPEC-IP to upper-clause SPEC-VP, and from upper-clause SPEC-VP to upper-clause SPEC-IP.
(48) a. John was t believed t to have t seen Mary.
b. (full tree diagram not reproduced)
Now it can be seen by simple inspection that the Case frames in (47) are sufficient to handle this type of movement—that is, to describe it. All that is necessary is to embed the substructures T1, T2, T3, and T4 in each other in the appropriate order. Each operation is a substitution operation. For example, to describe the SPEC-VP to SPEC-IP movement in the lower clause, one needs to embed T1 into T4, by substituting the root VP node in T1 into the frontier VP node in T4, as can be seen by simple inspection. Successive movement may now take place over these extended Case frames. The theory I am using here is not fully TAG, since it uses the substitution transformation exclusively instead of the TAG adjoin operation. For a more strict use of the TAG formalization, see Lebeaux 1997.
(49) Extended Case frame formed by embedding T1 into T4
f_embed(T1, T4) = T5 (i.e., the embedding function on T1 and T4 produces T5) (tree diagram not reproduced)
(50) Extended Case frame formed by embedding T1 into T4, and the resultant into T3 (as can be seen, this may be done indefinitely)
f_embed(T1, T4) = T5; f_embed(T5, T3) = T6 (tree diagram not reproduced)
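A minimal sketch of the substitution operation may help fix ideas. The tree encoding, the node labels, and the starred substitution sites below are illustrative assumptions of mine, not the actual frames of (47):

    def f_embed(inner, outer, site):
        # Substitute `inner` for the frontier node of `outer` labeled `site`
        # (an open substitution slot, marked here with a trailing "*").
        label, children = outer
        if label == site and not children:
            return inner
        return (label, [f_embed(inner, c, site) for c in children])

    # Schematic frames: T4 has an open VP slot, T3 an open IP slot.
    T1 = ("VP", [("SPEC", []), ("V'", [])])
    T4 = ("IP", [("SPEC", []), ("I'", [("VP*", [])])])
    T3 = ("IP", [("SPEC", []), ("I'", [("VP", [("V'", [("IP*", [])])])])])

    T5 = f_embed(T1, T4, "VP*")   # embed T1 into T4, as in (49)
    T6 = f_embed(T5, T3, "IP*")   # embed the resultant into T3, as in (50)
    print(T6)

Only substitution at a frontier node is used here, not the TAG adjoin operation, in line with the remark above that the theory is not fully TAG.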
As can be seen, this may recur indefinitely, and the set of trees in (47), together with an embedding or substitution operation, is perfectly adequate to produce extended Case frames, over which passives may apply. One interesting question is how close this is to Chomsky's (1995) Merge. Obviously, there is a radical difference in the postulation of the Case frame as a separate level of representation. Given that, the building up of the Case tier might be assumed to follow first, producing extended Case tiers, with movement following that. This would correspond to
something like Chomsky's original concept of a deep or base structure. Or, one might assume that movement takes place almost immediately as the Case frame is built up—that is, movement would apply, then an embedding transformation, then movement, then an embedding transformation, and so on. This would be closer to Chomsky's minimalist approach (Chomsky 1995). Perhaps economy principles would help us choose between them. This concludes the current section, motivating the late overlaying in A-chains. I gave two proposals. The first, briefly discussed, involved free lexical insertion, and the impossibility of moving stray affixes in wh-movement. NP movement (A-movement) did not have stray Case affixes, and could move freely without lexical insertion. The second proposal, far more thoroughgoing, raised the possibility of a thematic subtree and an independent Case frame. The latter had pro elements, and movement took place on it. Five arguments were given for this proposal, which requires rethinking the architecture of the grammar.
8
Another Negative Condition
In chapter 4, I noted that Condition C, a negative condition, applied throughout the derivation. In (7) in chapter 4, repeated here as (1), I gave the following sketch of an architecture, with respect to a few conditions: (1)
(DS)
→ throughout the derivation, including LF:
(1) construction of candidate set or set of reconstruction positions
(2) Condition B
(3) Condition C
SS or Spell-Out
LF: Single Tree Condition applying on positive conditions
What determined whether something could be reconstructed to a particular position? Put simply, it could be reconstructed if it had been overlaid at that point. That is, reconstruction could apply optionally to any position on a chain over which insertion or overlay had applied. In the architecture of (1), negative conditions—for example, Conditions B and C—apply throughout the derivation. That is, negative conditions are continuous and homogeneous, and if they are anywhere violated, the derivation crashes. This argument about negative conditions in turn depended on reconstruction being free, and optional, for A-chains. The proposal that negative binding conditions applied throughout the derivation suggested a different architecture than that of Chomsky (1995). Chomsky would propose the following:
(2) Chomsky's interface conjecture
All constraints apply at the interfaces, in particular LF.
I take as the null or default case instead the following:
(3) Homogeneity conjecture
All negative conditions apply throughout the derivation.
Since I have been arguing that at least two negative conditions, Conditions B and C, apply throughout the derivation, the question arises whether there are any other negative conditions that can be shown to apply throughout the derivation, buttressing the general point. I would like to argue here that there is at least one other such condition, namely, the Stray Affix Filter of Lasnik ([1981] 1990). I would like to suggest that this condition, a negative condition, also holds at all points in the derivation. That is, the following holds:
(4) Stray Affix Filter (all points in the derivation, at the end of a cycle)
Affixes may not be stray (exist by themselves).
I return to the first alternative discussed earlier. Recall that this started with the observation that pro elements could move up the tree by NP-movement (A-movement), but not by wh-movement. However, this was because wh-movement occurred after the assignment of Case, and Case is an affix. Therefore, while movement of pro elements was in general free, and occurred during NP-movement, it was not possible during wh-movement, because wh-movement of pro would mean unsupported movement of the bare Case affix. This would be ruled out by a condition saying that affixes could not move by themselves. We then arrived at the conclusion in (7) of chapter 7, repeated here:
(5) Bare affixes may not move.
But why would (5) be expected to be true? Before considering this, which was formulated with respect to the abstract situation of wh- and NP-movement (A-movement), let us take a more concrete example. I am assuming that affix hopping involves the movement of the head to the affix rather than the reverse. For example, in have eaten, eat adjoins to -en, not the reverse:
(6) have -en eat (Move: eat adjoins to -en)
Nor does -en move up to have. Movement of eat to -en gives rise to have eaten, rather than, for example, *haven eat, which would have occurred if -en moved to have. This latter is a real possibility that must be excluded.
(7) Possible outputs of (6)
a. Possible output: have eaten
b. Impossible output: *haven eat
Similarly, throughout the auxiliary system, it is the open-class stem that moves to the auxiliary stem, rather than the affix moving to its selector, using the term selector to mean the relationship between have + -en, and be + -ing. This is shown again in (8) and (9):
(8) be -ing eat (Move: eat adjoins to -ing)
(9) Possible outputs of (8)
a. Possible output: be eating
b. Impossible output: *being eat
Note that the above derivations do not assume the "checking theory" of Chomsky (1993), but something closer to the traditional theory in which stems and inflectional endings are separately generated. Here, the former are moved into the latter.1 With respect to the A-movement versus wh-movement case discussed earlier, if affixes themselves cannot be moved, then one would not expect Case by itself to be able to be moved. This means that the lexical insertion of the entire wh-phrase must have taken place prior to wh-movement. But this then differentiates the A-movement from the A′-movement case, as needed. Why shouldn't affixes move by themselves, to their selector, as in (7b) and (9b)? Note that here -en has moved to have, and -ing has moved to be, their respective selectors, giving rise to ungrammaticality. As noted directly above, I conjecture that this is due to the Stray Affix Filter (Lasnik [1981] 1990) applying not only at LF, but everywhere:
(10) Stray Affix Filter (all points in the derivation, at end of cycle)
Affixes may not be stray (exist by themselves).
Now how can the Stray Affix Filter above be made to work, and, more important, how can it explain the NP- versus wh-movement facts, and the facts in (6)–(9), where it can be seen that the stem moves to the affix, not the affix to its selecting stem (for example, not -en to have)? This is the constraint in (5). The formulation of the Stray Affix Filter in (10) applying throughout the derivation would be impossible in a theory like that of Lectures on Government and Binding with a single DS, because affixes exist at DS,
and so would automatically mark as ungrammatical every structure at that point. However, in a theory like that of Chomsky (1995) with Merge, accepting "building up" but not "checking off," it is possible, because we may suppose that the minute an affix is generated, building up, it becomes the target of a movement by another element, a stem—which adjoins to it, and removes its stray affixal status. Thus, for example, in (11), the affixal status of -en is removed by the end of the cycle—by eat moving into it:
(11) John may have eaten a shark. (tree diagram not reproduced)
Here, eat moves into -en, removing its affixal status as soon as -en is generated in the tree (building upward), by the end of the cycle. Note that this correctly blocks the opposite type of derivation, in which an affix would adjoin upward to a lexical element (*haven eat above). This prediction is correct because, as noted above, when affixes and heads fuse, it is the head that moves upward into the affix position, not the reverse (assuming that in "affix hopping" the stem moves to the affix, not the reverse). Similarly, in (12), eat moves to -ing, removing its affixal status:

(12) John may be eating a shark.
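The cyclic logic of (11) and (12) can be made concrete with a small computational sketch. The following Python fragment is purely illustrative and is not part of the proposal itself: the names Item, stray_affix_filter, and adjoin_stem_to_affix are invented for the example, and the model of the derivation is drastically simplified. It shows the licit pattern, in which the stem adjoins to the affix within the cycle that introduces the affix, so that the filter is satisfied when the cycle closes.

# A minimal sketch (not the author's formalism) of the derivation of (11):
# -en is merged and, within the same cycle, the stem eat adjoins to it,
# so the Stray Affix Filter holds at the end of the cycle.

class Item:
    def __init__(self, form, is_affix=False):
        self.form = form
        self.is_affix = is_affix   # e.g., -en, -ing, Case
        self.supported = False     # True once a stem has adjoined

def stray_affix_filter(items):
    """Negative condition, checked at the end of every cycle:
    no affix may be stray (unsupported)."""
    return all((not i.is_affix) or i.supported for i in items)

def adjoin_stem_to_affix(stem, affix):
    """The stem moves upward into the affix (eat + -en -> eat-en),
    leaving a trace at the stem site."""
    affix.form = stem.form + affix.form
    affix.supported = True
    stem.form = "t"

# Cycle in which -en is merged: eat adjoins before the cycle closes.
have = Item("have")
en = Item("-en", is_affix=True)
eat = Item("eat")
adjoin_stem_to_affix(eat, en)

assert stray_affix_filter([have, en, eat])  # passes: "have eaten"
print(have.form, en.form)                   # -> have eat-en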
The derivation that is disallowed is that in (13), where the affix moves upward to the head:

(13) Affix-to-head derivation: Correctly ruled out
Correctly ruled-out output: *John may haven eat a shark.

In this derivation, the affix would have to be generated in Cycle 2 and then moved to the head in Cycle 3. However, this would be ruled out by the Stray Affix Filter applying everywhere, since at the end of Cycle 2 the affix would be stray. It is thus the derivation of *haven eat that is ruled out by the Stray Affix Filter. Similarly, the derivation of *may being eat is blocked. This is shown in (14), where, ungrammatically, the affix -ing moves up to the stem be.

(14) Affix-to-head derivation (upward): Correctly ruled out
Correctly ruled-out output: *John may being eat a shark.

Again, in this derivation, the affix would have to be generated in Cycle 2 and then moved to the stem in Cycle 3. This would be ruled out by the Stray Affix Filter applying everywhere, since at the end of Cycle 2 the affix would be stray.
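The contrast between the licit and the illicit derivations can be put in the same sketch form. The fragment below is again hypothetical and simplified: a derivation is represented simply as the sequence of items present at the end of each cycle, and derivation_ok is an invented name. Checking the Stray Affix Filter at every cycle boundary, rather than only at LF, is exactly what separates have eaten from *haven eat.

# Hypothetical sketch: the filter applies at the end of EVERY cycle,
# so an affix merged in Cycle 2 cannot wait until Cycle 3 for support.

def derivation_ok(cycles):
    """cycles: for each cycle, the (form, is_stray_affix) pairs present
    at that cycle's end. The Stray Affix Filter must hold at each one."""
    return all(not stray for cycle in cycles for _form, stray in cycle)

# Licit: eat adjoins to -en within Cycle 2, so -en is never stray.
licit = [
    [("eat", False)],                      # Cycle 1
    [("eat-en", False)],                   # Cycle 2: -en merged, eat adjoins
    [("have", False), ("eat-en", False)],  # Cycle 3: have merged
]

# Illicit: -en merged in Cycle 2 but moved to have only in Cycle 3.
illicit = [
    [("eat", False)],
    [("eat", False), ("-en", True)],       # end of Cycle 2: -en is stray
    [("have-en", False), ("eat", False)],  # too late: *haven eat
]

assert derivation_ok(licit)        # have eaten
assert not derivation_ok(illicit)  # *haven eat, excluded at Cycle 2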
The above argumentation assumes something like the traditional division between stems and affixes. A large body of psycholinguistic evidence shows that this distinction is syntactically active. The Stray Affix Filter, then, applying everywhere in the derivation, explains the generalization that stems move upward to affixes, not affixes downward to stems, nor affixes upward to their selectors. That is, in have + -en eat, eat moves to -en, not -en to eat, and not -en to have. It is the last of these three that the Stray Affix Filter, applying at the end of the cycle, excludes. In the case of a sentence with just a main verb (e.g., John eats bread), the verb moves to INFL, and then the whole complex reconstructs in the syntax. I therefore assume that reconstruction in the syntax is possible, and used.

The full architecture of the grammar, then, would have at least three negative conditions applying throughout the derivation: the two negative conditions of binding theory, Conditions B and C, and the Stray Affix Filter. The candidate set for reconstruction would be constructed throughout the derivation, and the Single Tree Condition, a coherence condition on positive binding conditions and quantification, would apply at LF.
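This division of labor can be summarized in one further schematic sketch. Everything in it is a placeholder: check stands in for the actual grammatical predicates, and a derivation is just a list of successive structures whose last member is LF. The only point is the control structure, with negative conditions checked at every step and positive conditions checked once, at LF.

# Schematic sketch of the architecture argued for here: negative
# conditions apply throughout the derivation; the bundled positive
# conditions apply solely at LF. All names are placeholders.

NEGATIVE = ("Condition B", "Condition C", "Stray Affix Filter")
POSITIVE = ("Condition A", "quantifier interpretation",
            "quantifier binding", "idiom interpretation")

def check(condition, structure):
    # Placeholder for evaluating a condition against a structure;
    # here every check trivially succeeds.
    return True

def derive(steps):
    """steps: successive structures of a derivation; steps[-1] is LF."""
    for structure in steps:                 # throughout the derivation
        for cond in NEGATIVE:
            if not check(cond, structure):
                return f"*ruled out by {cond}"
    for cond in POSITIVE:                   # solely at LF
        if not check(cond, steps[-1]):
            return f"*ruled out at LF by {cond}"
    return "grammatical"

print(derive(["theta/Case fusion", "A-movement applied", "LF"]))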
9
Conclusion
This book has posed the question of why positive and negative conditions apply differently in the grammar and has developed an architecture to account for the difference. The book's initial argument is that A-reconstruction applies freely down A-chains. Moreover, the Single Tree Condition holds at LF, encompassing two subconditions: (a) that a single level, LF, feeds interpretation, where a bundled set of positive conditions apply (Condition A, quantifier interpretation, quantifier binding, idiom interpretation), and (b) that the tree is not tangled, either directly or by auxiliary definitions of c-command that allow the tangling of the tree with respect to that predicate.

Given that A-reconstruction (taking a copy) applies freely at LF, sentences such as (1) are very problematic for Chomsky's conjecture that all binding conditions apply at the interface, since the subject could be freely lowered at LF to escape a binding theory violation; yet the sentence is ungrammatical, suggesting that binding theory applied before reconstruction had a chance to apply:

(1) *Hei seems to John'si mother t to be quite wonderful.

The negative binding conditions therefore apply existentially throughout the derivation. The homogeneous and existential application of the negative binding conditions was counterexemplified by the "hole in Condition C" for A-chains, which appears in constructions like (2); these constructions, while simple and obvious, unexpectedly do not trigger Condition C violations (at the premovement site):

(2) a. Johni seems to himselfi t to like cheese.
b. John'si mother seems to himi t to be the best in the world.

The hole in Condition C for A-chains was then repaired by allowing late insertion or overlay of lexical material in A-chains, but not A′-chains.
This argued for the existence of a schematic structure, in which little pro initially filled NP slots. This schematic structure was later identified with the Case frame. These developments suggested a change in the architecture of the grammar, to one in which there was a separate Case frame and theta subtree. The theta subtree was projected into the Case frame. Five arguments were given for the division into the Case frame and theta subtree, and for their fusion. The arguments drew on diverse evidence from binding, telegraphic speech, speech errors, passive, and idioms. A-movement takes place on the Case frame, prior to the fusion operation.

The book has therefore provided an explanation for the difference in how positive and negative conditions apply. The resulting solution has broad implications for research in syntax and the architecture of the grammar.
Notes
Preface

1. I am indebted to Juan Uriagereka for discussion of (14).

Chapter 1

1. For a recent opinion of Chomsky, see Chomsky 2005.
2. By "derivative definitions of c-command," I mean definitions that allow an element A that is not actually c-commanding an element B to derivatively c-command it, through the fact that B binds a trace that is in the c-command domain of A. I discuss this in detail later.

Chapter 2

1. I do not include all the indexings here, for readability. What is bound to what should be clear from the context and discussion. This practice is followed throughout the book. In all cases, traces are shown.
2. May's (1985) system is a simple alternative, which makes no difference for the point argued for here.
3. By saying that bound φ-features are left at the trace sites, I mean that person and number features are left at the trace site, and may be bound to. An example of binding to the trace site without reconstruction is the following:
(i) John seemed to himself t to like himself.
4. I am intentionally remaining neutral on the question of a literal reconstruction analysis versus a copy-and-erasure analysis. Nothing in the book hinges on which of these formulations is used.

Chapter 3

1. There may be doubling, as in the sorts of examples discussed in Butler and Mathieu (2005). The key point is that the quantificational force of the A-moved
quantifier or wh-element never appears in more than one place, as shown by the previous discussion.
2. I am examining a theory that uses two relevant levels of representation, and hence am using the term s-structure from Government-Binding Theory.
3. I am indebted to Kyle Johnson and Bob Freidin for discussion of these sentences.

Chapter 4

1. Before proceeding, I will mention one way that the above argumentation against Condition C applying throughout the derivation can be sidestepped, but at the cost of making unsubstantiated claims. Suppose one first adopted the "copy-and-erasure" proposal of Chomsky (1995), leading to the sort of A-chain shown in (ia). At that point (ib), Condition C would be stated and would rule out the structure. At the next point (ic), the excess NPs would be erased. And at the final point (id), any positive binding conditions, for example Condition A, would apply.
(i) a. He seems to John's mother he to be expected he to win. (he copied, coreferent with John)
b. Condition C applies, ruling out the structure
c. Erasure applies, erasing any combination of he's except one
d. Condition A of binding theory applies, if necessary
e. Output structure: *He seems to John's mother e to be expected e to win.
This procedure seemingly rules out the output structure (ie) without having Condition C apply throughout the derivation. However, the trick was the following: to have Condition C apply before erasure. At that point, because of the copying operation, all of the information of the derivation was still present. Therefore, having Condition C apply at that point was the same as having it apply throughout the derivation: indeed, this was an attempt to encode information from throughout the derivation at one level. Obviously, this sort of trick makes the claim attendant on it vacuous.
More important, even if this were allowed, the mechanism does not do what it is supposed to. While interface conditions must refer to a single level (the interface), the mechanism here crucially introduces two levels: the level prior to erasure, where one part of binding theory applies, and the level after erasure, where the other part of binding theory applies. Therefore, the conditions are no longer stated at a single interface in any case. This means that the single-interface condition is not validated by this device: two interface levels are needed. It therefore appears that the technique of applying one part of binding theory, the negative conditions, then erasure, and then the other part of binding theory, cannot be upheld.

Chapter 5

1. I would like to thank Alan Munn very much for discussions leading up to the formulation of the chart.
2. Using for now the classic notion of a deep structure (DS).
Chapter 7

1. It is important that it be a bare V-NP structure. Other structures, such as V-NP-PP, with a specified prepositional phrase, have different properties.

Chapter 8

1. In some cases the placement of the output seems to be where the head originates and not the affix. Thus in (i), from (ii), the output is in the V slot, an argument for affix lowering (affix hopping).
(i) John likes Bill.
(ii) John TNS like Bill.
I assume that in such cases the V raises to TNS, and then the whole complex reconstructs to V, in the derivation.
(iii) John TNS like Bill.
John like+TNS t Bill.
John e like+TNS Bill.
This is in accord with the generalization that lowering does not exist, except via reconstruction.
References
Aoun, J. 1982. "On the Logical Nature of the Binding Principles: Quantifier Lowering, Double Raising of 'There', and the Notion Empty Element." Proceedings of NELS 12: 16–35.
Bach, E. 1977. "The Position of the Embedding Transformation in the Grammar Revisited." In A. Zampolli, ed., Linguistic Structures Processing. New York: North-Holland.
Barss, A. 1986. Chain-Binding. Doctoral dissertation, MIT.
Belletti, A., and L. Rizzi. 1988. "Psych-Verbs and Theta-Theory." Natural Language and Linguistic Theory 6: 291–325.
Bresnan, J. 1982. The Mental Representation of Grammatical Relations. Cambridge, MA: MIT Press.
Brody, M. 1995. Lexico-Logical Form. Cambridge, MA: MIT Press.
Browning, M. 1987. Null Operator Constructions. Doctoral dissertation, MIT.
Burzio, L. 1986. Italian Syntax. Dordrecht: Kluwer/Reidel.
Burzio, L. 2000. "Anatomy of a Generalization." In E. Reuland, ed., Arguments and Case: Explaining Burzio's Generalization, 195–240. Amsterdam: John Benjamins.
Butler, A., and E. Mathieu. 2005. "Split-DPs, Generalized EPP, and Visibility." In M. McGinnis and N. Richards, eds., Perspectives on Phases. MIT Working Papers in Linguistics 49. Cambridge, MA: MITWPL, Department of Linguistics and Philosophy, MIT.
Chametzky, R. 1996. The Phrase Marker and the Theory of the Extended Base. Albany: SUNY Press.
Chametzky, R. 2000. The Phrase Marker: From GB to Minimalism. Oxford: Blackwell.
Chametzky, R. 2003. "Phrase Structure." In R. Hendrick, ed., Minimalist Syntax, 192–225. Malden, MA: Blackwell.
Chomsky, N. [1955] 1975. The Logical Structure of Linguistic Theory. Chicago: University of Chicago Press.
Chomsky, N. 1957. Syntactic Structures. The Hague: Mouton.
Chomsky, N. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Chomsky, N. 1977a. Essays on Form and Interpretation. New York: North-Holland.
Chomsky, N. 1977b. "On Wh-Movement." In P. Culicover, T. Wasow, and A. Akmajian, eds., Formal Syntax. New York: Academic Press.
Chomsky, N. 1993. "A Minimalist Program for Linguistic Theory." In K. Hale and S. J. Keyser, eds., The View from Building 20: Essays in Linguistics in Honor of Sylvain Bromberger, 1–52. Cambridge, MA: MIT Press.
Chomsky, N. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Chomsky, N. 2001a. "Derivation by Phase." In M. Kenstowicz, ed., Ken Hale: A Life in Language. Cambridge, MA: MIT Press.
Chomsky, N. 2001b. "Beyond Explanatory Adequacy." MIT Occasional Papers in Linguistics 20. Cambridge, MA: MITWPL, Department of Linguistics and Philosophy, MIT.
Chomsky, N. 2005. "Three Factors in Language Design." Linguistic Inquiry 36, no. 1: 1–22.
Cichetti, C. 1997. "Doubling Structure and Reconstruction." Unpublished manuscript.
Deprez, V. 1989. On the Typology of Syntactic Position and the Nature of Chains: Move-α to the Specifier of Functional Projections. Doctoral dissertation, MIT.
Diesing, M. 1992. Indefinites. Cambridge, MA: MIT Press.
Engdahl, E. 1986. Constituent Questions. Dordrecht: Kluwer/Reidel.
Epstein, S., E. Groat, R. Kawashima, and H. Kitahara. 1998. The Derivation of Syntactic Relations. Oxford: Oxford University Press.
Fong, S. 1991. Computational Properties of Principle-Based Grammatical Theories. Doctoral dissertation, Artificial Intelligence Laboratory, MIT.
Fox, D. 2000. Economy and Semantic Interpretation. Cambridge, MA: MIT Press.
Frank, R. 1992. Syntactic Locality and Tree Adjoining Grammar: Grammatical, Acquisition, and Processing Perspectives. Doctoral dissertation, University of Pennsylvania.
Frank, R. 1998. "Structural Complexity and the Time Course of Grammatical Development." Cognition 66: 249–301.
Frank, R. 2002. Phrase Structure Composition and Syntactic Dependencies. Cambridge, MA: MIT Press.
Freidin, R. 1978. "Cyclicity and the Theory of Grammar." Linguistic Inquiry 9: 519–549.
Freidin, R. 1986. "Fundamental Issues in the Theory of Binding." In B. Lust, ed., Studies in the Acquisition of Anaphora. Dordrecht: Kluwer/Reidel.
Fromkin, V. 1971. "On the Nonanomalous Nature of Anomalous Utterances." Language 47: 27–52.
Garrett, M. 1975. "The Analysis of Sentence Production." In G. H. Bower, ed., The Psychology of Learning and Motivation, vol. 9. New York: Academic Press.
Golston, C. 1991. Both Lexicons. Doctoral dissertation, UCLA.
Grimshaw, J. 1994. "Minimal Projections and Clause Structure." In Barbara Lust, Margarita Suñer, and John Whitman, eds., Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Volume 1: Heads, Projections, and Learnability, 75–83. Mahwah, NJ: Erlbaum.
Guilfoyle, E., and M. Noonan. 1992. "Functional Categories in Language Acquisition." Canadian Journal of Linguistics 37, no. 2: 241–272.
Hale, K., and J. Keyser. 1993. "On Argument Structure and the Lexical Expression of Syntactic Relations." In K. Hale and S. J. Keyser, eds., The View from Building 20, 53–109. Cambridge, MA: MIT Press.
Halle, M., and A. Marantz. 1993. "Distributed Morphology and the Pieces of Inflection." In K. Hale and S. J. Keyser, eds., The View from Building 20, 111–176. Cambridge, MA: MIT Press.
Heycock, C. 1995. "Asymmetries in Reconstruction." Linguistic Inquiry 26, no. 4: 547–570.
Higginbotham, J. 1983. "Logical Form, Binding, and Nominals." Linguistic Inquiry 14: 395–420.
Hoji, H. 2003. "Falsifiability and Repeatability in Generative Grammar: A Case Study of Anaphora and Scope Dependency in Japanese." Lingua 113: 377–446.
Hornstein, N. 1995. Logical Form: From GB to Minimalism. Oxford: Blackwell.
Hyams, N. 1992. "A Re-analysis of Null Subjects in Child Language." In J. Weissenborn, H. Goodluck, and T. Roeper, eds., Theoretical Issues in Language Acquisition, 249–267. Mahwah, NJ: Erlbaum.
Jackendoff, R. 1972. Semantic Interpretation in Generative Grammar. Cambridge, MA: MIT Press.
Jackendoff, R. 1992. "Madame Tussaud Meets the Binding Theory." Natural Language and Linguistic Theory 10: 1–31.
Kayne, R. 1984. Connectedness and Binary Branching. Dordrecht: Foris.
Kayne, R. 1994. The Antisymmetry of Syntax. Cambridge, MA: MIT Press.
Kennelly, S. 2004. Quantificational Dependencies. Amsterdam: Netherlands Graduate School of Linguistics (LOT).
Kiparsky, P. 1982a. "From Cyclic to Lexical Phonology." In H. van der Hulst and N. Smith, eds., The Structure of Phonological Representations. Dordrecht: Foris.
Kiparsky, P. 1982b. "Lexical Morphology and Phonology." In I. S. Yang, ed., Linguistics in the Morning Calm. Seoul: Hanshin.
Kroch, A. 1989. "Asymmetries in Long Distance Extraction in a Tree Adjoining Grammar." In Mark Baltin and A. Kroch, eds., Alternative Conceptions of Phrase Structure. Chicago: University of Chicago Press.
Kroch, A., and A. Joshi. 1985. The Linguistic Relevance of Tree Adjoining Grammar. Technical Report MS-CIS-85-16. Philadelphia: Department of Computer and Information Science, University of Pennsylvania.
Kroch, A., and A. Joshi. 1988. "Analyzing Extraposition in a Tree Adjoining Grammar." In G. Huck and A. Ojeda, eds., Syntax and Semantics 20: Discontinuous Constituency. New York: Academic Press.
Lapointe, S. 1985. "A Model of Syntactic Phrase Combination during Speech Production." In S. Berman, J. W. Choe, and J. McDonough, eds., Proceedings of NELS 15. Amherst: GLSA, University of Massachusetts.
Lapointe, S., and G. Dell. 1989. "A Synthesis of Some Recent Work in Sentence Production." In M. Tannenhaus and G. Carlson, eds., Linguistic Structure in Language Processing. Dordrecht: Kluwer/Reidel.
Lasnik, H. [1981] 1990. "Restricting the Theory of Transformations." In H. Lasnik, Essays on Restrictiveness and Learnability. Dordrecht: Kluwer/Reidel.
Lasnik, H. 1990. "On the Necessity of the Binding Conditions." In R. Freidin, ed., Principles and Parameters in Comparative Grammar, 7–28. Cambridge, MA: MIT Press.
Lasnik, H. 1999. Minimalist Analysis. Oxford: Blackwell.
Lasnik, H., and M. Saito. 1992. Move α. Cambridge, MA: MIT Press.
Lebeaux, D. 1984. "Anaphoric Binding and the Definition of PRO." Proceedings of NELS 14, 253–274. Amherst: University of Massachusetts.
Lebeaux, D. 1988. Language Acquisition and the Form of the Grammar. Doctoral dissertation, University of Massachusetts.
Lebeaux, D. 1991. "Relative Clauses, Licensing, and the Nature of the Derivation." In S. Rothstein and M. Speas, eds., Syntax and Semantics 25: Phrase Structure: Heads and Licensing, 205–239. New York: Academic Press.
Lebeaux, D. 1995. "Where Does the Binding Theory Apply?" University of Maryland Working Papers in Linguistics 3: 63–88.
Lebeaux, D. 1996. "Some Observations on Idioms." Unpublished manuscript.
Lebeaux, D. 1997. Determining the Kernel II: Prosodic Form, Syntactic Form, and Phonological Bootstrapping. NEC Technical Report 97-094. Appears in shortened form as "Prosodic Form, Syntactic Form, Phonological Bootstrapping, and Telegraphic Speech." In B. Hoehle and J. Weissenborn, eds., Approaches to Bootstrapping. Amsterdam: John Benjamins, 2001.
Lebeaux, D. 1998. Where Does the Binding Theory Apply? NEC Technical Report 98-048. Princeton, NJ: NEC Research Institute.
Lebeaux, D. 2000. A Subgrammar Approach to Language Acquisition. NEC Technical Report 2000-077. Princeton, NJ: NEC Research Institute.
Lebeaux, D. 2001. Language Acquisition and the Form of the Grammar. Amsterdam: John Benjamins.
Lebeaux, D. 2008. "Idioms." Unpublished manuscript.
Mahajan, A. 1990. The A/A′ Distinction and Movement Theory. Doctoral dissertation, MIT.
May, R. 1979. The Grammar of Quantification. Doctoral dissertation, MIT.
May, R. 1985. Logical Form. Cambridge, MA: MIT Press.
McCawley, J. 1968. "Lexical Insertion in a Transformational Grammar without Deep Structure." Papers from the 4th Regional Meeting of CLS, University of Chicago.
Milsark, G. 1974. Existential Sentences in English. Doctoral dissertation, MIT.
Montague, R. 1974. Formal Philosophy. Ed. Richard Thomason. New Haven, CT: Yale University Press.
Munn, A. 1993. Topics in the Syntax and Semantics of Coordinate Structures. Doctoral dissertation, University of Maryland.
Munn, A. 1994. "A Minimalist Account of Reconstruction Asymmetries." Proceedings of NELS 24: 397–410.
Partee, B. 1979. "Montague Grammar and the Well-Formedness Constraint." In Frank Heny and Helmut Schnelle, eds., Syntax and Semantics 10: Selections from the Third Groningen Round Table, 275–315. New York: Academic Press.
Powers, S. 1996a. The Growth of the Phrase Marker: Evidence from Subjects. Doctoral dissertation, University of Maryland.
Powers, S. 1996b. "MAPping Phrase Markers." In C. Koster and F. Wijnen, eds., Proceedings of the Groningen Assembly on Language Acquisition, 303–312.
Radford, A. 1990. Syntactic Theory and the Acquisition of English Syntax: The Nature of Early Child Grammars of English. Oxford: Blackwell.
Radford, A. 2004. Minimalist Syntax. Cambridge: Cambridge University Press.
Reinhart, T. 1983. Anaphora and Semantic Interpretation. London: Croom Helm.
Rizzi, L. 1986. "Null Objects in Italian and the Theory of pro." Linguistic Inquiry 17: 501–557.
Safir, K. 1984. "Multiple Variable Binding." Linguistic Inquiry 15: 603–638.
Safir, K. 1986. "Relative Clauses in a Theory of Binding and Levels." Linguistic Inquiry 17: 663–689.
Safir, K. 1998. "Reconstruction and Bound Anaphora: Copy Theory without Deletion at LF." Unpublished manuscript, Rutgers University.
Saito, M. 1989. "Scrambling as Semantically Vacuous A′-Movement." In Mark R. Baltin and Anthony S. Kroch, eds., Alternative Conceptions of Phrase Structure. Chicago: University of Chicago Press.
Saito, M. 1992. "Long Distance Scrambling in Japanese." Journal of East Asian Linguistics 1: 69–118.
Sauerland, U. 1998. The Meaning of Chains. Doctoral dissertation, MIT.
Shattuck-Hufnagel, S. 1974. Speech Errors: An Analysis. Doctoral dissertation, MIT.
Speas, M. 1990. Phrase Structure in Natural Language. Dordrecht: Kluwer/Reidel.
Sportiche, D. 1995. "Clitic Constructions." In L. Zaring and J. Rooryck, eds., Phrase Structure and the Lexicon. Dordrecht: Kluwer/Reidel.
Sportiche, D. 2005. "Division of Labor between Merge and Move: Strict Locality of Selection and Apparent Reconstruction Paradoxes." Unpublished manuscript, UCLA.
Travis, L., and G. Lamontagne. 1987. "The Syntax of Adjacency." Paper presented at the West Coast Conference on Formal Linguistics.
Uriagereka, J. 1988. On Government. Doctoral dissertation, University of Connecticut.
Vainikka, A. 1985. "The Acquisition of English Case." Paper presented at the 10th Boston University Conference on Language Development.
Vainikka, A. 1986. "Case in Acquisition and Finnish." Unpublished manuscript.
Vainikka, A. 1989. Deriving Syntactic Representations in Finnish. Doctoral dissertation, University of Massachusetts.
Vainikka, A. 1993/1994. "Case in the Development of English Syntax." Language Acquisition 3: 257–325.
Vainikka, A., and M. Young-Scholten. 1994. "Direct Access to X′-Theory: Evidence from Turkish and Korean Adults Learning German." In B. Schwartz and T. Hoekstra, eds., Language Acquisition Studies in Generative Grammar, 265–316. Amsterdam: John Benjamins.
Vainikka, A., and M. Young-Scholten. 1996. "Gradual Development of L2 Phrase Structure." Second Language Research 12, no. 1: 7–39.
Van Riemsdijk, H., and E. Williams. 1981. "NP Structure." Linguistic Review 1: 171–207.
Williams, E. 2003. Representation Theory. Cambridge, MA: MIT Press.
Index
Affix, conditions on movement, 52–54, 79–84
Affix hopping, 80
A-movement, 29–41
A′-movement, 29–41
Anti-reconstruction effects, 43–49
Aoun, J., 5, 91
Architecture of the derivation, 51–78
Bach, E., 91
Barss, A., 15, 91
Belletti, A., xi, 1, 9, 12, 91
Binding theory, xi–xxiii, 1–50, 85–86
Bresnan, J., 91
Brody, M., 91
Browning, M., 10, 91
Bundling of conditions, 19–21
Burzio, L., xi, 1, 21, 91
Butler, A., 87n1, 91
Case as affix, 52–54
Case frame, xvi–xxiii, 34–41, 52–74
Chametzky, R., 3, 54, 91
Chomsky, N., xi, xii, 1, 2, 3, 5, 13, 15, 18, 27, 28, 29, 35, 38, 40, 58, 65, 74, 79, 81, 82, 87n1, 88n1, 91, 92
Chomsky's conjecture, xi, 1–3, 27, 79–80, 85–86
Cichetti, C., 15, 92
Condition A, xi, xiii, xvi, 1–3, 19–21, 41, 85–86
Condition B, xii, 1–3, 27–28, 79, 85–86
Condition C, xi–xv, 1–3, 8, 12, 23–28, 31–33, 41, 85–86
  as applying throughout the derivation, xi–xvi, 8, 12, 23–41
Constituent Condition, 65
Definite descriptions, and lowering, 25
Dell, G., 61, 94
Deprez, V., 92
Determiner Generalization, 66–70
Diesing, M., 25, 92
Dual domination, 15–19
Engdahl, E., 15, 17, 92
Epstein, S., 1, 21
Fong, S., 3, 27, 92
Fox, D., 3, 45, 51, 92
Frank, R., 59, 65, 74, 92
Freidin, R., 43, 88n1, 92
Fromkin, V., 92
Full tree approach, 58–61
Functional elements, 51–73, 80–84
Fusion, xvi–xxiii, 54–74
Garrett, M., 61, 62, 92
Golston, C., 61, 93
Grimshaw, J., 59, 93
Groat, E., 92
Guilfoyle, E., 59, 93
Hale, K., 54, 93
Halle, M., 93
Heycock, C., 93
Higginbotham, J., 10, 93
Hoji, H., 3, 93
Hole in Binding Condition C, 29–42
Homogeneity conjecture, xi, 27, 79–80
Hornstein, N., 7, 93
Hyams, N., xix, 93
Idioms
  and Determiner Generalization (see Determiner Generalization)
  and freedom of determiner, xxi, 66–67
  Level I idioms, xix–xxi, 65–70
  Level II idioms, xix–xxi, 65–70
  and passive, xix–xxi, 65–70
Interfaces, xi, 1–3, 27, 79–80, 85–86
Jackendoff, R., 3, 63, 93
Johnson, K., 88n3
Joshi, A., 65, 74–78, 93, 94
Kawashima, R., 92
Kayne, R., xi, 1, 21, 93
Kennelly, S., 93
Keyser, J., 54, 93
Kiparsky, P., 67, 93
Kitahara, H., 92
KP, 52
Kroch, A., 65, 74–78, 93, 94
Language acquisition
  and Case frame (see Case frame)
  and thematic representation (see Thematic representation)
  and thematic subtree (see Thematic subtree)
Lamontagne, G., 52, 96
Lapointe, S., 61, 94
Lasnik, H., 1, 24, 28, 80–84, 94
Lebeaux, D., xi, xvi, xvii, xviii, xix, 1, 5, 7, 11, 21, 30, 43, 44, 45, 46, 51, 52, 54, 55, 56, 58, 59, 60, 62, 65, 66, 67, 72, 94
Lexical elements, 54–73
Lexical entry, 54–55
Lexical insertion
  into pro, xiv–xvi, 33–41, 47–49
  into thematic representation, 54, 55
  staggered, xiv–xvi, 33–41, 47–49
Lexical overlay, xiv–xvi, 33–41, 47–49
Lexical phonology, 67
Mahajan, A., 94
Marantz, A., 93
Mathieu, E., 87n1, 91
May, R., 5, 7, 87n2, 94
McCawley, J., 34, 95
Milsark, G., 26, 95
Montague, R., 34, 95
Munn, A., 88n1, 95
Negative Conditions, xi, xii, 1–3, 27–28, 29–41, 79–86
Noonan, M., 59, 93
Partee, B., 27, 95
Passive, 63–65
Positive Conditions, 1–3, 27, 41
Powers, S., 59, 95
Pro, 34–41
PRO, and double binding structures, 10–12
Project-α, xvi–xxiii, 54–74
Quantifiers, scoping of, 5–11
  and reconstruction, 5–7
  and trapping effect (see Trapping effect)
Radford, A., xviii, 36, 59, 65, 95
Reconstruction, xii–xv, 1–14, 23–28, 30–37, 43–49, 85
  in A-chains, xi–xiv, 5–13, 15–21, 47–49
  in A′-chains, 43–46
  in relative clauses, 43–46
Reinhart, T., 95
Relative clauses and the conditions, 43–49
Rizzi, L., xi, 1, 9, 12, 91, 95
Safir, K., 10, 95
Saito, M., 3, 94, 95
Sauerland, U., 3, 95
Schematic structure, xiv–xvi, 33–41
Shattuck-Hufnagel, S., 62, 95
Single Tree Condition, xiv, 1–3, 5–20
Speas, M., 95
Speech error data, 61–63
Sportiche, D., 3, 54, 95, 96
Stray Affix Filter, 52–54, 79–84
Subgrammar, 54–56, 71–72
Subtree approach, 54–61
Telegraphic speech, xviii–xix, 58–61
Thematic representation, xvi–xxiii, 54–74. See also Theta subtree
Thematic subtree, xvi–xxiii, 54–74
Theta subtree, xvi–xxiii, 54–74
Trace-command, 16–19
Trace-sister, 19
Trapping effect, xiii, 5–7
Travis, L., 52, 96
Tree-adjoining grammar, 74–78
Uriagereka, J., 87n1, 96
Vainikka, A., 3, 38, 59, 96
Van Riemsdijk, H., 43, 96
Well-Formedness Constraint, 27
Williams, E., 43, 96
Young-Scholten, M., 59, 96
Linguistic Inquiry Monographs
Samuel Jay Keyser, general editor

1. Word Formation in Generative Grammar, Mark Aronoff
2. X̄ Syntax: A Study of Phrase Structure, Ray S. Jackendoff
3. Recent Transformational Studies in European Languages, Samuel J. Keyser, ed.
4. Studies in Abstract Phonology, Edmund Gussmann
5. An Encyclopedia of AUX: A Study in Cross-Linguistic Equivalence, Susan Steele
6. Some Concepts and Consequences of the Theory of Government and Binding, Noam Chomsky
7. The Syntax of Words, Elisabeth O. Selkirk
8. Syllable Structure and Stress in Spanish: A Nonlinear Analysis, James W. Harris
9. CV Phonology: A Generative Theory of the Syllable, George N. Clements and Samuel Jay Keyser
10. On the Nature of Grammatical Relations, Alec P. Marantz
11. A Grammar of Anaphora, Joseph Aoun
12. Logical Form: Its Structure and Derivation, Robert May
13. Barriers, Noam Chomsky
14. On the Definition of Word, Anna-Maria Di Sciullo and Edwin Williams
15. Japanese Tone Structure, Janet Pierrehumbert and Mary E. Beckman
16. Relativized Minimality, Luigi Rizzi
17. Types of Ā-Dependencies, Guglielmo Cinque
18. Argument Structure, Jane Grimshaw
19. Locality: A Theory and Some of Its Empirical Consequences, Maria Rita Manzini
20. Indefinites, Molly Diesing
21. Syntax of Scope, Joseph Aoun and Yen-hui Audrey Li
22. Morphology by Itself: Stems and Inflectional Classes, Mark Aronoff
23. Thematic Structure in Syntax, Edwin Williams
24. Indices and Identity, Robert Fiengo and Robert May
25. The Antisymmetry of Syntax, Richard S. Kayne
26. Unaccusativity: At the Syntax-Lexical Semantics Interface, Beth Levin and Malka Rappaport Hovav
27. Lexico-Logical Form: A Radically Minimalist Theory, Michael Brody
28. The Architecture of the Language Faculty, Ray Jackendoff
29. Local Economy, Chris Collins
30. Surface Structure and Interpretation, Mark Steedman
31. Elementary Operations and Optimal Derivations, Hisatsugu Kitahara
32. The Syntax of Nonfinite Complementation: An Economy Approach, Željko Bošković
33. Prosody, Focus, and Word Order, Maria Luisa Zubizarreta
34. The Dependencies of Objects, Esther Torrego
35. Economy and Semantic Interpretation, Danny Fox
36. What Counts: Focus and Quantification, Elena Herburger
37. Phrasal Movement and Its Kin, David Pesetsky
38. Dynamic Antisymmetry, Andrea Moro
39. Prolegomenon to a Theory of Argument Structure, Ken Hale and Samuel Jay Keyser
40. Essays on the Representational and Derivational Nature of Grammar: The Diversity of Wh-Constructions, Joseph Aoun and Yen-hui Audrey Li
41. Japanese Morphophonemics: Markedness and Word Structure, Junko Ito and Armin Mester
42. Restriction and Saturation, Sandra Chung and William A. Ladusaw
43. Linearization of Chains and Sideward Movement, Jairo Nunes
44. The Syntax of (In)dependence, Ken Safir
45. Interface Strategies: Optimal and Costly Computations, Tanya Reinhart
46. Asymmetry in Morphology, Anna Maria Di Sciullo
47. Relators and Linkers: The Syntax of Predication, Predicate Inversion, and Copulas, Marcel den Dikken
48. On the Syntactic Composition of Manner and Motion, Maria Luisa Zubizarreta and Eunjeong Oh
49. Introducing Arguments, Liina Pylkkänen
50. Where Does Binding Theory Apply?, David Lebeaux