A Course in Generalized Phrase Structure Grammar
Studies in Computational Linguistics
Series Editor: Harold Somers
Editorial Board: Joseba Abaitua, Universidad de Deusto; Doug Arnold, University of Essex; Paul Bennett, UMIST; Bill Black, UMIST; Michael Hess, Universität Zürich; Rod Johnson, Lugano; Jun-ichi Tsujii, UMIST; Mary McGee Wood, University of Manchester; Peter Whitelock, Sharp Laboratories of Europe; Christoph Zähner, UMIST
Further titles in the series:
Linguistic and computational techniques in Machine Translation system design (Peter Whitelock & Kieran Kilby)
Analogical natural language processing (Daniel Jones)
A Course in Generalized Phrase Structure Grammar
Paul Bennett
UMIST
Published in association with the Centre for Computational Linguistics
© Paul Bennett 1995
This book is copyright under the Berne Convention. No reproduction without permission. All rights reserved.
First published in 1995 by UCL Press. Published in association with the Centre for Computational Linguistics. This edition published in the Taylor & Francis e-Library, 2004.
UCL Press Limited, University College London, Gower Street, London WC1E 6BT
The name of University College London (UCL) is a registered trade mark used by UCL Press with the consent of the owner.
British Library Cataloguing-in-Publication Data: a CIP catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data are available.
ISBN 0-203-49961-1 (Master e-book ISBN)
ISBN 0-203-22944-4 (OEB Format)
ISBN 1-85728-217-5 (Print Edition)
Contents
Preface
Abbreviations
1 Introduction
1.1 Historical background
1.2 On formalization
1.3 Further reading
2 Basics of syntactic structure
2.1 Phrase-structure rules
2.2 The lexicon
2.3 More on constituent structure
2.4 X-bar theory
2.5 Alternative versions of X-bar theory
2.6 On semantics
2.7 Further reading
3 The ID/LP format for grammars
3.1 Dominance and precedence
3.2 Advantages of ID/LP format
3.3 Universal ordering principles
3.4 Generative capacity
3.5 Summary
3.6 Further reading
4 Features and subcategorization
4.1 Syntactic features
4.2 The status of S
4.3 Subcategorization
4.4 Examples of subcategorization
4.5 Some other approaches to subcategorization
4.6 The structure of AP
4.7 Further reading
5 The Head Feature Convention
5.1 Introducing the Head Feature Convention
5.2 Extending the applicability of the HFC
5.3 The notion of 'head'
5.4 Summary
5.5 Further reading
6 Feature instantiation and related topics
6.1 Extension of categories
6.2 Feature Co-occurrence Restrictions
6.3 Feature Specification Defaults
6.4 Concluding remarks
6.5 Further reading
7 More on English structure
7.1 Prepositional phrases
7.2 The complementizer system
7.3 More on infinitival VPs
7.4 Noun phrases revisited
7.5 Auxiliaries
7.6 Adverbs and other adverbials
7.7 Further reading
8 The Foot Feature Principle
8.1 Relative and interrogative pro-forms
8.2 The distribution of the interrogative feature
8.3 Relative clauses
8.4 Genitives
8.5 Further reading
9 The Control Agreement Principle
9.1 Subject-verb agreement
9.2 The Control Agreement Principle
9.3 Expletive subjects
9.4 Infinitival complements again
9.5 Reflexives
9.6 Other approaches to agreement
9.7 Further reading
10 Metarules
10.1 Subject-auxiliary inversion
10.2 Passive
10.3 Extraposition
10.4 The power of metarules
10.5 Further reading
11 Unbounded dependencies
11.1 Slash categories in relative clauses
11.2 More on slash categories in relatives
11.3 Other relative types
11.4 Wh-questions
11.5 Tough-movement
11.6 Empty categories
11.7 Island constraints
11.8 Further reading
12 Co-ordination
12.1 Simple co-ordination
12.2 Co-ordination of unlikes
12.3 Co-ordination schemata
12.4 Co-ordination and unbounded dependency
12.5 Some remaining problems
12.6 Further reading
13 Current issues
13.1 Category Co-occurrence Restrictions
13.2 Passives revisited
13.3 Lexicalism
13.4 Head-driven Phrase Structure Grammar
14 Relevance to computational linguistics
14.1 Problems with parsing GPSG
14.2 Various solutions
15 Conclusion
References
Index
Preface

This is a textbook on syntax, an area in which many other monographs and textbooks exist. I believe, however, that the present book has something new to offer, and that it does not simply duplicate material already available. First, it is not a general introduction to syntax: it assumes that the reader has a grasp of constituency, category and other basics of syntactic analysis. Secondly, it is not a survey or comparison of a number of different frameworks. Rather, it presents one specific theoretical approach, though I have also tried to show that much of what is discussed here is of general relevance and interest. Thirdly, it is not about transformational grammar, but instead deals with a non-transformational theory of syntax.

Aside from what it is not, I hope this book also has some positive characteristics. It offers, within the framework of a textbook, an account of a relatively formalized approach to grammar. I have deliberately not attempted to present definitions and so on in a completely formalized manner, but have sought a way of expressing linguistic statements that brings out their purpose and content without too much forbidding terminology and notation. Those who wish for greater formalization can find it in the references cited. In addition, the rules given cover a fairly wide area of English grammar, and this should be useful in itself for grammar writers and others exploring English structure. I firmly believe that Generalized Phrase Structure Grammar (GPSG) currently offers the best framework for presenting a wide-coverage, formalized grammatical description, one that will be useful reading even for those who are interested in other approaches that still fall within the general area occupied by GPSG, i.e. those using features and unification.

Teachers may need to adopt their own approaches to the material presented here, depending on the level of their students and on their previous familiarity with formal syntax. I have found with undergraduates that the ideas of phrase structure need to be presented at some length before the use of features and feature principles is introduced gradually. For comparable reasons, the book does not include exercises: I feel it is better to leave it to individual teachers to provide exercises that suit their classes.

Acknowledgements

This book could not have existed without the superb work of Gerald Gazdar, Ewan Klein, Geoffrey Pullum and Ivan Sag in developing the theory of GPSG. I am also most grateful to Harold Somers, for editorial contributions beyond the call of duty, and to Doug Arnold and Roger Evans, who read the text and made a number of helpful suggestions. And I should not forget the students who sat through the lectures on which the book is based, and who helped me to see which parts needed expressing more clearly.
Abbreviations

ACC  accusative
Adj  adjective
Ad-S  sentential adverb
Ad-V  verb-modifying adverb
Adv, ADV  adverb
AdvP  adverbial phrase
ADVTYPE  adverb type
AGR  agreement
AP  adjectival phrase, Agreement Principle
Art  article
AUX  auxiliary
BSE  base
CAP  Control Agreement Principle
CFG  context-free grammar
CFL  context-free language
CP  complementizer phrase
Comp, COMP  complement, complementizer
CONJ  conjunction
ECPO  Exhaustive Constant Partial Ordering
e  empty
FCR  Feature Co-occurrence Restriction
FFP  Foot Feature Principle
FIN  finite
FSD  Feature Specification Default
GB  Government-Binding
GPSG  Generalized Phrase Structure Grammar
H  head
HFC  Head Feature Convention
HPSG  Head-driven Phrase Structure Grammar
ID  immediate dominance
iff  if and only if
INF  infinitive
INFL  inflection
INTRANS  intransitive
INV  inversion
JPSG  Japanese Phrase Structure Grammar
LFG  Lexical-Functional Grammar
LOC  locative
LP  linear precedence
MC  main clause
N  noun
NG  noun group
NOM  nominative
NP  noun phrase
OV  object precedes the verb
P  preposition
PAS  passive
PER  person
PFORM  prepositional form
PLU  plural
POS  part of speech
POST  post-nominal
PP  prepositional phrase
PRD  predicate
prep  preposition
PRO  empty subject
PRO  pronominal
PRP  present participle
PS  phrase structure
PSP  past participle
Q  quantifier
R  relative
S  sentence
Sinf  infinitive sentence
Sfin  finite sentence
Spec, SPEC  specifier
SpecA  specifier of adjective
SpecN  specifier of noun
SpecP  specifier of preposition
SUBCAT  subcategorization frame
SUBJ  subject
TRANS  transitive
V  verb
VG  verb group
VO  verb precedes the object
1 Introduction

This text is an introduction to modern grammatical theory, specifically the syntactic aspects of the theory known as Generalized Phrase Structure Grammar (GPSG). In addition to providing a course in GPSG, there are three subsidiary aims. The first is to look at a variety of structures in English, and to examine the properties of them that any linguistic description will need to consider. The second is to illustrate the kinds of problems (and answers) that any formalized account of linguistic structure will encounter, whether couched in GPSG or some other framework. The third is to introduce, rather sporadically, some aspects of other current theories and formalisms.

It is assumed that the reader has taken an introductory syntax course, and is familiar with ideas of syntactic categories, constituency, phrase markers and phrase-structure rules (though the start of Chapter 2 revises some of these notions). This is not, then, a book for those with no previous knowledge of linguistics or syntax: rather, it is aimed at people who have a basic understanding of syntactic analysis and wish to learn more about how formalized approaches work.

It may be helpful to set out the structure of subsequent chapters and how they might usefully be tackled. In addition to revising some basic notions, Chapter 2 also introduces X-bar theory, a constrained notion of phrase-structure rules that is adopted in some form by a number of different grammatical approaches. Chapter 3 describes ID/LP format, a means for separating the treatment of word order from that of constituent structure. Chapters 4–6 contain the basic ideas of features and feature percolation, ideas that have become generally accepted in syntactic theory. Anyone covering the first six chapters will have received a grounding in much of the common core of current views on grammar. This core is extended in subsequent chapters; Chapter 7 describes
further aspects of the structure of English, and later chapters both expand the coverage of English and introduce new theoretical concepts: foot features, agreement, metarules, unbounded dependencies, co-ordination. These may be read to learn about the GPSG treatment of the topics they cover, or just to gain an appreciation of the kinds of data and analyses that work on syntax addresses. Chapter 13 discusses some current issues, and Chapter 14 introduces the relevance of all these ideas to computational linguistics.

As has been said above, we shall concentrate here on syntax. Morphology will be discussed occasionally, to see to what extent syntactic principles can be extended to the study of word structure. A little more (but probably too little) will be said about semantics. Apart from the current paragraph, we shall say nothing at all about phonology. This is because GPSG, like most present-day theories of grammar, accepts the Principle of Phonology-Free Syntax, according to which syntactic phenomena do not depend on the phonological properties of the words in them (e.g. we do not find statements like "Apply this rule if one of the words begins with a voiced sound").

GPSG makes use of a variety of descriptive devices, and these will be introduced gradually, so that what is said in earlier sections will in some cases be revised or rendered quite invalid by what is said later. Quite apart from seeming to be the best practice from a pedagogic viewpoint, this also enables us to exploit work done within other theoretical frameworks and so show that more than might be thought is shared by (sometimes furiously) competing approaches. We shall try not to introduce too much material that needs subsequent revision, so some of the early sections will be rather restricted in their coverage.

In the remainder of this introductory section, we discuss the historical background of GPSG and the importance of formalization.

1.1 Historical background

GPSG is a variety of generative grammar, and so aims at providing a precise description of linguistic phenomena, of the link between sound and meaning, a description that does not rely on intuition or undefined notions. Linguistic research within the generative grammar tradition was begun by the work of Noam Chomsky in the 1950s. Chomsky applied his own and others' work within formal language theory to natural language, and considered where natural languages and their grammars fell within the hierarchy of classes of formal languages and formal grammars that he had established. He argued that context-free grammars did not suffice for the description of natural languages, and that more powerful types of rule were needed. These were transformational rules, and the theory Chomsky developed became known as transformational grammar. One particular aspect of such grammars is that two significant syntactic structures are assigned to each sentence, only one of which
reflects the actual surface string of words; these two structures are related by a set of transformational rules. A transformation may be said to map one tree structure into another, whereas context-free rules build up tree structures. The two levels are deep structure (generated by context-free rules) and surface structure (generated from deep structure via transformations).

Over three decades of research it became clear that transformational rules were far too powerful, and allowed for all kinds of operations that were never in fact needed in natural language grammars. The aim of constraining the transformational component therefore became one of the underlying themes of research. From one point of view, GPSG can be seen as the natural conclusion of this approach, viz. it offers a maximally constrained view of the transformational component by simply abolishing it. GPSG assigns just a single syntactic representation to each sentence; it is therefore sometimes labelled a monostratal theory. This rejection of transformations GPSG shares with a number of other approaches to syntax (we will refrain from just listing their names here). It also shares much else, including the notion of unification, which refers to the merging of information from different sources to produce an integrated description of some larger entity. The various unification-based theories form one of the most important strands in grammatical research, and GPSG has both drawn from and contributed to other approaches. Knowing about GPSG and some of its central proposals is therefore important even for those whose main interest is in other theoretical frameworks.

GPSG itself has undergone a number of changes since it was first developed around 1980. The major account of GPSG is Gazdar et al. (1985), and it is the theory presented there that will be dealt with here. But recent ideas and controversies will be discussed when they seem relevant or worthwhile (especially in Chapter 13, where we will also relate GPSG to other theories that have grown out of it). We shall not trace the history of GPSG (important earlier papers include Gazdar 1981, 1982); some pre-1985 papers can be hard to read because of differences in theory and/or notation, and we shall point this out where appropriate.

One further difference between GPSG and Chomsky's work should be made clear. Chomsky emphasizes the view that linguistics is part of psychology, that a grammar describes a speaker's competence (their knowledge of their language) and that a basic problem to be explained is that of language acquisition. How are children able to acquire knowledge of the vastly complex system of any language, given the poverty of the data they are exposed to (no information about ambiguity or what is ungrammatical, for instance)? His answer is to ascribe innate knowledge of language universals to the child. GPSG is also interested in universal aspects of language, but simply rejects (or remains agnostic towards) the psychological interpretation provided by Chomsky. GPSG, then, provides an account of linguistic structure, and makes no assumptions about human linguistic knowledge.
1.2 On formalization

As was suggested earlier, work in generative grammar implies the development of formalized accounts and precise definitions. In fact, GPSG takes this principle far more seriously than most other grammatical theories, and the GPSG literature is full not just of explicit rules but also of obscure-looking definitions and formidable notations. The principle generally adopted here will be to introduce formal notions in a fairly informal way, providing definitions in ordinary prose as much as possible: we shall mostly not give definitions in all their formal glory. But first we should justify the role of formalization. The following passage is taken from the first published book-length account of generative grammar:

  This study…forms part of an attempt to construct a formalized general theory of linguistic structure and to explore the foundations of such a theory. The search for rigorous formulation in linguistics has a much more serious motivation than mere concern for logical niceties or the desire to purify well-established methods of linguistic analysis. Precisely constructed models for linguistic structure can play an important role, both negative and positive, in the process of discovery itself. By pushing a precise but inadequate formulation to an unacceptable conclusion, we can often expose the exact source of this inadequacy and, consequently, gain a deeper understanding of the linguistic data. More positively, a formalized theory may automatically provide solutions for many problems other than those for which it was explicitly designed. Obscure and intuition-bound notions can neither lead to absurd conclusions nor provide new and correct ones, and hence they fail to be useful in two important respects. (Chomsky 1957: p. 5)

Formalization, then, both exposes problems and provides generality; in fact, without it, linguists can have no real understanding of what they are doing or of the claims their analysis is making. We might add that Chomsky has in many ways abandoned his earlier view (though he would deny any such shift in opinion). For instance, he has more recently written:

  The serious problem is to learn more, not to formalize what is known and make unmotivated moves into the unknown. (Chomsky 1990: p. 147)

Our aim here, however, has simply been to show that formalization is invaluable, and not mere decoration.1

1. The issue of the usefulness of formalization remains a controversial one: see Ludlow (1992), who argues that it is sufficient for a proposal to be 'rigorous enough' to be discussed and evaluated, without it necessarily being formalized to the last detail.

It might be useful to close this chapter by citing the following passage from one linguist who espouses Chomsky's approach:

  constructing grammars for particular languages is no longer seen as the crucial enterprise it once was: largely because the purely descriptive problems involved in such an enterprise have to a considerable extent been solved,
  and the more significant problem is how to provide explanatory principles accounting for why grammars have the form they do. (Smith 1989: p. 11)

It will become clear that GPSG is concerned with both constructing particular grammars and seeking explanatory principles, and that there are plenty of descriptive problems left!

1.3 Further reading

For the history of generative grammar, including some remarks on the origin of GPSG, see Newmeyer (1986); GPSG is discussed on pp. 209–15. Wasow (1985) and McCloskey (1988) are useful surveys of the syntactic scene, with discussions of the place of GPSG. Steurs (1991) is a paper presenting a helpful overview of GPSG, but it would probably be best to read this only after covering four or five chapters of the present text. Trask (1993) is an excellent dictionary of grammatical terms and concepts, including entries for many key GPSG terms.

Other introductory accounts of GPSG are Sells (1985: Ch. 3), Horrocks (1987: Ch. 3) and Borsley (1991). These have the disadvantage of often discussing GPSG ideas vis-à-vis transformational grammar, whereas the present text presupposes no knowledge of transformational approaches. Borsley discusses transformational grammar alongside GPSG, and also includes a lot of material on a slightly different phrase-structure-based theory, Head-driven Phrase Structure Grammar (HPSG); when reading Borsley, you are advised to concentrate on the GPSG parts of his text. Both Sells and Horrocks are concerned with presenting GPSG theory rather than with discussing the structure of English; in addition, Sells' book is often very laconic. However, all contain a great deal of information and should certainly be the first port of call when reading further, being considerably easier than primary sources. It is advisable to look at at least one of these three books as a supplement to the present text.

Three other textbooks on grammatical theory and English syntax that we shall refer to are Radford (1988), McCawley (1988b) and Baker (1989). None is oriented specifically to GPSG, but each contains a lot of very useful material. Radford is a textbook of transformational grammar, but most of the first five chapters (and some of the rest) is more generally applicable. Baker is a comprehensive study of various English structures. McCawley employs his own variety of transformational grammar; his book is the hardest of the three, and should be read for its insights into English, not to learn about McCawley's own theory. We shall also direct the reader to a shortish but standard grammar of English, Greenbaum & Quirk (1990).

For some observations on the importance of formalization, see Gazdar et al. (1985: pp. 1–6 and 18–19). Pages 5–6 deal with the issue of the relevance of psychology.
2 Basics of syntactic structure

This chapter introduces some basic principles of syntactic structure. We shall begin by considering phrase-structure rules, before going on to examine some properties of specific constructions and a particular way of constraining phrase-structure rules.

2.1 Phrase-structure rules

Many linguistic formalisms make use of context-free phrase-structure rules (which we shall refer to as 'PS-rules'). Such a rule has a single non-terminal symbol (i.e. the name of a syntactic category such as S, NP or VP) on its left-hand side, and a string of non-terminal symbols and/or terminal symbols (i.e. lexical items in the language) on its right-hand side. The following would be examples of PS-rules:

(1) a. S → NP VP
    b. NP → Art N
    c. PP → P NP
    d. AP → Adj

By making use of brackets to show optionality and Kleene star to show iteration, we could have (2):
(2) a. VP → V (NP)
    b. NP → Art Adj* N

To handle words, we could employ rules like (3):

(3) a. N → man | book | table
    b. V → see | made | laughed

In fact, we shall shortly suggest an alternative way of introducing words into structures. If we reject rules like (3), we can say more simply that the right-hand side of a rule is a string of non-terminals (plus expressions such as Adj*). PS-rules can be used to construct (or generate) a phrase-marker or tree for a sentence, such as (4):

(4) [tree diagram: [S [NP [Art The] [N man]] [VP [V made] [NP [Art a] [N table]]]]]

Each PS-rule is responsible for a part of a tree. For instance, rule (1a) says that a sentence (S) can consist of a noun phrase (NP) followed by a verb phrase (VP), and this is reflected in the topmost part of (4). Rule (1b) is reflected twice in (4), in the internal structure of both NPs.

It is worth learning some basic terminology about phrase-markers. A phrase-marker consists of nodes joined by lines. The nodes are labelled with either words such as man or category names such as NP or N. The notion of domination is clear in an informal sense: a node dominates the node(s) below it in a tree. So the node labelled V in (4) dominates that labelled made; the VP node dominates all the following nodes: V, made, Art, a, N, table. A node immediately dominates those nodes immediately below it: so VP in (4) immediately dominates V and NP but does not immediately dominate Art (etc.). A node is the mother of the nodes it immediately dominates, its daughters; nodes sharing the same mother are sisters. The string a table is dominated by the NP node, and so is shown to be a constituent of category NP. The topmost node in a tree is the root node. (For a formalized discussion of tree-related notions, see Partee et al. 1990: pp. 439–46.)

Grammatical relations do not need to be shown explicitly in a phrase-marker, as they can be defined in terms of more primitive notions. The subject is the NP
immediately dominated by S; the object is the NP node immediately dominated by VP.

2.2 The lexicon

PS-rules are a rather blunt instrument for capturing subcategorization facts, i.e. statements about the distribution of words that depend on more than classification into traditional syntactic categories. A familiar example is the distinction between transitive and intransitive verbs:

(5) a. John resembles his father.
    b. *John resembles.

(6) a. John died.
    b. *John died Bill.

A common solution is to divide a linguistic description into two parts, a grammar and a lexicon. The grammar makes general statements (as embodied in PS-rules, for instance), while the lexicon deals with more subtle and idiosyncratic information about individual words.1

1. This is a very sweeping characterization, and should not be taken as claiming that the lexicon cannot be the locus of generalizations; in fact, much current linguistic research is concerned with formulating general rules and statements that apply to the lexicon rather than the grammar: see 13.3.

We could then do away with rules like (3a), and instead make use of lexical entries that give information about the category, etc., of words, e.g. (7).

(7) a. man: N
    b. the: Art

Where subcategorization is significant, we could add information about the environment in which a word can occur:

(8) a. resemble: V, —NP
    b. die: V, —#

(8a) states that resemble occurs before an object NP, while (8b) states that die occurs before 'nothing' (i.e. the end of a sentence). We would now need to impose the extra restriction that a word can occur in (be inserted in) a phrase-marker if and only if its position in that phrase-marker matches the word's subcategorization frame. So resemble could occur only if there were a following NP, and (5b) could not be generated. We will not give further examples of subcategorization frames, because the actual way in which these are stated in GPSG differs somewhat from that introduced here. Equally, GPSG does not in fact make use of PS-rules (in spite of its name!). But
it is essential to be aware of these notions, and to know why GPSG considers them inadequate.

2.3 More on constituent structure

Before turning to these inadequacies, however, we shall continue to look at some properties of constituent structure in English. We claimed in 2.2 that die could occur only before the end of a sentence, but of course this is quite untrue:

(9) John died on Tuesday.

A time adverbial such as on Tuesday is not subcategorized by the verb; unlike objects, time adverbials can be added to practically any clause, irrespective of the verb therein. They are adjuncts or modifiers, rather than complements or arguments. It is not intended to review here the tests for distinguishing arguments from modifiers (see Matthews, 1981, Ch. 6; Somers, 1984); instead we shall look at some consequences for phrase structure.

An initial problem is how we can ensure that die is insertable into the tree for (9). This can be solved by assigning slightly more structure to sentences than was done in 2.1, and slightly revising the restriction on lexical insertion given in 2.2. We shall adopt the principle that all and only the arguments of a lexical item (with the exception of its subject) are its sisters. Modifiers must be at a higher level in the tree. We can use the term 'group' rather than 'phrase' for the constituent consisting of a word and its arguments (and verb group, say, will be abbreviated as VG; note that this is not a standard abbreviation). This would give us PS-rules such as (10).

(10) a. S → NP VP
     b. VP → VG AdvP
     c. VG → V (NP)

We shall regard all adverbials as instances of AdvP, even if they have the structure of PPs; this approach will be amended in due course, but is convenient for present purposes. The tree for (11a) might then be (11b).

(11) a. John wrote the letter on Tuesday.
     b. [tree diagram: [S [NP John] [VP [VG [V wrote] [NP [Art the] [N letter]]] [AdvP on Tuesday]]]]
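Because these are just context-free rules plus a purely local check on lexical insertion, they are easy to experiment with in code. The following Python sketch is our own illustration, not the book's formalism: the rule fragment loosely encodes (10) (with optional daughters expanded into separate expansions), and the lexicon pairs each verb with a subcategorization frame in the spirit of (8), so that insertion can be checked against nothing more than a word's sisters:

```python
# Toy PS-rules in the spirit of (10); optional daughters expanded out.
RULES = {
    "S":  [["NP", "VP"]],
    "VP": [["VG", "AdvP"], ["VG"]],
    "VG": [["V", "NP"], ["V"]],
    "NP": [["Art", "N"]],
}

# Subcategorization frames as in (8): the sisters a verb demands.
LEXICON = {
    "resemble": ("V", ["NP"]),  # (8a): resemble, V, -NP
    "die":      ("V", []),      # (8b): die, V, -#
}

def licensed(mother, daughters):
    """A local tree is licensed iff some PS-rule expands mother as daughters."""
    return daughters in RULES.get(mother, [])

def insertable(word, sisters):
    """Lexical insertion is purely local: the frame must match the sisters."""
    _category, frame = LEXICON[word]
    return frame == sisters

print(licensed("VP", ["VG", "AdvP"]))  # True, by rule (10b)
print(insertable("resemble", ["NP"]))  # True:  John resembles his father (5a)
print(insertable("resemble", []))      # False: *John resembles (5b)
print(insertable("die", []))           # True:  John died (6a)
print(insertable("die", ["NP"]))       # False: *John died Bill (6b)
```

Note that insertable inspects only the sisters, never the wider tree; this is exactly the locality point made in the next paragraph of the text.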
For noun phrases, we could write:

(12) a. NP → Art NG
     b. NG → N (PP)

These would allow for NPs such as the man and the resemblance to Jim. At the same time, we would have to say that the condition on lexical insertion applies only to the sisters of the word in question; it ignores larger aspects of the tree. This is in fact a desirable limitation, since it constrains the power of subcategorization requirements and makes them purely local, i.e. they take into account a node and its sisters, not an entire tree.

Suppose we now consider the possibility of a verbal group being followed by more than one adverbial, as in (13).

(13) a. John wrote the letter in Manchester on Tuesday.
     b. John wrote the letter quickly on Tuesday.
     c. John knocked on the door intentionally twice.
     d. John knocked on the door twice intentionally.
     e. John knocked on the door intentionally at midnight twice.

It should be clear that there is no principled limit to the number of adverbials that can occur here, so it would be quite wrong to extend rule (10b) to (14).

(14) VP → VG (AdvP) (AdvP) (AdvP)

Any limitation on the number of adverbials is due to performance factors, not to grammatical rule. We must allow, then, for an indefinite number of adverbials to be present. PS-rules allow for two possible ways of achieving this. One is to use Kleene star:

(15) VP → VG AdvP*

The other is to make use of a recursive rule, where the symbol on the left-hand side also appears on the right:

(16) VG → VG AdvP

These would respectively assign the structures (17a) and (17b) to (13c).

(17) a. [tree diagram (Kleene star): [VP [VG knocked on the door] [AdvP intentionally] [AdvP twice]]]
     b. [tree diagram (recursive): [VP [VG [VG knocked on the door] [AdvP intentionally]] [AdvP twice]]]

Both approaches seem to 'work', so how can we choose between them? First, note that there is a definite semantic difference between (13c) and (13d): (13c) involves two instances of intentional knocking, while (13d) involves one intentional instance of knocking twice. A structure such as (17b) provides a better account of this difference in meaning, since each adverbial can be interpreted as modifying the VG it is a sister of (which after all is what happens when a VG is accompanied by a single adverbial). We are therefore assuming (i) a congruence between syntactic structure and semantic interpretation (see 2.6 below), and (ii) a principle whereby modifiers semantically modify their sisters.

Secondly, we can note the distribution of the pro-form do so, as in (18).

(18) a. John put his coat on the chair, and Bob did so too.
     b. *John put his coat on the chair, and Bob did so on the sofa.
     c. John knocked on the door intentionally twice, and Bob did so too.
     d. John knocked on the door intentionally twice, and Bob did so once.
     e. John knocked on the door intentionally twice, and Bob did so by mistake.

The contrast between (18a, b) suggests that do so cannot be followed by an argument of the verb; i.e. its minimal antecedent is an entire VG. (18c, d, e) show that different antecedents are possible: knocked on the door intentionally twice in (18c), knocked on the door intentionally in (18d), and knocked on the door in (18e). In terms of (17b), we can say that the antecedent of do so can be any VG or VP; but no such simple statement suffices if we adopt (17a). It is particularly (18d) that represents a problem, since knocked on the door intentionally is not a constituent in (17a), and there is good reason to assume that an antecedent of a pro-form must be a constituent. We thus have further reason for adopting the recursive analysis, as seen in (16) and (17b).

We can now return to the analysis of NPs. A head noun can be followed by argument and modifier PPs:
(19) a. a student of physics
     b. a student with long hair
     c. a student of physics with long hair
     d. *a student with long hair of physics

(19d) suggests that a modifier PP cannot precede an argument inside an NP. Given our earlier remarks about arguments being sisters of a lexical head, we would expect a complement PP to be a sister of N, and an adjunct PP to be a sister of NG:

(20) a. NP → Art NG (PP)
     b. NG → N (PP)

These rules immediately account for the ordering properties of arguments and modifiers. It is of course possible to have more than one modifier PP:

(21) a student from London with long hair

Again, we have the problem of how to account for this, whether to use Kleene star or recursion:

(22) a. NP → Art NG PP*
     b. NG → NG PP

Before endeavouring to answer this, however, we should also note the presence of modifying adjectives before the head noun:

(23) a. a tall student from London
     b. an alleged English baron
     c. an English alleged baron
     d. an alleged baron from England

Now, (23b, c) have different meanings: someone who is alleged to be an English baron vs. someone who is English and alleged to be a baron. And (23d) is ambiguous between these readings. The rules in (24) would be rival analyses.

(24) a. NP → Art AP* NG PP*
     b. NG → AP NG

If we continue to accept the principle enunciated above, that a modifier is to be taken as modifying the constituent it is a sister of, we would need to posit the bracketings shown in (25) for (23b-d) (leaving aside for the moment the question of the nature of the constituents):

(25) a. an [alleged [English baron]]
     b. an [English [alleged baron]]
     c. an [alleged [baron from England]]
     d. an [[alleged baron] from England]

There are two analyses of (23d), each corresponding to a different interpretation. This evidence, then, would suggest a recursive analysis, which would be needed in
order to have the appropriate constituents, viz. (26):2

(26) a. NP → Art NG
     b. NG → NG PP
     c. NG → AP NG
     d. NG → N (PP)

2. We no longer need to have PP as a daughter of NP: compare (26a) with (24a).

The structures in (27) would be assigned to some of our crucial examples.

(27) a. [tree diagram: [NP [Art an] [NG [AP alleged] [NG [AP English] [NG [N baron]]]]]]
     b., c. [tree diagrams: the two structures assigned to an alleged baron from England, corresponding to the bracketings in (25c, d)]

We should add that the semantic differences with 'stacked' modifiers do not just arise with adjectives like alleged. There are also contrasts between a famous boring book and a boring famous book, or between a large small firm and a small large firm (see also Greenbaum & Quirk 1990: pp. 388–9).

A further argument can be constructed on the basis of the distribution of the pro-form one (though this is unfortunately somewhat vitiated by the existence of dialectal differences). Consider the examples in (28).

(28) a. John saw a tall girl, and I saw one too.
     b. John was tricked by an alleged English baron, and I was tricked by a real one.
     c. John was tricked by an English alleged baron, and I was tricked by a French one.
     d. John was tricked by an alleged baron from England, and I was tricked by a real one.
     e. John was tricked by an alleged baron from England, and I was tricked by one from France.

In (28a), the antecedent of one is the whole NP a tall girl. In the other examples, however, the antecedent is less than a full NP, the precise antecedents being: in (28b), English baron; in (28c), alleged baron; in (28d), baron from England; and in (28e), alleged baron. On the assumption that only a constituent can function as an antecedent, we need structures in which all of these sequences are constituents. In other words, for an alleged English baron, we need (27a) and not, for instance, (29).
(29) [tree diagram (flat, non-recursive): an alleged English baron with alleged and English as sisters rather than stacked, cf. (24)]

We can state that one can function as a pro-form for either NP or NG.

Before moving on to another topic, we can briefly consider whether the recursive analysis is correct for all languages. The emphasis of this text is on English, but we shall sometimes look at other languages, to point up similarities or contrasts with English. Gil (1983, 1987) notes the non-synonymy of a fake old coin and an old fake coin, and argues that in Hebrew the semantic distinctions do not hold, and that the examples in (30) mean 'a coin that is old and fake'.

(30) a. matbea yašan mezuyof
        COIN OLD FAKE
     b. matbea mezuyof yašan
        COIN FAKE OLD

In Japanese, the contrast between small powerful engine and powerful small engine does not hold, the phrases in (31)3 both meaning 'an engine that is small and powerful'.

(31) a. tiisai tuyoi enzin
        SMALL POWERFUL ENGINE
     b. tuyoi tiisai enzin
        POWERFUL SMALL ENGINE

In Hebrew and Japanese, then, a structure more like (29) might be appropriate (with appropriate changes of word order in the Hebrew case). So there is some evidence that the recursive analysis is motivated only for certain languages. It may also be the case that flat structures are sometimes appropriate for English. For example, (32) is understood as 'a car that is small and powerful', with both adjectives modifying the head noun.4

(32) a small, powerful car

3. The misprint in Gil (1987) is corrected here.
4. On the other hand, it may be that (32) involves coordination of two adjectives with no explicit coordinating conjunction.
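How much work constituency is doing in these arguments can be seen by coding them up. The Python sketch below is our own illustration (the tree encoding and function names are invented): it builds the recursive structure (17b) as nested tuples and lists every VG or VP constituent, which on the analysis in the text are exactly the possible do so antecedents in (18c-e):

```python
# Build (17b): VP -> VG AdvP, VG -> VG AdvP (the recursive analysis).
# Trees are (label, children...) tuples; leaves are strings.
vg0 = ("VG", "knocked on the door")
vg1 = ("VG", vg0, ("AdvP", "intentionally"))
vp  = ("VP", vg1, ("AdvP", "twice"))

def yield_of(tree):
    """The string of words a (sub)tree dominates."""
    if isinstance(tree, str):
        return tree
    return " ".join(yield_of(child) for child in tree[1:])

def constituents(tree, labels):
    """All subtrees whose label is in `labels`."""
    if isinstance(tree, str):
        return []
    found = [tree] if tree[0] in labels else []
    for child in tree[1:]:
        found += constituents(child, labels)
    return found

# Candidate antecedents of `do so`: every VG or VP constituent.
for c in constituents(vp, {"VG", "VP"}):
    print(yield_of(c))
# knocked on the door intentionally twice
# knocked on the door intentionally
# knocked on the door
```

Under the flat structure (17a), knocked on the door intentionally is not dominated by any single node, so no walk over constituents could return it; that is precisely the point made with (18d).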
The main point we have made in this section is the need for a recursive analysis of VPs and NPs, based on the rules given in (16) and (26). We shall be revising the details of these analyses in due course, but will be keeping the recursive backbone.

2.4 X-bar theory

We have seen that PS-rules provide a fairly powerful and flexible way of describing linguistic phenomena. As will emerge in the course of this text, however, they are in some respects insufficiently flexible, and do not always provide a natural means for capturing linguistic generalizations. But before considering these objections, we want to examine a sense in which PS-rules are too powerful, in that they enable linguists to write rules that would never be needed. The rules in (33) are perfectly respectable PS-rules (i.e. they are respectable when considered purely in the light of the definition of a PS-rule).

(33) a. VG → V N
     b. AP → V NP

Yet both are bizarre: (33a) is odd because one expects the sisters of a lexical category to be phrases, not lexical categories themselves; (33b) is odd because an adjectival phrase would be expected to contain an adjective. Such rules are not just inappropriate for (say) English; we can confidently predict that they will never be required for any natural language grammar. What is needed is a way of restricting the theory of PS-rules so that rules like (33) are simply impossible.

The main constraints on PS-rules form X-bar theory (the reason for this odd-sounding name will be explained shortly). There are many different variants of this (it is really a family of theories, rather than a single theory), and in some respects GPSG adopts a rather unrestrictive version of it. So we shall expound some of the main principles here, and leave the GPSG-specific variant till later.

One of the guiding principles of X-bar theory is that constituents above the lexical level (i.e. phrases and groups in our terminology so far) are to be seen as projections of lexical categories. By this is meant that the category of a phrase necessarily matches that of its head: e.g. a phrase built round a noun (containing its arguments, modifiers, etc.) is automatically a noun phrase, rather than just happening to be called an NP. Similar considerations would apply to VP, PP and AP. This emphasis on heads makes X-bar theory closer to dependency grammar, but it remains different in that it still makes use of phrases and so of constituents larger than the word. With the concept of projections, subtrees such as (34) (implied by (33b)) are excluded.

(34) [tree diagram: an AP node immediately dominating V and NP]
In effect, we are now claiming that phrases are endocentric (cf. the trees in 2.3).5 There may be more than one level of projection, e.g. NG and NP in our earlier discussion. One of these will always be the highest level; outside the NP, we are not dealing with a projection of N at all. This highest level is known as the maximal projection, a term that would apply to NP, PP and AP.6 We can now add a further constraint, to the effect that categories that are not part of the projection line (the set of nodes on the path from the maximal projection to the lexical head) of a phrase must be maximal projections. (33a) is no longer permitted, whereas (35) is fine.

(35) VG → V NP

Tree (27b) above illustrates this point: one projection line runs NP—NG—NG—NG—N; the other maximal projections that are sisters of these nodes are PP and AP. There is also a projection path AP—Adj.7

5. Note that we are using "head" here in a rather loose sense: a head need not be a lexical category, e.g. an NG can be the head of an NP.
6. There is a problem concerning the projections of V, viz. whether these extend beyond VP to S; we shall leave this aside till we have encountered more ideas specific to GPSG (see 4.3 below); so most of our remarks here will concern NPs.
7. The status of Art will be tackled shortly.

We have now divided notions like 'noun phrase' into two parts, one stating the kind of constituent (noun vs. verb, etc.) and one stating the hierarchical level (phrase vs. group or word). The standard terminology tacitly recognizes this, but it gets lost if we simply write "NP" in rules or trees. Instead, we can explicitly describe a noun phrase by means of two separate features, concerning type and level, and we might write this, say, as [noun, phrase]. Noun groups and nouns would be, respectively, [noun, group] and [noun, word] (note that we now acknowledge that 'noun' on its own names a word-level category). This use of features is not unique to syntax; cf. the application of semantic features, as in [human, male, adult] as a description of the meaning of man, or phonetic description using place and manner of articulation as features, so that [m] would be described as [bilabial, nasal].

The precise notation used for syntactic categories is not in fact crucial. Usually, hierarchical level is expressed numerically, with zero indicating the word level, and increasingly higher numbers indicating higher levels. We will take 2 as the highest number for hierarchical level, and maximal projections will have this value for their level feature. So N, NG and NP would be shown as [noun, 0], [noun, 1] and [noun, 2]; we will also use [N, 0] etc. These can be abbreviated as N0 (or just N), N1 and N2; other possibilities include using primes: N, N′, N′′. In the first incarnation of X-bar theory, these were written with bars above the category label (e.g. N̄, an N with an overbar), hence the name of the theory; but this was typographically awkward, and so is now rarely used, although linguists still refer to something 'having so-and-so number of bars'. It is unfortunate that different notations exist and are all in use;
the important point is to grasp the idea of the distinction between type and level of linguistic unit and to appreciate the statement of these as distinct features.8 Tree (27b), rewritten using the N2 (etc.) notation, would be as in (36).

(36) [tree diagram: (27b) relabelled in the numeric notation, with the projection line N2—N1—N1—N1—N0 in place of NP—NG—NG—NG—N]

Using features in this way does not alter the interpretation of PS-rules at all, and we could write (37a, b), for instance, instead of (26a, b). The notation in (37c) (instead of (37a)) is also often used.

(37) a. [noun, 2] → Art [noun, 1]
     b. [noun, 1] → [noun, 1] [prep, 2]
     c. N2 → Art N1

So far we have imposed constraints on possible structures (e.g. excluding (34)). We can now extend these ideas a little by proposing a schema for what a permissible PS-rule may look like. One requirement is that the right-hand side of the rule must contain a category of the same type as is on the left-hand side, but at the same or the next-lower level (note that in (36), N1 may have N1 or N0 as a daughter). If we use our [noun, 2]-type notation, we could introduce variables and write [X, n] for an item of type X and level n. Then we could state that PS-rules must be as in (38).

(38) [X, n] → … [X, m] … (where m = n or m = n−1)

(38) encodes the requirement just stated. The dots on either side of [X, m] stand for nodes not on the projection path; more will be said about this shortly. The rules in (37) conform to the schema in (38), but rule (33b) does not do so, because it would be stated in feature notation as (39).

(39) [adj, 2] → [verb, 0] [noun, 2]

8. As will be seen later, GPSG makes massive use of syntactic features, so it is as well to become familiar with them now.
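Schema (38) is mechanical enough to be checked automatically. In the Python sketch below (our own illustration, with categories encoded as (type, level) pairs as in the text), a rule satisfies the head requirement of (38) only if some daughter matches the mother's type at the same or the next-lower level:

```python
# Categories are (type, level) pairs: NP = ("noun", 2), N = ("noun", 0), etc.
# A hypothetical checker for the head requirement of schema (38).

def satisfies_38(mother, daughters):
    mtype, mlevel = mother
    return any(dtype == mtype and dlevel in (mlevel, mlevel - 1)
               for (dtype, dlevel) in daughters)

# (37a) N2 -> Art N1: the head N1 is one level below N2.
print(satisfies_38(("noun", 2), [("art", 0), ("noun", 1)]))   # True
# (37b) N1 -> N1 P2: the head is at the same level (recursive modification).
print(satisfies_38(("noun", 1), [("noun", 1), ("prep", 2)]))  # True
# (39) A2 -> V0 N2: no adjectival head, so the rule is excluded.
print(satisfies_38(("adj", 2), [("verb", 0), ("noun", 2)]))   # False
```

The refinements introduced next would add one further condition: every daughter off the projection path must be a maximal projection or a minor category.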
What about our proposal that nodes not on the projection path must be maximal projections? We can incorporate this into (38) as in (40).

(40) [X, n] → [Y, max]* [X, m] [Y, max]* (where m = n or m = n−1)

Here, [Y, max] means a maximal projection of any type,9 and the use of Kleene star means that any number of maximal projections may occur. So (40) states that sequences of maximal projections may precede or follow the [X, m] node. Rule (33a) does not conform to (40), since it would be stated as (41), where [noun, 0] is not a maximal projection.

(41) [verb, 1] → [verb, 0] [noun, 0]

9. Since we are taking 2 to be the indicator of a maximal projection, we could write [Y, 2] in place of [Y, max]. However, we deliberately wish to state the restriction so that it applies also to approaches that do not accept this restriction, hence (40) is formulated in less precise terms. It may be helpful to note that constituents labelled 'phrases' (NP, PP, etc.) and clausal constituents (S) are maximal projections.

We have not yet accounted for the presence of Art in (37a). The usual solution is to claim that certain word classes do not project to higher-level phrases, and that Art is one of these. Complementizers, such as that, might be another example. A common name for such classes is minor categories. So a weaker, but still restrictive, claim would be that nodes off the projection path must be either maximal projections or minor categories. Using the vertical bar to show alternatives, we could recast (40) in its final form as (42).

(42) [X, n] → {[Y, max] | minor}* [X, m] {[Y, max] | minor}* (where m = n or m = n−1)

This looks more complicated than it really is, as it in fact represents a formalization of the basic ideas of X-bar theory as developed in this section: the intuitive idea behind it is really quite simple, viz. that categories contain heads of the same type and same or lower level, and that other daughters are either maximal projections or minor categories.10

10. We shall have more to say about heads in 5.3 below.

2.5 Alternative versions of X-bar theory

As we pointed out at the beginning of 2.4, X-bar theory is a family of theories, rather than a single theory. Accordingly, we shall now say a little about alternative versions; all, however, maintain the intuitive idea behind X-bar theory just pointed out. The best worked-out account is that of Jackendoff (1977); he proposes that all phrases be analyzed in a uniform way, so that APs and NPs (say) have very similar internal structures. Among other things, this means that the level of maximal projection is the same for all classes, something that is not logically necessary, though
it was silently assumed above.11 But Jackendoff claims that this maximal level is 3, not 2, so that he would render NP as N3, not N2. In addition, he adopts the Kleene star rather than the recursive analysis of modifiers (cf. 2.3). This is partly because he enforces a rather more restrictive schema, requiring that the level of the head be one less than that of the right-hand side category. For clarity's sake, we shall modify (38) in this direction, and leave it to the reader to modify (42):

(43) [X, n] → … [X, m] … (where m = n−1)

Jackendoff's rules for NP would thus be as in (44) (slightly simplified here).12

(44) a. N3 → (Art) N2
     b. N2 → AP* N1 PP*
     c. N1 → N0 (PP)

Structures such as (45) would be assigned.

(45) [tree diagram: an NP such as an alleged English baron under rules (44): [N3 [Art an] [N2 [AP alleged] [AP English] [N1 [N0 baron]]]]]

Note that this implies rather more depth than tree (29), but it is subject to just the same objections as the non-recursive analysis sketched there, since it neither allows direct semantic interpretation nor provides appropriate constituency for pro-form distribution or interpretation. The recursive structure (27a) is still preferred.

11. It is not in fact a universal assumption, for Emonds (1985: p. 24) sees V as allowing higher-level projections than other categories.
12. Here, and elsewhere, we shall use non-X-bar categories such as AP when it is more convenient not to worry about the X-bar status of the constituent in question.

Jackendoff's is not the only alternative to the view adopted here. Thus, Napoli (1989) argues that all PS-rules conform to (43) (though she does not use this notation). Napoli not only rejects the recursive analysis of modifiers, but also claims that there is no structural distinction between arguments and modifiers (cf. 2.3): all will be sisters of the zero-level node. With regard to some of the data we made use of earlier, she writes:
  the interpretation of pro-forms (such as one) has never convincingly been shown to be related to syntactic constituency. (Napoli 1989: p. 4)

She points out (p. 181) that, in a language with post-nominal adjectives such as Italian, it is common for adjectives (which are modifiers) to intervene between head noun and argument PPs:

(46) la distruzione brutale di Troia
     THE DESTRUCTION BRUTAL OF TROY
     'the brutal destruction of Troy'

This contrasts with the arguments-before-modifiers order in English (cf. (19c, d)). Napoli notes that her approach to X-bar theory is more restrictive than its rivals, and is therefore to be preferred, other things being equal. The aim here, though, is just to show that alternatives to the view adopted here are perfectly respectable. Napoli, then, would assign the structure in (47) to (19c).

(47) [tree diagram: a student of physics with long hair, with both PPs as sisters of the zero-level noun]

It remains to be added that Jackendoff found it necessary to recognize certain exceptions to his general schema for PS-rules. We shall see later that GPSG in fact incorporates a rather weak version of the theory that happily allows for all manner of exceptions. The GPSG variant is, however, closest to the approach taken here in 2.4, with (say) N2 as the equivalent of NP, and with the recursive analysis of modifiers. But most varieties of generative grammar include some version of X-bar theory by way of constraining PS-rules, so it is useful to know what its motivations and general principles are.

2.6 On semantics

The focus of this book is on syntax, not semantics, but we can by no means ignore semantic matters altogether. Consequently, we shall now give a short introduction to the approach that GPSG takes to sentence meaning. GPSG makes use of a kind of formal semantics known as model-theoretic semantics (also known as Montague semantics, because of the important contribution made by the philosopher Richard
Montague). This is also used by a number of other grammatical approaches, and some understanding of it is useful. One of the aims of model-theoretic semantics is to link up language and the world that we use language to talk about, and it does this by stating the conditions under which a sentence would be true. Suppose, for instance, we rewrite the simple sentence (48a) the way it would be put in predicate logic, (48b).

(48) a. Smith walks.
     b. (walk (Smith))

Here, walk is a predicate, specifically a one-place predicate, because it occurs with a single argument; Smith is a term; and (48b) is a formula, created by placing the term in parentheses after the predicate. Now we can ask what the meaning of the formula is, i.e. what its interpretation is, what the terms and predicates (and the whole formula) denote. Model-theoretic semantics provides the following answer: the term denotes a particular individual or entity, the predicate denotes a set of individuals, and the formula denotes a truth value. A formula containing a one-place predicate is true iff ('if and only if') the individual denoted by the term is a member of the set of individuals denoted by the predicate; otherwise, it is false. In this case, (48b) is true iff Smith is among the set of individuals who walk.

The intention here is to provide a formal account of what is involved in a sentence being true or false. Note what is not being addressed on this account: nothing is being said about how walking differs from running, or how we decide whether something is true (this is not a problem for linguistics). All we are doing is to supply an interpretation for simple formulae consisting of a one-place predicate plus a term. The relation between simple intransitive clauses and their predicate logic counterparts is fairly straightforward; it remains so when the verb is a copula that plays no logical role: the formula corresponding to (49a) is just (49b), which is true iff Jones is among the married entities.

(49) a. Jones is married.
     b. (married (Jones))

One advantage in explicating meaning in terms of sets and set membership is that we can exploit all the mathematical work on set theory. Transitive verbs correspond to two-place predicates, i.e. they occur with two terms or arguments. So (50a) is represented as (50b).

(50) a. Smith helps Jones.
     b. (help (Smith, Jones))

As one-place predicates denote sets of individuals, two-place predicates naturally denote sets of ordered pairs of individuals, and a formula containing such a predicate is true iff the pair of terms in the formula is a member of the set of pairs denoted by the predicate. In this case, (50b) is true iff (Smith, Jones) is in the set of pairs of helpers-and-helped.
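A small model makes the set-theoretic account concrete. In the Python sketch below (our own illustration; the domain and the extensions of the predicates are invented for the example), a one-place predicate denotes a set of entities, a two-place predicate denotes a set of ordered pairs, and truth is simply set membership:

```python
# A toy model: denotations for some predicates over a domain of entities.
walk    = {"Smith", "Brown"}       # one-place: a set of entities
married = {"Jones"}
help_   = {("Smith", "Jones")}     # two-place: a set of ordered pairs

def holds1(pred, term):
    """(pred (term)) is true iff the entity is in the predicate's denotation."""
    return term in pred

def holds2(pred, term1, term2):
    """(pred (term1, term2)) is true iff the ordered pair is in the denotation."""
    return (term1, term2) in pred

print(holds1(walk, "Smith"))            # True:  (walk (Smith)), cf. (48b)
print(holds1(married, "Jones"))         # True:  (married (Jones)), cf. (49b)
print(holds2(help_, "Smith", "Jones"))  # True:  (help (Smith, Jones)), cf. (50b)
print(holds2(help_, "Jones", "Smith"))  # False: the pair is ordered
```

Nothing here says what walking is; as in the text, the model only fixes which formulae come out true.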
There is a close link, then, between the interpretation of an expression and the number of arguments it takes (cf. 2.3 on valency and arguments). In fact, the idea of arguments can be employed consistently and predicates can be viewed as functions. A one-place predicate, for instance, can be described as a function mapping entities to truth values: the predicate takes an entity as its argument and returns a truth value as the result. This way of looking at things enables us to view two-place predicates in a slightly different way. Instead of seeing (say) help as a function that maps two entities directly to a truth value, it can be regarded as a function that maps a single entity into a one-place predicate that is similar to walk. Just as walk is linked to a term to form a formula, so we could say that (51) (to use a slightly unusual notation) also needs a term in order to become a well-formed formula.
(51) (help (_, Jones))
So a two-place predicate is a function (or functor) that maps entities into ‘functions from entities to truth-values’. All multi-place functions can be interpreted in this way, via a series of one-place functions. The relevance of all this to GPSG is that syntactic trees are subject to interpretation by means of this process of function application: the interpretations of the parts are combined by applying a function to its arguments to yield the interpretation of the whole construction, which may then be combined with other parts of the sentence. GPSG thus claims a very close relation between syntax and semantics: the structure provided by syntactic rules is the one interpreted by semantic rules. These semantic rules output logical representations, the vocabulary of which is not strictly lexical items of English, but what are called constants. These are usually shown just by adding a prime to words of English, e.g. walk′ is the logical constant corresponding to the English word walk. So our original example (48b) would be shown as (52).
(52) (walk′ (Smith′))
Two-place predicates are usually shown with the subject and object in separate brackets, and the object closest to the verb. So (53) corresponds to (50b).
(53) ((help′ (Jones′)) (Smith′))
We can now examine how this is related to trees, and how the semantic representation is built up compositionally from the syntactic constituents. To take a simple and simplified example, totally ignoring issues such as tense, assume that each word in (54) below is given an interpretation as a constant, viz. Smith′, help′ and Jones′. The verb and object combine to give (55), i.e. the function denoted by the verb is applied to the entity denoted by the object.13
(55) (help′ (Jones′))
Then (55) is applied to the subject to give the interpretation of the whole sentence, viz. (53).
13. Note that we no longer need to posit an odd formula such as (51).
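This ‘one argument at a time’ view can likewise be sketched in Python (again our own illustration, with an invented model): applying help′ to the object yields a one-place predicate over subjects, mirroring (53):

    # The set of (helper, helped) pairs that help denotes (invented model).
    HELP_PAIRS = {("Smith", "Jones")}

    def help_(obj):
        # A two-place predicate, taken one argument at a time: applied to
        # the object, it returns a one-place predicate over subjects.
        return lambda subj: (subj, obj) in HELP_PAIRS

    # ((help' (Jones')) (Smith')): the verb applies to the object first,
    # and the resulting function then applies to the subject.
    print(help_("Jones")("Smith"))  # True
    print(help_("Smith")("Jones"))  # False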
(54) (tree not reproduced: Smith helps Jones)
For the most part, we shall in what follows leave aside questions of semantics, though the reader should be aware that there are semantic principles that are operating to bridge the gap between syntactic structure and model-theoretic semantics. We shall introduce points concerning semantic interpretation only where it seems necessary.
2.7 Further reading
The most useful source for this chapter is Radford (1988); constituency and phrase-markers are explained in Chapters 2 and 3. Another work which contains a lot of material on phrase-markers is Baker (1989). Borsley (1991: Ch. 2) also deals with constituency and the existence of intermediate categories such as NG (which he terms N′). Some of the arguments for the recursive analysis of NPs and VPs are taken from Andrews (1983). On the internal structure of NPs, see Radford (1988: Ch. 4). On VP structure, see Borsley (1991: pp. 61–3). On X-bar theory, see Radford (1988: pp. 253–64). There is also a good short account in Cook (1988: pp. 94–102); you can ignore what he says about ‘INFL’. Kornai & Pullum (1990) is a study of the various claims of X-bar theory and of how restrictive it really is; this is relatively advanced, but merits reading at some stage. For reasonably comprehensible introductory accounts of model-theoretic semantics, see Martin (1987: Part II), Bach (1989), and (at a much fuller but still introductory level) Cann (1993).
3 The ID/LP format for grammars
We are now going to examine some further shortcomings of PS-rules, and to propose a slightly different format for grammatical rules. As a preliminary, it will be useful to state again the rules established in the previous chapter, this time in a more consistent format:
(1) a. S → N2 V2
b. V2 → V1 (AdvP)
c. V1 → V1 AdvP
d. V1 → V0 (N2) (P2)
e. N2 → Art N1
f. N1 → N1 P2
g. N1 → A2 N1
h. N1 → N0 (P2)
i. P2 → P0 N2
j. A2 → A0 (P2)
Note that not everything has been put into the X-bar form. S, for instance, is completely outside the system, but this will be remedied later (see 4.2). Rules (1b–c) allow for verb phrases containing modifier adverbs, plus objects and argument P2s. (1e–h) generate N2s with articles, modifier P2s and A2s, and argument P2s.
Rules (1i) and (1j)1 are included for the sake of completeness, although they do not conform to our X-bar schema for PS-rules. We shall look at the internal structure of AP and PP later (in 4.6 and 7.1).
3.1 Dominance and precedence
Let us consider a familiar PS-rule:
(2) S → N2 V2
This tells us two different kinds of information. First, it tells us about immediate dominance: that S can immediately dominate N2 and V2. Secondly, it tells us about linear precedence: that the N2 precedes the V2. GPSG factors out these two different kinds of information, and employs distinct linguistic devices to encode them. Information about dominance is expressed in immediate dominance rules (ID-rules, for short), which simply give information about what the daughters of a node can be. Their format is identical to PS-rules, except that commas separate the daughters on the right-hand side. So the ID-rule corresponding to (2) would be (3a).
(3) a. S → N2, V2
This states that S can dominate N2 and V2, and tells us nothing about the relative order of the daughters. It could equally be written as (3b): this is in fact the same rule as (3a), whereas the PS-rule (3c) is very different from (2).
(3) b. S → V2, N2
c. S → V2 N2
Information about precedence is expressed in linear precedence statements (LP-statements, for short), which give information about the relative order of sisters. We shall use the ‘precedes’ symbol ‘<’ to code LP-statements as in (4).
(4) N2 < V2
This states that N2 must precede V2 when they are sisters in a tree; or, alternatively, that a V2 cannot precede an N2 when they are sisters. It does not state that N2 must immediately precede V2. We shall look more closely in 3.4 at how ID-rules and LP-statements interact to generate trees, but the intuitive idea should be fairly clear, for together (3a) and (4) specify the tree (5) as well-formed.
1. Rule (1j) allows for APs such as similar to his brother.
(5) (tree not reproduced: S dominating N2 and V2, in that order)
The tree in (6) is ill-formed, as it violates (4).
(6) (tree not reproduced: S dominating V2 and N2, in that order)
It should be stressed that LP-statements apply only to sisters. In (7), the V2 cooked the meal precedes the N2 Mary; but these phrases are not sisters, so (4) does not apply to them.
(7) John cooked the meal and Mary ate it.
A grammar making use of ID-rules and LP-statements, rather than PS-rules, is said to be in ID/LP format. We should emphasize at this point that one does not convert a grammar to ID/LP format by taking each rule individually and rewriting it as an ID-rule and an LP-statement. This method is incorrect for two reasons: (a) one should look for ordering generalizations that hold over more than just single PS-rules; and (b) free order may mean that no LP-statement is needed at all. When converting a grammar to ID/LP format, one has to consider the whole grammar, and not perform the conversion rule-by-rule.
3.2 Advantages of ID/LP format
It might appear that adopting ID/LP format is rather pointless, for we seem to have replaced one statement by two in the case discussed in the last section. However, we are now going to see what the advantages are. The LP-statement (4) does not just apply when N2 and V2 are the only daughters of S, for LP-statements are not linked to specific ID-rules. Rather, they apply throughout the whole grammar: (4) says that N2 always precedes V2 when they are sisters. So it will hold in other cases too. For instance, consider the examples in (8).
(8) a. Has Jim eaten the sandwich? b. Jim told Bob to leave.
For illustrative purposes, we shall assume that these have the structures shown in (9).
(9) (trees not reproduced)
In each case, we could argue about the structure, but later we shall see that the trees given in (9) are well motivated. It will be seen that in (9a) N2 precedes V2, even though they also have a V0 as sister. And in (9b), in addition to N2 preceding V2 as daughters of S, N2 precedes V2 when both are daughters of V1. These orderings are in keeping with (4). If we used PS-rules, we would need to state this fact again and again in separate rules, but ID/LP format avoids this redundancy. This, then, is one of the advantages of ID/LP: it enables the linguist to formulate generalizations that extend over several structures, which PS-rules are generally unable to achieve.
We can further illustrate this point, and show an advantage of our feature-based approach to categories, by reconsidering some of our PS-rules from (1). We repeat in (10) the rules which introduced lexical categories (V0, etc.) on the right.
(10) a. V1 → V0 (N2) (P2)
b. N1 → N0 (P2)
c. P2 → P0 N2
d. A2 → A0 (P2)
In each case, the zero-level category is the first of the daughters. It appears to be a valid generalization for English that zero-level categories precede all their sisters. If one just uses PS-rules, there is no way of capturing this. But it can be captured by a grammar in ID/LP format provided one exploits the use of features to encode the notion ‘zero-level category’. If a category is broken down into a feature for type and one for level, then (see 2.4) we can use the notation [X, 0] to mean a category of any type and of level zero. What we want to say in an LP-statement is that such a category must precede anything, and we can do this as in (11), in which Y stands for any category.
(11) [X, 0] < Y
So the rules in (10) can be rewritten as ID-rules:
(12) a. V1 → V0, (N2), (P2)
b. N1 → N0, (P2)
c. P2 → P0, N2
d. A2 → A0, (P2)
Including an LP-statement (12e), we can use (11) and (12) to derive the effect of (10), but now capturing a generalization, which is always better than stating lots of individual cases.
(12) e. N2 < P2
The LP-statement (11) is respected by the trees in (9). It seems only fair, however, to mention a possible counter-example. Radford (1988: pp. 196–207) would assign the structure in (13) to physics student.
(13) (tree not reproduced)
Here, a zero-level category follows its sister, and (11) is not adhered to. But most linguists would probably not assign the tree in (13) to the string physics student, for they would not regard it as being a syntactic construct at all. Instead, it would be seen as a morphological object, a compound that, though it is orthographically two words, is linguistically a word that should not be analyzed syntactically. More specifically, it is a noun, not an N1, and it might be analyzed as in (14) (see Selkirk 1982: p. 29).
(14) (tree not reproduced: the compound analyzed as a noun dominating the two nouns physics and student)
It does not matter whether we regard morphological trees as being extensions ‘downwards’ of syntactic trees, or as constituting a separate structure altogether. For the domain of (11) is syntactic structure, and (14) is not a syntactic object.2 This view of words as syntactic atoms is developed by Di Sciullo & Williams (1987), who point out a number of differences between syntactic and morphological objects.
2. It should be added that there is very little work on word-structure within the GPSG framework.
We cannot go into this matter here,3 and we shall just accept that (14) is on the right lines, and that examples like physics student do not invalidate (11). In fact, Radford has to acknowledge that pre-nominal complements are in some ways rather different from post-nominal ones.4
So far we have seen that ID/LP format enables the linguist to capture generalizations that hold over the whole grammar, whereas PS-rules force one to state ordering restrictions afresh in each rule. Now we are going to look at another advantage, viz. what happens when there are no (or very few) ordering restrictions. Consider again the ID-rule (3). Suppose a grammar contained no LP-statement about the relative order of N2 and V2; this would mean that either order would be allowed, and both of the trees in (5) and (6) seen above would be generated. In a PS-rule system, we would need two separate rules for (5) and (6), namely (2) and (3c) above. But only a single ID-rule is required (3b). ID/LP format, then, simplifies the description of ‘free word order’, as it is often called, by allowing variant orders to be generated by the simple device of not using LP-statements to exclude any orders.
It remains to be shown that phenomena like the one considered in the last paragraph really do exist in natural language. It is actually rather hard to illustrate this from English, but one possible case (cf. (15)) is provided by the fact that certain manner adverbs can occur either before or after a verbal group (cf. further 7.6). The ID-rule (16) suffices.
(15) a. John intentionally crashed the car. b. John crashed the car intentionally.
(16) V2 → V1, AdvP
If no LP-statement says anything about the relative order of V1 and AdvP, both orders will be permitted. However, other languages do exhibit fewer restrictions on constituent order, and so allow a better demonstration of the power of ID/LP in this regard.5 For Greek, Horrocks (1983) proposes the LP-statement (17) (reformulated here for the sake of consistency with our current notation).6
(17) [X, 0] < Y < S
A zero-level category always comes first, and an embedded S is always the last in a sequence of sisters; other types of complement are positionally free. This allows for all sorts of order, including those illustrated in (18).
3. It is by no means uncontroversial: see Liberman & Sproat (1992).
4. And even if the Di Sciullo/Williams approach is rejected, the revised version of (11) that we shall eventually adopt in 4.4 has no bearing on (14), since our new version will say nothing about the relative order of two N0 nodes.
5. We shall not give actual sentences here, as this would complicate matters far too much.
6. Note that it is perfectly possible to have more than two items mentioned in an LP-statement.
(18) a. V0 - N2
b. V0 - S
c. V0 - N2 - P2 - S
d. V0 - P2 - N2 - S
The two possibilities for N2 and P2 (18c, d) are allowed by (17). Our other example is from Makua, spoken in Tanzania (Gazdar et al. 1985: p. 48). Within verb phrases, many orders are possible (we have not given all the possibilities here):
(19) a. V0 - S
b. V0 - N2
c. N2 - V0
d. V0 - N2 - P2
e. P2 - N2 - V0
f. V0 - N2 - S
g. V0 - S - N2
h. N2 - V0 - S
The only constraint that holds is that the verb must precede the embedded S, and this is easily statable:
(20) V0 < S
In addition to enabling generalizations to be stated, then, ID/LP format makes it easier to describe languages with greater freedom of constituent order. More generally, it can be said that this format increases the universality of grammatical rules by enabling the obvious superficial differences between languages in terms of constituent order to be placed in a separate module (the LP-statements). ID-rules for languages with different ordering principles will therefore look far more similar to each other than if ordinary PS-rules were used. We should just add that some of our LP-statements for English will be modified as we proceed, but all in the cause of greater generality.
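To see how little machinery this needs, here is a Python sketch (our own illustration; encoding categories as plain strings is a simplification): a single set of daughters plus statement (20) alone yields exactly the verb-before-S orders:

    from itertools import permutations

    # Statement (20): V0 < S. No other ordering constraint for Makua VPs.
    LP_STATEMENTS = [("V0", "S")]

    def permitted(order):
        # An order of sisters is permitted iff no LP-statement is violated,
        # i.e. no pair of sisters appears in the prohibited order.
        return all(order.index(a) < order.index(b)
                   for a, b in LP_STATEMENTS
                   if a in order and b in order)

    # The daughters of one ID-rule, e.g. V2 -> V0, N2, S:
    for order in permutations(["V0", "N2", "S"]):
        if permitted(order):
            print(" - ".join(order))
    # Prints V0-N2-S, V0-S-N2 and N2-V0-S (cf. (19f-h)), but no order
    # in which the embedded S precedes the verb.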
3.3 Universal ordering principles
The basic idea behind ID/LP, that of separating statements about constituency and dominance from statements about ordering, has quite a long history within generative grammar, a history we shall not recapitulate here. As an example, it has sometimes been argued within transformational grammar that deep structures (see 1.1) should be unordered and that only surface structures should be constrained by precedence statements (note that this implies that deep structures should be unordered trees). We shall return to other approaches to factoring out dominance and precedence after considering ways of capturing so-called universals of word order.
Since the work of Greenberg (1966), it has been clear that languages tend to group into certain classes with regard to word order. For instance, an OV language (i.e. where the object precedes the verb, such as Japanese) will be likely to have the head noun at the end of an NP, and to have postpositions rather than prepositions. These are not universals in the true sense, for there are exceptions (e.g. Persian is OV yet has prepositions), but they are definite statistical trends. Some work in this area has argued that there are two consistent language types, head-initial and head-final, with the word order properties shown in Table 1.
Table 1 Word order properties.
Head-initial          Head-final
VO                    OV
Prepositions          Postpositions
N-Adj                 Adj-N
N-Demonstrative       Demonstrative-N
V-Auxiliary verb      Auxiliary verb-V
Consistent languages of either type are in effect mirror-images of each other; see Smith (1978), who adds:
A basic strategy for rough translations of descriptive prose from Japanese to English is first to identify the subject of the sentence, then to move to the end of each clause [the verb] and work up. (p. 78)
The trouble is that many, if not most, languages are inconsistent (consider English, for instance)! There have been attempts to incorporate such tendencies in generative grammar, and in particular to employ X-bar theory to capture them. For example, Lightfoot (1979: pp. 50ff.) proposes the universal schema (21) for PS-rules.
(21) a. X2 → {Spec X1}
b. X1 → {Comp X0}
Symbols such as X2 are variables over category types (equivalent to our earlier [X, 2]). The curly brackets on the right-hand side indicate that the symbols here are unordered. ‘Spec’ stands for specifier (this would include English articles, for instance, though the exact category would depend on the category of the sister), while ‘Comp’ stands for a string of complements. The schemas in (21) are not actual rules (so these are not ID-rules); rather the idea is that the PS-rules in each language have to be instantiations of them, with consistent ordering across categories. For English, for instance (keeping Comp as a useful abbreviation), we have (22).
(22) a. N1 → N0 Comp
b. V1 → V0 Comp
c. P1 → P0 Comp
For Japanese, on the other hand, one might suggest (23).
(23) a. N1 → Comp N0
b. V1 → Comp V0
c. P1 → Comp P0
The rules in (23) would describe the fact that Japanese is OV, has postpositions, and has complements of nouns preceding the noun. Lightfoot allows for the conventions in (21) to be relative rather than absolute, so that it is possible for grammars to have rules that violate them, but only at a ‘cost’ (i.e. the resulting rules are marked and subject to historical change). On this interpretation, it is possible for a grammar to have the rules in (24) (without having (22a) or (23b) alongside them).
(24) a. N1 → Comp N0
b. V1 → V0 Comp
But such a language has a more complex grammar than one in keeping with the schema in (21). Equally, LP-statements for (24) would have to be more complex than for (22) or (23).
It may have been noticed that we have concentrated here on instantiations of (21b) rather than of (21a). Indeed, the hypothesis of consistent ordering of complements vis-à-vis the head is far better supported cross-linguistically than the comparable claim about specifiers, or modifiers (which Lightfoot ignores).7 In fact, Hawkins (1982) argues that X-bar theory fares rather poorly in capturing appropriate generalizations in the realm of word order. Note, however, that this is not necessarily a problem for GPSG, which captures ordering facts in LP-statements, not in conditions on X-bar-based PS-rules. ‘Consistent’ languages are easier to describe by means of general LP-statements, whereas inconsistent languages need more specific statements. It is reasonable to say that many languages are predominantly head-initial or head-final, and that this property is easily describable using features and LP-statements. But it is not clear that some of the more esoteric regularities noted by Hawkins should be captured by a generative grammar at all.
Now we can get back to other approaches to factoring out dominance and precedence information. Falk (1983) independently proposed an approach very similar to that adopted in GPSG. ‘O-rules’ (short for ordering rules) may impose an order between pairs of sisters (e.g. NP » PP); this restriction to pairwise ordering is not imposed on LP-statements, and it does not seem to be particularly advantageous. In addition, o-rules may prescribe particular positions for constituents; a constituent of category A could be ordered as in (25).
(25) a. A initial
b. A second
c. A final
7. For more on this point, see 4.4 below and the discussion of Dryer (1992) there.
These have the expected interpretation, e.g. (25a) means that A must be the leftmost sister. A rule such as (25b) might be useful in German, to state the requirement for a finite verb to be in second position.8 For instance, (26) could be written (using the notation adopted here) to state the generalization that in English zero-level categories precede all their sisters.
(26) [X, 0] initial
This approach makes it unnecessary to use variables for ‘any category’ in o-rules.
Current transformational grammar (known as Government-Binding theory, or GB) attempts to do away with PS-rules altogether, because of their lack of explanatory power; in the words of one practitioner:
rules are in general only stipulations (however mathematically rigorous and computationally implementable they may be). (Everett 1989: p. 339)
We shall return to this point later. Constituent order is seen as a parameter, i.e. a locus of diversity among languages and something that the child must fix for their specific language. Rather than suppose that the child acquires this information afresh in each construction, order is seen as a function of a single abstract property of the grammar, viz. the directionality of government. We cannot explain this fully here, but it can be likened to the traditional concept whereby one says (for instance) that verbs govern their objects. If lexical categories govern their complements, then the distinction between English and Japanese (cf. (22) and (23)) would be captured by saying that in English lexical categories govern to their right, and in Japanese to their left. Alternatively, we could say that English is a head-initial language, and Japanese a head-final language. The child would need only to learn the direction of government in their language, i.e. determine whether their language was head-initial or head-final. Presumably, comparable statements (though not formulated in terms of government) would account for the order of level-one ([X, 1]) categories and their modifiers. The details here are for present purposes less important than the fact that GB also recognizes the redundancy of stating order separately in every PS-rule and attempts to provide principles of order that apply across the whole grammar (though it is far less explicit than GPSG). Fodor & Crain (1990) claim that GPSG is at least as good as GB in explaining the kind of ordering generalizations emphasized by GB grammarians, but they do not discuss Greenberg-style universals, and the empirical basis of their paper is very slender.
8. Though the correct ordering statements for German depend on what one decides is the most appropriate constituent structure; e.g. the analysis in Uszkoreit (1987) would not use anything corresponding to (25b). See 11.4 below.
3.4 Generative capacity
In considering the use of features in 2.4, we noted that their use in no way alters the interpretation of PS-rules. Hence they do not affect the weak generative capacity of the grammar, which remains able to generate only context-free languages (henceforth, CFLs). But now that we have introduced ID/LP format in place of PS-rules, we should ask whether this innovation affects generative capacity. We have so far seen a number of cases in which a set of PS-rules (whether we explicitly stated them or not) could be converted into a grammar in ID/LP format. However, this is not always the case. Suppose we have the grammar (27) (for the sake of simplicity, we shall not use labels like N2 and will stick to NP, etc.).
(27) a. S → NP VP
b. NP → Art NG
c. VP → V (NP) (PP) (S)
d. PP → P NP
e. NG → N (S) (PP)
If one compares rules (27c) and (27e), it will emerge that it is not possible to make a valid LP-statement about the order of PP and S. PP precedes S when they are daughters of VP, but follows S when they are daughters of NG. LP-statements apply to daughters irrespective of their mothers, so do not permit the necessary statements here to be made. We can conclude, then, that not every context-free grammar (CFG) can be put into ID/LP format, so that the format is in some ways more restrictive than PS-rules. However, suppose we reformulated (27) so that S was no longer a daughter of VP, and we used a different label there instead, as in (28).
(28) a. S → NP VP
b. NP → Art NG
c. VP → V (NP) (PP) (Z)
d. PP → P NP
e. NG → N (S) (PP)
f. Z → S
(28) no longer contains the ordering contradiction inherent in (27), and can be reformulated in ID/LP terms:
(29) a. S → NP, VP
b. NP → Art, NG
c. VP → V, (NP), (PP), (Z)
d. PP → P, NP
e. NG → N, (S), (PP)
f. Z → S
g. NP < VP
h. Art < NG
i. V < others
j. N < others
k. NP < PP < Z
l. S < PP
But the question now arises of whether the grammar in (29) is linguistically motivated. If the category Z has been introduced simply in order to evade our LP-problem, then (29) is not a motivated grammar. Unless independent justification can be found for Z, the grammar (29) would have to be rejected, and we would have to conclude that the language being described does not have a motivated ID/LP grammar. Of course, what constitutes ‘linguistic motivation’ is an æsthetic, not a formal, issue.
We saw earlier that not every CFG can be put into ID/LP format. As a distinct observation, however, every CFL does have a grammar in ID/LP form. We shall not prove this here (for a proof, see Shieber 1984, pp. 145–6), but the example just given illustrates how this can be achieved. If new non-terminals can be invented at will, any CFG can be rewritten so that it generates the same language but lacks ordering contradictions and so can be put into ID/LP format. A CFG that can be put into ID/LP format is said to have the property of Exhaustive Constant Partial Ordering (ECPO, for short). The ordering is partial because not every pair of categories need be ordered with respect to each other, and it is constant because any ordering of sisters is maintained across all the rules of the grammar. It is an empirical claim of GPSG that all natural languages have motivated grammars with the ECPO property and can therefore be written in ID/LP format. But ECPO is a rather abstract property of grammars, and it is not possible to pronounce on it without formulating a reasonably complete grammar for the language in question.
We shall close this section with a few remarks on the formal interpretation of ID/LP. To do this we need the notion of a local tree, which is a tree of depth one, i.e. it consists of a root node and its daughters. We can now say:
A local tree T is admitted by an ID/LP grammar G if and only if T is consistent with some ID rule in G and every LP statement in G. (Gazdar et al. 1985: p. 46)
So, given our grammar so far for English, the local tree (30a) is among those admitted. This is consistent with the ID-rule (30b), and is consistent with every LP-statement (i.e. there is none requiring N2 to precede V0).
(30) a. (local tree not reproduced: V1 dominating V0 and N2, with V0 first)
b. V1 → V0, (N2), (P2)
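This definition is formal enough to be programmed directly. The following Python sketch (our own illustration, using atomic category symbols rather than the feature sets the theory actually employs) checks a local tree against a toy fragment of our grammar:

    from itertools import combinations

    # A toy fragment: each ID-rule is a mother plus a multiset of daughters.
    ID_RULES = [
        ("V1", ["V0"]),
        ("V1", ["V0", "N2"]),
        ("V1", ["V0", "N2", "P2"]),
    ]
    LP_STATEMENTS = [("N2", "P2")]

    def admitted(mother, daughters):
        # Admitted iff consistent with some ID-rule (same daughters,
        # order disregarded) and with every LP-statement.
        if not any(m == mother and sorted(ds) == sorted(daughters)
                   for m, ds in ID_RULES):
            return False
        return all((daughters[j], daughters[i]) not in LP_STATEMENTS
                   for i, j in combinations(range(len(daughters)), 2))

    print(admitted("V1", ["V0", "N2", "P2"]))  # True, cf. (30)
    print(admitted("V1", ["V0", "P2", "N2"]))  # False: violates N2 < P2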
3.5 Summary
We close this chapter by listing the grammar for English developed thus far, in ID/LP format:
(31) a. S → N2, V2
b. V2 → V1, (AdvP)
c. V1 → V1, AdvP
d. V1 → V0, (N2), (P2)
e. N2 → Art, N1
f. N1 → N1, P2
g. N1 → A2, N1
h. N1 → N0, (P2)
i. P2 → P0, N2
j. A2 → A0, (P2)
(32) a. N2 < V2
b. N2 < P2
c. [X, 0] < Y
d. Art < N1
e. A2 < N1
f. N1 < P2
It is noteworthy that this could be generalized a little more by collapsing (32b, f) as (33) (a nominal category of any level precedes P2); this would then overlap a little with (32c).
(33) [N, n] < P2
3.6 Further reading
On ID/LP format, see Sells (1985: pp. 84–6), Horrocks (1987: pp. 174–7), Borsley (1991: pp. 38–41), Gazdar & Pullum (1981: pp. 107–11) and Gazdar et al. (1985: pp. 44–50). Radford (1988: pp. 271–8) discusses the elimination of ordering from PS-rules in GB terms.
4 Features and subcategorization
In this chapter we continue to look at more specific aspects of GPSG that are intended to solve various shortcomings of PS-rules. We shall expand considerably the use of syntactic features, and also see how subcategorization phenomena are handled; it will emerge that these two topics are in fact closely related to each other. In some respects, however, we shall not be able to see the full advantages of feature-based syntax until at least the following chapter.
4.1 Syntactic features
So far we have made use of two syntactic features, one to show the type of a category (nominal, verbal, etc.) and one to show the level (word vs. group vs. phrase). So NP is now expressed as [noun, 2] (usually abbreviated as N2). This gives us the ability to refer to classes of categories, e.g. [X, 0] for ‘all zero-level categories’ and [N, n] for ‘all nominal categories’. Such notions are simply not expressible in a system using atomic categories such as NP, which do not consist of features. Linguists traditionally make informal use of features in describing syntactic objects, e.g. ‘third-person singular NP’ uses the features [3rd-person] and [singular], while ‘finite verb’ uses the feature [finite].
So what is discussed in this chapter is by no means as revolutionary as it may appear: the idea is just to develop a proper theory of features. We can immediately take a step in this direction by expressing features as pairs of attributes and values. This has in fact been implicit in our discussion so far: N has been a value of the attribute ‘type’, while 2 has been a value of the attribute ‘level’. It should be uncontroversial that ‘3rd’ is a value of the attribute ‘person’, and ‘singular’ a value of ‘number’. A common notation is to link attribute and value by an equals sign, as in [number=singular] or [level=2]. We shall return to notational issues a little later.
Part of a grammatical description of a language involves listing the syntactic categories posited. If features are used, then one has to state the attributes employed, and the possible values of each. From now on, we shall write alphabetic attributes and values in small capitals, and when specifying possible values will use a colon to separate an attribute from its set of values. So (1) means that SINGULAR and PLURAL are possible values of the attribute NUMBER.
(1) NUMBER: {SINGULAR, PLURAL}
In fact, we shall follow the GPSG literature, which, perhaps for arbitrary reasons, does not use an attribute NUMBER but instead an attribute PLURAL (shortened to PLU) with values ‘+’ and ‘−’ (2).
(2) PLU: {+, −}
Attributes with only ‘+’ and ‘−’ as values are known as Boolean features; they have the merit of being very convenient from a notational point of view. The attributes and values in (3) should need little explanation.
(3) a. CASE: {NOM, ACC}
b. PER: {1, 2, 3}
English pronouns can be in the NOMinative or ACCusative case (German, for instance, would require extra values for CASE). PER is short for ‘person’. So far we have used an attribute LEVEL, and it would be intuitively preferable to continue with this. But, because of the influence of X-bar theory, GPSG instead uses the attribute BAR, with the same interpretation:
(4) BAR: {0, 1, 2}
Nor does GPSG use a ‘type’ feature (e.g. TYPE=NOUN). Instead, notions like ‘noun’ and ‘verb’ are not treated as primitives but are broken down into two Boolean features:
(5) a. N: {+, −}
b. V: {+, −}
The analysis expressed in (5) goes back to transformational-based versions of X-bar theory, and GPSG makes relatively little use of the flexibility offered by (5). Most GPSG rules, etc. could (for English, anyway) be formulated without decomposing notions such as ‘noun’.
But there are a number of occasions when (5) is useful, so we shall employ it here. The four main types of category are analyzed as in (6).
(6) a. noun = [+N, −V]
b. adjective = [+N, +V]
c. preposition = [−N, −V]
d. verb = [−N, +V]
It is important not to be misled by the attribute names here: it is not being claimed that adjectives are both nouns and verbs: they are [+N, +V], which is quite different ([+N] does not mean ‘is a noun’). (6) does offer some flexibility; if we wish to make a grammatical statement that applies only to prepositions and verbs, we can say that it applies to items that are [−N]. For the most part, though, it seems best to leave justification of (6) until relevant phenomena are encountered.1
The final feature we shall deal with here is called VFORM, and it deals with the inflectional forms of verbs:
(7) VFORM: {BSE, FIN, PRP, PSP}
BSE refers to the infinitival (or base) form, FIN to finite forms, PRP to the present participle and PSP to the past participle. A little more will be said about VFORM later, and some extra values will be introduced.
We shall finish this section by saying a bit more about notation. The ‘equals’ notation introduced earlier is quite commonly employed within linguistics, but GPSG generally does not use it. The following formats are used instead:
(8) a. (not reproduced)
b. [CASE ACC]
c. [ACC]
d. [+PLU]
(8a, b) are really alternative forms of the same thing; (8c) is permissible only where only one attribute can have a particular value; (8d) is confined to Boolean features. A category is a set of attribute-value pairs; a plural NP might be stated as in (9).
(9) {[+N], [−V], [BAR 2], [+PLU]}
Usually, though, convenient abbreviations will be used, e.g. (10).
(10) N2[+PLU]
It is essential to be familiar with different ways of writing the same thing. It may seem annoying that this is done, but it would be even more annoying if one had to write (9) all the time instead of (10). And if (9) was never used, it would be impossible to appreciate how feature-based principles of GPSG (to be studied later) could work.
1. We can add here that not all feature-based approaches use a system like (6); Japanese Phrase Structure Grammar (JPSG) uses an attribute POS ‘part of speech’, with values that include {V, N, P}; a verb is then represented as [POS=V]; see Gunji (1987: pp. 8–9).
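It is easy to see how such categories might be modelled in a program. In the Python sketch below (our own illustration; the helper function is invented for the example), a category is a mapping from attributes to values, and a partial description such as [BAR 0] picks out every category it is contained in:

    # A category as a set of attribute-value pairs, modelled as a dict.
    NP_PLURAL = {"N": "+", "V": "-", "BAR": 2, "PLU": "+"}  # N2[+PLU], cf. (9)
    V0 = {"N": "-", "V": "+", "BAR": 0}

    def matches(category, description):
        # A (possibly partial) description matches a category iff every
        # attribute-value pair in the description occurs in the category.
        return all(category.get(a) == v for a, v in description.items())

    print(matches(V0, {"BAR": 0}))                   # True: [BAR 0] covers V0
    print(matches(NP_PLURAL, {"BAR": 0}))            # False
    print(matches(NP_PLURAL, {"N": "+", "BAR": 2}))  # True: an N2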
The final attribute we shall introduce in this section is SPEC (short for SPECifier, cf. 3.3). This is intended to cover various minor categories, such as articles: these are specifiers of nouns, so we shall describe them as [SPEC N], abbreviated as SpecN:
(11) SPEC: {N,…}
This feature is not found in Gazdar et al. (1985), which in fact contains no rules to generate articles! They actually propose a slightly different analysis of determiners from this. Our LP-statement that zero-level categories precede their sisters would now be formulated as (12).
(12) [BAR 0] < Y
We shall generalize this in 4.4, when we shall also make use of a useful convention, as follows. If some attribute name with no value appears in a grammatical statement (e.g. an LP-statement), the meaning is any category containing a value for that attribute. So [VFORM] means ‘any category with a feature VFORM’. Prefaced by a tilde (‘~’), the meaning is any category containing no such feature: ~[VFORM] means ‘any category with no feature [VFORM]’.
It may be helpful to divide features into two kinds. Morphosyntactic features are expressed in specific lexical items or forms of lexical items; VFORM would be a good example, as would PLU. Construction features label a particular syntactic construction; we have not met any clear examples yet, but one would be the Boolean feature INV, which appears on sentences that involve subject-verb inversion, and will be studied in more detail later. So in (13), the S-node would be labelled [+INV].
(13) What have you done?
Features such as BAR belong to neither type. This classification of features (which is taken from Zwicky 1987a) is purely for the sake of clarity, and to help give an idea of the different ways in which features are used. The two classes of feature are treated identically by GPSG theory.
4.2 The status of S
We can now revisit the problem of the analysis of the category S, which we have so far left entirely outside our versions of X-bar theory and of feature-based syntax. We can also consider Sbar, the constituent consisting of a complementizer and an S. Linguists have in fact taken a variety of approaches to how S should be regarded.
In GB, at least until very recently, S has been viewed as the maximal projection of INFL (short for INFLection), a constituent that marks a clause as finite or non-finite and also carries agreement with the subject.2 In English, INFL can be realized as infinitival to, or as the empty do seen in (14).
(14) Do you take sugar?
Most commonly, however, INFL is realized as part of the verb’s inflectional morphology rather than as a separate element (INFL and the verb are then somehow merged by a transformation). No such analysis is available in GPSG, which does not permit this kind of abstractness.
Jackendoff (1977) argued that S was the maximal projection of V, and hence was V3 (with VP as V2 and VG as V1); cf. Emonds (1985), who sees V as the only category allowing for projection to a [BAR 3] level, so S is V3 and VP is V2. The idea that V is the head of S is common to most unification-based frameworks, because so much of the information on the verb is also regarded as being information about the entire clause (e.g. finiteness and frame information). This will perhaps only become clearer in the course of the following chapter, where the idea of ‘head’ is examined in more detail. GPSG also adopts this analysis, and we can point out that it is not quite so different from the GB analysis as is usually assumed. Both regard the finiteness-bearing element as the head of a clause, and both would take to as the head of to leave in (15) (though they would disagree about its categorial status). We shall look at how GPSG would analyze such examples later.
(15) I wish to leave.
If S is a projection of V, the question arises of its level and its relation to VP (which we have been calling V2). In answering this question, we can observe the behaviour of infinitival embedded Ss and VPs in (16).3
(16) a. [To leave now] would be best. b. [For John to leave now] would be best.
The point is that these Ss and VPs behave very similarly to each other, as (16) shows: both can occur as subjects, for instance. Also, both can appear in relative clauses (cf. (17)); both can appear as complements to certain verbs (cf. (18)); and they can be coordinated with each other (cf. (19)).
(17) a. I’ve bought a book [to read]. b. I’ve bought a book [for you to read].
(18) a. I’d prefer [to stay]. b. I’d prefer [for you to stay].
(19) [To write a novel] and [for the world to give it critical acclaim] is John’s dream.
2. We say ‘until very recently’ because Pollock (1989) proposed that INFL be broken into distinct parts, e.g. tense and agreement would now head maximal projections of their own. This proposal of a ‘split-INFL’ has been very influential.
3. We assume that the bracketed constituents in (16) are, respectively, infinitival VP and infinitival S.
Rather than write lots of rules that just happen to say that VP and S can occur in the same environments, it would naturally be better to have the similarity of distribution follow from some similarity in the description of these items. GB, for instance, would regard all these examples of infinitival VPs as instances of S, with an empty subject NP, known as PRO (an NP node is said to be empty when it does not dominate any lexical items). The similarities seen in (16)–(19) would then be quite natural, since the category of all the bracketed elements is S. Thus (18a) would be structured as (20).
(20) I’d prefer S[PRO to stay]S.
For instance, Koster & May (1982) argue precisely that the VP analysis of subjectless infinitivals is inadequate because it requires extension of PS-rules to introduce VP and S in the same place, whereas the analysis shown in (20) does not.4 This analysis is not available in GPSG, which does not make use of empty elements as freely as GB does. So some other way must be found of capturing the similarities. We have seen already that the way in which GPSG captures similarities between different categories is by breaking these categories down into features and claiming that the phenomena where the behavioural similarities hold can be stated in terms of features (cf. our LP-statement about [BAR 0] items). So we need to say that VP and S share certain features (features, moreover, that are not shared by other categories such as NP or PP). No doubt there are many ways of achieving this, so we shall just say that GPSG regards both S and VP as being V2, i.e. both are {[+V], [−N], [BAR 2]}. Any cases where both can occur can be stated in terms of this set of features (see the following section for an example). But of course S and VP are not identical in distribution; obviously, S contains a subject while VP does not:
(21) a. Hazel left. b. * Left. c. I know that Hazel left. d. * I know that left.
To account for the differences between VP and S, we can add a distinguishing feature. It seems natural to base this on the presence vs. absence of a subject, and to propose a Boolean feature SUBJ(ect):
(22) SUBJ: {+, −}
In full, then, VP would be (23a), and S (23b).
(23) a. {[+V], [−N], [BAR 2], [−SUBJ]}
b. {[+V], [−N], [BAR 2], [+SUBJ]}
Our ID-rule (24a) now needs restating as (24b).
(24) a. S → N2, V2
b. V2[+SUBJ] → N2, V2[−SUBJ]
4. In fact, they discuss Sbar and VP-bar analyses, but this detail is not crucial.
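The effect of the SUBJ analysis can be seen in a small Python continuation of the earlier sketch (again our own illustration; the encodings are invented): S and VP differ only in one feature, so any statement couched in terms of bare V2 covers both:

    V2 = {"V": "+", "N": "-", "BAR": 2}

    S  = dict(V2, SUBJ="+")   # (23b)
    VP = dict(V2, SUBJ="-")   # (23a)

    # Rule (24b), V2[+SUBJ] -> N2, V2[-SUBJ], as mother plus daughters:
    RULE_24B = (dict(V2, SUBJ="+"),
                [{"N": "+", "V": "-", "BAR": 2}, dict(V2, SUBJ="-")])

    # Any statement about bare V2 (no SUBJ value mentioned) automatically
    # covers both S and VP: this is how their shared distribution in
    # (16)-(19) is captured.
    print(all(S[a] == v for a, v in V2.items()))   # True
    print(all(VP[a] == v for a, v in V2.items()))  # True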
This analysis of VP and S as both being V2 is due to Borsley (1983), who demonstrated similarities of behaviour between infinitival Ss and infinitival VPs in Welsh. A later paper (Borsley 1984) explores the implications for the analysis of English (though this is written in a slightly different variant of GPSG). At the same time, it must be acknowledged that many languages (including French and German) do not allow subjects of infinitives, so that the data seen in (16)–(19) would not be available. So this analysis of VP and S might then not be justifiable.
What, now, about Sbar? It seems reasonable to claim that if V is the head of S because information about finiteness applies to the S as much as to the verb, then it also applies to the Sbar as much as to the S. So there is good reason (which will be reinforced later) for taking Sbar also as a projection of V. Jackendoff (1977: p. 47) took Sbar as equivalent to V3, apparently not distinguishing it from S. However, S and Sbar undoubtedly have distinct distributions, since (among other things) only an S can follow a complementizer:
(25) a. I believe that they will lose. b. * I believe that that they will lose.
But given the power of features, it is easy to distinguish between S and Sbar, while giving them very similar descriptions. We shall use a feature COMP to distinguish them, taking Sbar as containing a complementizer and S as not. For the time being, we shall regard COMP as a Boolean feature, and also use an atomic category Comp (which we will alter shortly). So S is S[−COMP] and Sbar is S[+COMP]; in full, they are (26a, b) respectively:
(26) a. {[+V], [−N], [BAR 2], [+SUBJ], [−COMP]}
b. {[+V], [−N], [BAR 2], [+SUBJ], [+COMP]}
An ID-rule to rewrite Sbar would be (27a), or the equivalent (27b).
(27) a. S[+COMP] → Comp, S[−COMP]
b. V2[+SUBJ, +COMP] → Comp, V2[+SUBJ, −COMP]
Perhaps it will be useful if we close this section by displaying the structure we would now assign to a simple example, giving both (fairly) full and (partially) abbreviated versions in (28).
(28) (trees not reproduced)
4.3 Subcategorization
Now that we have a fully fledged feature-based syntax, we can return to the treatment of subcategorization (cf. 2.2). As will emerge, this ties in rather neatly with the use of features.
We have been handling subcategorization so far by the method used in transformational grammars in the 1960s and 1970s, viz. by lexical entries such as those in (29) (concentrating here on verbs).
(29) a. eat: —NP
b. put: —NP PP
c. say: —Sbar
Alongside this we also need ID-rules or PS-rules (we will stick to PS-rules here, since the theories being criticized used them, but it doesn’t in fact matter), for example (30).
(30) VP → V (NP) (PP) (Sbar)
The problem with this approach may already be apparent: there is a high degree of redundancy between (29) and (30). For instance, both state that some verbs may have an NP and a PP as sister, and that the NP precedes the PP. Clearly it would be nice if this redundancy could be eliminated. Two distinct solutions to this problem have been proposed.
GB supposedly eliminates PS-rules by proposing a Projection Principle, which (simplifying greatly) states that the subcategorization properties of lexical items are observed in syntactic representations. (29b) states that put has NP and PP as sisters, so a tree with put is well-formed only if this requirement is adhered to. GPSG, in contrast, maintains rewrite rules but does away with the detailed information in lexical entries. Putting it broadly to start with, in GPSG a subcategorization attribute is used (known as SUBCAT), and a unique value is assigned to every possible frame in which a zero-level category can occur. For instance, suppose our lexical entry for eat simply said that it was a transitive verb, i.e. [SUBCAT TRANS]. Then the fact that transitive verbs (and only they) occur with an NP sister can be stated by an ID-rule such as (31).
(31) V1 → V0[SUBCAT TRANS], NP
Together with our LP-statement (12), this generates the local tree (32).
(32) (local tree not reproduced: V1 dominating V0[TRANS] and NP)
A category can dominate a lexical item if and only if the category is consistent with that item’s lexical entry. So only a verb shown as TRANS, such as eat, could occur under V0[TRANS]; a verb like die (shown as V0[INTRANS]) could not. The problem with this precise way of formalizing subcategorization is that one soon runs out of traditional terms like ‘transitive’. So what GPSG actually does is to use integers as the values of SUBCAT, and to include these in both lexical entries and ID-rules. In order to try and limit confusion, we shall keep as far as possible to the integers used for frames in Gazdar et al. (1985). In the case of ‘transitive’, this means (33).
(33) a. V1 → V0[2], NP
b. eat: V0[2]
Since SUBCAT is (besides BAR and PER) the only attribute with integers as values, V0[2] is an unambiguous abbreviation for V0[SUBCAT 2] (or, if you insist, for {[+V], [−N], [BAR 0], [SUBCAT 2]}). It must of course be acknowledged that this system is hardly mnemonic. One possible alternative would be to use stereotypical verbs to stand for each SUBCAT class (e.g. [SUBCAT DIE] for intransitive verbs), or abbreviated frames (e.g. [SUBCAT NP-PP] for put). We give here a few more straightforward examples:
(34) a. V1 → V0[1]
b. V1 → V0[5], NP, NP
c. V1 → V0[6], NP, PP
(35) a. die: V0[1]
b. spare: V0[5]
c. put: V0[6]
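The pairing of lexical entries with ID-rules via SUBCAT values is mechanical, as the following Python sketch shows (our own illustration; the integers follow the text, while the function name is invented). A word may hang under a V0 node only if its lexical entry carries the SUBCAT value of the rule that introduced that node:

    # Lexicon: each word paired with its SUBCAT value, cf. (35).
    LEXICON = {"die": 1, "eat": 2, "spare": 5, "put": 6}

    # ID-rules introducing V0[n], cf. (33a) and (34): SUBCAT value -> sisters.
    ID_RULES = {
        1: [],              # V1 -> V0[1]
        2: ["N2"],          # V1 -> V0[2], N2
        5: ["N2", "N2"],    # V1 -> V0[5], N2, N2
        6: ["N2", "P2"],    # V1 -> V0[6], N2, P2
    }

    def licensed(word, sisters):
        # A word fits a local tree iff the rule for its SUBCAT value
        # supplies exactly these sisters (order is left to LP-statements).
        n = LEXICON[word]
        return sorted(ID_RULES[n]) == sorted(sisters)

    print(licensed("eat", ["N2"]))        # True: transitive frame
    print(licensed("die", ["N2"]))        # False: die is V0[1], intransitive
    print(licensed("put", ["N2", "P2"]))  # True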
One shortcoming of the GPSG approach may already have been noted, viz. that it implies a large number of ID-rules. Some of the redundancy in them is removed by the use of separate LP-statements, and more will be removed by some feature principles we shall look at later. But the essence of the objection remains.
4.4 Examples of subcategorization
In this section we are going to look at some more examples of subcategorization frames; one consequence will be to increase the set of features at our disposal. Say, of course, occurs with an embedded finite clause, which we can state as in (36).
(36) V1 → V0[9], V2[VFORM FIN, +SUBJ]
If we say nothing about the value of COMP here, both S and Sbar will be possible (though it is not quite possible yet to appreciate how this is achieved). It should be noted here, and throughout this section, that we are doing only half the job, i.e. we are ensuring that the sister of say is a finite clause, but we have no mechanism as yet for ensuring that a finite clause contains a finite verb; this missing link will be supplied in the next chapter.
Let us turn to another example, rely. This occurs not just before any PP, but specifically a PP with the preposition on, and this fact has to be stated. To do this, we can make use of an attribute PFORM, which specifies the kind of preposition, the values of which can be names of actual prepositions; P2[PFORM ON] thus means a P2 with on. This would imply the ID-rule (37).
(37) V1 → V0[49], P2[ON]
This can be easily extended to other cases, e.g. approve of, agree with, laugh at; a different SUBCAT value is needed for each value of PFORM required by a verb. Put requires not some specific preposition in its complement PP, but any preposition referring to a location. This can be handled by means of a Boolean feature LOC:
(38) V1 → V0[6], N2, P2[+LOC]
Prepositions such as on will be shown in the lexicon as [+LOC], those like during will be [−LOC]. Subcategorization can also be applied to nouns, e.g. enthusiasm can occur with a for-phrase:
(39) N1 → N0[50], P2[FOR]
Consider now the possibilities with plan:
(40) a. the plan to return home b. the plan for John to return home
This word can occur with an infinitival VP or S; as we saw in 4.2, we can state these two possibilities at one go:
(41) N1 → N0[34], V2[VFORM INF]
Consider now the possibilities with become:
(42) a. He has become very rich. b. He has become a millionaire. c. * He has become given a present. d. * He has become on the run.
It occurs only with NP and AP. As we saw in 4.1, nouns and adjectives are both [+N], and we can take advantage of this to avoid writing separate rules for (42a, b) as in (43).5
(43) V1 → V0[n], [+N, BAR 2]
Become will occur with any phrase having the features {[+N], [BAR 2]}. This supports the decomposition of categories like noun and adjective, though it is still not possible to appreciate just how (43) is responsible for (42a, b). To take an example from German, the fact that helfen ‘help’ takes an NP in the dative would imply a rule such as (44).
(44) V1 → V0[n], N2[DAT]
We shall leave further examples of subcategorization till the next chapter. A few final points can be made now. First, we have reviewed here a formalism for expressing subcategorization facts, one that does not presuppose any particular approach to the distinction between complements and adjuncts. For instance, Gazdar et al. (1985: p. 247) regard the for-phrase with buy as in (45) as a complement, whereas many linguists would regard it as an adjunct.
(45) John bought flowers for his wife.
Disagreeing with the analysis of particular examples such as this one does not imply rejecting the essence of the GPSG approach. Secondly, the statement of subcategorization facts in terms of ID-rules automatically captures the local nature of subcategorization (see 2.2), without this needing to be stipulated. Thirdly, we can reword our LP-statement about the order of lexical categories and complements as (46).
(46) [SUBCAT] < ~[SUBCAT]
This means that any category with a feature SUBCAT (i.e. a lexical category) must precede any category with no such feature (i.e. a phrase or group, primarily).
5. Here, as elsewhere, we simply use n as the value for SUBCAT when the precise integer used is irrelevant. The rule would in practice have a particular fixed value for SUBCAT, but there is no point in listing a specific value in these examples.
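Statement (46) is straightforward to operationalize in the style of the earlier sketches (our own illustration; the function name is invented): a category counts as lexical just in case it carries a SUBCAT attribute, whatever its value:

    def lp_46(left, right):
        # LP-statement (46): a [SUBCAT] category may not follow a
        # ~[SUBCAT] sister; returns True if the pair is legally ordered.
        return not ("SUBCAT" in right and "SUBCAT" not in left)

    v0 = {"N": "-", "V": "+", "BAR": 0, "SUBCAT": 2}  # a lexical category
    n2 = {"N": "+", "V": "-", "BAR": 2}               # a phrase: no SUBCAT

    print(lp_46(v0, n2))  # True: lexical head first
    print(lp_46(n2, v0))  # False: lexical head may not follow its sister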
This is our final version: as will be seen later, it has slightly wider empirical coverage than our previous version (12) above, to which (46) is largely equivalent (as only lexical categories have a SUBCAT feature). Incidentally, the formulation in (46) may well be relevant to universal ordering principles (cf. 3.3). On the basis of a wide-ranging typological survey, Dryer (1992) argues that the valid ordering generalizations are those stated in terms of the relative order of lexical and phrasal categories, rather than in such terms as head-initial or head-final. (46) precisely says that lexical categories in English precede their non-lexical sisters.
4.5 Some other approaches to subcategorization
We made the point in 4.3 that the GPSG approach to subcategorization involves the writing of large numbers of ID-rules, i.e. the facts are stated in the grammar. This involves a fair amount of redundancy, and also runs counter to the general trend within linguistics known as lexicalism, which refers to the statement of linguistic phenomena within the lexicon rather than the grammar. We shall have reason to return to this issue in due course, and will see that GPSG in fact takes a rather conservative attitude in this respect (see 13.3). The point we want to make here is that some other theories have attempted to retain some of the benefits of the GPSG approach, but to do away with the large set of ID-rules. We shall just sketch here the analyses adopted within Head-driven Phrase Structure Grammar (HPSG) (see Pollard & Sag 1987; Sag & Pollard 1989; Borsley 1991: pp. 68–71) and JPSG (see Gunji 1987: pp. 11–15), without seeking to differentiate these. A verb (say) has a lexical entry that contains a SUBCAT feature, but the value of this is a list of the complements of the word. Very general rules generate (say) VPs (which also have a SUBCAT feature), and the matching of a verb to the correct complements is done by general principles, which we might roughly state as follows:
Any category on the SUBCAT list of a head but not on the SUBCAT list of its mother must be matched by a sister of the head.
This would allow a local tree such as (47) (e.g. for a transitive verb like resemble):
(47) (local tree not reproduced: VP[SUBCAT ⟨NP⟩] dominating V[SUBCAT ⟨NP, NP⟩] and NP)
Page 49 The difference between the SUBCAT list (enclosed within angled brackets) of mother and head verb here is NP, which matches the sister of the head. The tree in (48), where a transitive verb occurs with no object, is ill-formed according to the general principle stated above. (48) The SUBCAT list of the VP in (48) contains an NP, for VPs are viewed as subcategorizing for subject NPs. We might also have the local tree (49).
(49) [local tree: S[SUBCAT ⟨⟩] dominating NP and VP[SUBCAT ⟨NP⟩]]
In (49) the SUBCAT list of the S is empty, for it does not combine with anything else. This approach is of course rather different from that of GPSG; for one thing, it implies having a SUBCAT value on phrasal nodes. More radically, it introduces the notion of attributes with values that are not just atoms like ACC but are lists or sets (cf. that for the transitive verb in (47)). We have sought not to give a proper account of these proposals here, just to convey the flavour of a rather different formalization. They are also very similar to the approach taken within categorial grammar, which might describe an intransitive verb as S/NP (it combines with an NP to form an S).
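The following sketch (ours, in Python, and much simplified relative to the cited works) shows the general principle at work: whatever is on the head’s SUBCAT list but not on the mother’s must be found among the head’s sisters.

def subcat_principle_ok(mother_subcat, head_subcat, sisters):
    """Any category on the SUBCAT list of the head but not on the
    SUBCAT list of its mother must be matched by a sister of the
    head.  Lists are of category names; their ordering is ignored
    here for simplicity."""
    remaining = list(head_subcat)
    for cat in mother_subcat:      # demands the mother passes further up
        if cat in remaining:
            remaining.remove(cat)
    return sorted(remaining) == sorted(sisters)

# (47): VP[SUBCAT <NP>] over V[SUBCAT <NP, NP>] and an NP object
print(subcat_principle_ok(['NP'], ['NP', 'NP'], ['NP']))  # True
# (48): a transitive verb with no object sister is ill-formed
print(subcat_principle_ok(['NP'], ['NP', 'NP'], []))      # False
# (49): S[SUBCAT <>] over the subject NP and VP[SUBCAT <NP>]
print(subcat_principle_ok([], ['NP'], ['NP']))            # True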
Lexical-Functional Grammar (LFG) takes a slightly different approach (see Kaplan & Bresnan 1982, especially pp. 210–12). Rules and principles in LFG can refer directly to grammatical functions such as subject and object, and subcategorization statements are formulated in these terms. So one might say that resemble has the subcategorization frame (50) (LFG calls it a ‘lexical form’, and uses a slightly different notation).
(50) SUBJECT, OBJECT
General principles ensure that a string is well-formed provided all and only the functions mentioned in a predicate’s lexical form occur with it in the string, so there is no direct linking of a lexical form to a specific syntactic rule, and the rules can be fewer and more general.
Johnson (1988: pp. 107ff.) discusses various approaches to subcategorization. His general characterization of the technique employed in GPSG is as follows:
One can annotate both the lexical entries and the grammar rules with ‘diacritic features’ (encoded as the value of a designated attribute in the attribute-value structure) to indicate which syntactic rules can introduce particular lexical items, thus only allowing a transitive verb to appear in the V slot of the rule that expands a VP to a V followed by an NP. (p. 107)
It should by now be clear how this applies to GPSG, and that SUBCAT is the diacritic feature referred to.
4.6 The structure of AP
At this point it seems worthwhile to step back a little from introducing new theoretical devices and to resume some descriptive work on English structure. So far, we have said very little about the internal structure of adjectival phrases, positing simply the ID-rule (51).
(51) A2 → A0, (P2)
In order to improve on this, we can consider some further examples of APs:
(52) a. similar to his brother
     b. fond of his mother
     c. very happy
     d. so tall
     e. so very fond of himself
     f. too tall in some respects
It will be convenient to divide our discussion into items preceding and following the adjective. First, we can note that adjectives have subcategorized complements, e.g. the to-phrase with similar (52a) and the of-phrase with fond (52b). The choice of preposition here depends on the specific adjective, so we must be dealing with complements. But in some respects in (52f) is not in any way selected by the adjective, and so appears to be an adjunct: it can be added in almost any AP (though there are semantic restrictions on what can occur in a P2 here). So we need to have P2 generated as a sister of the adjective (for complements), and also at a higher level (for adjuncts). The complement P2 usually precedes the adjunct:6
(53) a. similar to his brother in certain ways
     b. ? similar in certain ways to his brother
Secondly, the adjective can be preceded by an intensifying adverb such as very (52c) or by a degree word such as so (52d); both can occur, in which case the degree word comes first; cf. (52e) and (54).7
(54) * very so fond of himself
6. Though it must be admitted that (53b) is not so bad.
7. There is quite a lot of different terminology in this area; many linguists would regard very as a degree modifier. Here we will stick to more traditional labels.
How can we decide on the precise constituent structure here? Consider (55a). So here is a pro-form for the phrase in (55b), so (on our standard assumption that the antecedent of a pro-form must be a constituent) we would conclude that (55b) must form a constituent in (55a).
(55) a. John is very similar to his father, but Bill is less so.
     b. similar to his father
Further, in (56)–(57), so seems to be a pro-form for both of the different (b) phrases in the respective (a) sentences.
(56) a. John is very similar to his father in some ways, but Bill is less so.
     b. similar to his father in some ways
(57) a. John is very similar to his father in some ways, but is less so in others.
     b. similar to his father
So we have evidence for a constituent that is smaller than AP, and one that can occur more than once within the same phrase (cf. (55)–(57)). The obvious candidate is A1 ({[+N], [+V], [BAR 1]}), so a typical structure would be (58). Note that we are regarding degree words like so as specifiers of adjectives (cf. 2.4 on specifiers of nouns).
(58) [tree: A2 dominating SpecA (a degree word) and A1, with recursive A1 levels for an adverb and a modifier P2, and the lowest A1 dominating A0 and its complement P2]
The kinds of rules envisaged now would be as in (59).
(59) a. A2 → (SpecA), A1
     b. A1 → Adv, A1
     c. A1 → P2, A1
     d. A1 → A0[SUBCAT n], P2[TO]
     e. A1 → A0[SUBCAT n], P2[OF]
     f. SPEC ≺ [BAR 1]
     g. Adv ≺ A1 ≺ P2
4.7 Further reading
On syntactic features, see Radford (1988: pp. 145–56), Borsley (1991: Ch. 4) and Horrocks (1987: pp. 169–73); this section of Horrocks’ book also deals with subcategorization, which is likewise discussed in Sells (1985: pp. 87–8) and Gazdar et al. (1985: pp. 31–5). See also Gazdar and Pullum (1981: pp. 115–19), which uses a slightly different notation for ID-rules. On the structure of AP, see Radford (1988: pp. 241–46).
5 The Head Feature Convention
So far we have a grammar that consists of ID-rules and LP-statements, and that is able to make use of syntactic features in stating linguistic rules. In this chapter we introduce an important general principle governing the distribution of features. This idea of constraining the distribution and passing-around of features by means of general principles plays an important role within GPSG, and it has even been said that
GPSG is in fact a theory of how syntactic information ‘flows’ within a structure. (Sells 1985: p. 80)
This is an exaggeration, but an insightful one.
5.1 Introducing the Head Feature Convention
Let us begin by observing that, if complex linguistic rules are just restated using features, there has not necessarily been any gain in generality. For instance, if we wish to state in an ID-rule the structure of a finite S, specifying that it contains a finite VP, then, using atomic categories such as Sfin (for ‘finite S’) we could write (1).
(1) Sfin → NP, VPfin
The corresponding rule for infinitival S (Sinf) would be (2).
(2) Sinf → NP, VPinf
It is clear that there is considerable redundancy here. But re-casting these rules by the use of features does not provide much of an advantage:
(3) a. S[FIN] → NP, VP[FIN]
    b. S[INF] → NP, VP[INF]
There are two problems here. First, the rules state the facts about the structure of S twice, viz. that it consists of NP and VP. Secondly, it is purely coincidental that the value of VFORM is the same on both S and VP in (3); a rule such as (4), with mismatched values, is linguistically bizarre but is no more complex than (3).
(4) S[FIN] → NP, VP[INF]
Before exploring a solution, we can consider another comparable problem. In 4.4 we introduced the feature PFORM, e.g. rely is subcategorized for a PP with [PFORM ON]. To ensure that this value is also found on the preposition we would need the rule (5).
(5) PP[ON] → P[ON], NP
And for every value of PFORM we should need a different rule, each one of which would say the same thing (except for the particular preposition). It would clearly be good if we could do away with such specific rules as (3) and (5) and just use rules such as (6).
(6) a. S → NP, VP
    b. PP → P, NP
But if we do this, we shall need some sort of mechanism to ensure that the features match in an actual tree, i.e. that we have (7a) and not (7b).
(7) a. [local tree: PP[PFORM ON] dominating P[PFORM ON] and NP]
    b. [the same local tree with mismatched PFORM values on PP and P]
So we shall introduce ‘some sort of mechanism’ to enforce matching of features (i.e. identity of attributes and values) in a local tree such as (7a). A possible answer would be to make use of variables in rules to ensure that a feature has identical values on distinct nodes:1
(8) S[VFORM α] → NP, VP[VFORM α]
1. It is common to use Greek letters for variables.
Rule (8) is an improvement over (3a, b) since the structure of S is described only once, and the use of the same variable on left- and right-hand side ensures that the value of VFORM (whatever it is) is identical on the S and VP nodes in any local tree generated by (8).
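As a rough illustration of what the shared variable buys us, here is a small Python sketch (our own encoding, not part of GPSG) in which a local tree fits rule (8) only if S and VP carry the same VFORM value, whatever that value is.

def matches_rule_8(s, vp):
    """Rule (8): S[VFORM a] -> NP, VP[VFORM a].  The same variable on
    both sides forces the two VFORM values to be identical."""
    return s.get('VFORM') is not None and s.get('VFORM') == vp.get('VFORM')

print(matches_rule_8({'VFORM': 'FIN'}, {'VFORM': 'FIN'}))  # True
print(matches_rule_8({'VFORM': 'INF'}, {'VFORM': 'INF'}))  # True
print(matches_rule_8({'VFORM': 'FIN'}, {'VFORM': 'INF'}))  # False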
Such explicit linking of features is employed in Definite Clause Grammars (Pereira & Warren 1980), where upper-case letters indicate variables in rules:
(9) sent(s(NP, Tense, VP)) → np(NP), vp(VP, Tense).
This rule states that the sentence and verb phrase share the same value for Tense. However, the approach seen in (8) and (9) is still less than optimal, for it implies explicit reference to attributes in rules, and such attributes may be very numerous.
In searching for a better answer, we can ask between which nodes this matching has to take place. In each of our examples, a feature on the mother node has to match with that on one of the daughters, but it is not just any daughter. An S matches with a VP, and a PP with P. In other words, there is a matching of features between the mother node and its head daughter. This would appear to be a very general phenomenon, one that turns up in plenty of other circumstances: e.g. number of noun phrase and head noun must match, as must finiteness of VP and head verb. So a good candidate for a general principle of grammar, applying to all languages and not just English, would be:
The Head Feature Convention (HFC) (preliminary version)
A node and its head must share the same features
This will be progressively refined as we proceed. For the moment, we can say that it requires the matching PFORM value on mother and head in (7a), but excludes (7b) as ill-formed.
An immediate question is whether the HFC applies to all features. In fact it does not; it applies only to a subset of features, known as head features, and so needs to be reworded as:
The Head Feature Convention (HFC) (revised version)
A node and its head must share the same head features
Most of the features we have encountered so far are head features. To illustrate a feature that is not a head feature, we must look at some more data. Take expressions such as a book of John’s, where the post-nominal PP must be possessive (cf. *a book of John). A Boolean feature POSS is used for possessives, and a rule such as (10) might be written.
(10) N1 → N1, PP[+POSS]
But [+POSS] is realized on the NP, not on the preposition that is head of the possessive PP, so it is reasonable to say that POSS is not a head feature. Let us now write tree (7a) in a slightly fuller way (11).
(11) [tree: {[−N], [−V], [BAR 2], [PFORM ON]} dominating {[−N], [−V], [BAR 0], [PFORM ON]} and N2]
The values of BAR on mother and head plainly differ here, so we might be led to conclude that BAR is not a head feature. But, for reasons that will emerge in 5.2, we do want BAR to be a head feature and so subject to the HFC. How, then, can we reconcile the well-formedness of (11) with the HFC as stated? Suppose we state more fully the relevant ID-rule:
(12) P[BAR 2] → P[BAR 0], NP
The difference is that BAR is explicitly mentioned in (12), whereas PFORM is not. The HFC, then, does not apply to all head features; it applies only to those not explicitly mentioned in a rule. In fact, we can generalize this slightly and reformulate the HFC as follows:
The Head Feature Convention (HFC) (further revised version)
A node and its head must share the same head features, where this is possible
The addition, ‘where this is possible’, means ‘where this is not excluded by some explicit statement of the grammar’ (e.g. an ID-rule); the meaning will become clearer later, but our discussion of BAR and PFORM should give an idea of what is going on here. (11) is now compatible with our new version of the HFC: the PP node and its head do not share the same value for BAR, because they cannot—rule (12) stipulates that these values must differ.
The term sometimes used to refer to the matching of features between a node and its mother is percolation. It might be said that the VFORM value on VP percolates to the dominating S. But the HFC is not itself directional; it requires features to match, it does not state that a feature in one place is copied to another (i.e. it is neutral between a top-down and a bottom-up interpretation).
This is an appropriate place to point out that a task left half-finished in 4.4 has now been completed. We said there that specifying that say takes a finite clause left open the means for ensuring that a finite clause contains a finite verb. But we can now see that the HFC solves this problem: [VFORM FIN] will percolate between the S and V nodes.
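A minimal sketch of the HFC as just stated, again in Python with our own dictionary encoding: head features not explicitly fixed by the ID-rule must match between mother and head daughter.

HEAD_FEATURES = {'N', 'V', 'BAR', 'VFORM', 'PFORM'}  # a partial list

def hfc_ok(mother, head, rule_mother, rule_head):
    """Mother and head daughter share every head feature, except
    where the ID-rule itself mentions the feature (as rule (12)
    mentions BAR), in which case the HFC steps aside."""
    for f in HEAD_FEATURES:
        if f in rule_mother or f in rule_head:
            continue                      # explicitly stated in the rule
        if mother.get(f) != head.get(f):
            return False
    return True

# Tree (11)/(7a): rule (12) fixes BAR on both nodes, PFORM must match.
rule_m, rule_h = {'BAR': 2}, {'BAR': 0}
pp = {'N': '-', 'V': '-', 'BAR': 2, 'PFORM': 'ON'}
p = {'N': '-', 'V': '-', 'BAR': 0, 'PFORM': 'ON'}
print(hfc_ok(pp, p, rule_m, rule_h))                     # True
print(hfc_ok(pp, dict(p, PFORM='TO'), rule_m, rule_h))   # False, cf. (7b)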
5.2 Extending the applicability of the HFC
If our ID-rule for S is simplified to (6a), it will combine with the HFC to sanction trees such as (13).
(13) a. [tree: S[VFORM FIN] dominating NP and VP[VFORM FIN]]
     b. [tree: S[VFORM INF] dominating NP and VP[VFORM INF]]
But suppose we now rewrite both rule and trees giving the details of all the features, as in (14), where (14b) corresponds to (13a), and (14c) to (13b).
(14) a. {[+V], [−N], [BAR 2], [+SUBJ], [−COMP]} → {[−V], [+N], [BAR 2]}, {[+V], [−N], [−SUBJ]}
     b. [the tree (13a) with all features displayed; it is repeated in the new notation as (21) below]
     c. [the corresponding fully featured tree for (13b)]
In (14b, c) the [BAR 2] and [VFORM] occur on the VP by virtue of the HFC. The value for SUBJ (which, though this is not obvious at present, is a head feature) does not match because the provisions of (14a) override the HFC. VP has no value for COMP; in fact, this is not a head feature. So much of the information found in the trees need not be specified in the rule, for the HFC can do a lot of work in ensuring feature matching.
But, as may have been noticed, the features V and N are specified on both the mother S and the daughter VP in (14a). It would at least be in the spirit of things to let the HFC determine matching between these as well. On the other hand, it could be said that it is precisely the matching of N and V between mother and head in (14a) that identifies the VP as the head. So if we remove them from the specification of the VP, we will have to find some other way of showing which daughter is the head. What is done in GPSG is to permit ID-rules to contain on their right-hand side a symbol H, which stands for a category that has the status of head. So our rule for S will be (15).
(15) S → N2, H[−SUBJ]
This also sanctions (14b, c): the HFC now ensures matching for all of V, N, BAR and VFORM between S and VP; SUBJ is outside the scope of the HFC here, as before, as it is specified in the rule. It is important to note that H does not appear in trees—it is not a syntactic feature at all, and therefore cannot be referred to by other parts of the grammar, such as LP-statements.2
It is simply there to identify the head for the purposes of a particular ID-rule and the operation of the HFC in corresponding local trees.
We can see a further consequence of the HFC and our use of H if we reconsider recursive rules, such as (16), which can be rewritten as (17).
(16) N1 → N1, P2
(17) N1 → H, P2
The HFC will ensure that the mother and head share all their head features, so all we need to say in the rule is which node is the head. The values for V, N and BAR (etc.) then necessarily match. One advantage of treating BAR as a head feature is that it makes recursive rules easy to state.
If we now think about the consequences for the GPSG version of X-bar theory, we can see that the claim is that mother and head will have the same value for BAR unless a special statement is made to prevent this. So in a sense the basic situation is for head and mother to have the same bar-level, not for the level to descend by one. If we wish to have a head with a lower bar-level than the mother, this has to be specified in the rule (cf. rule (12)).
With this in mind, we can revisit the treatment of verbal projections. We have so far made use of V2, V1 and V0, partly for reasons of parallelism with nominal projections and partly because we have just been assuming all combinations of type and level to be found. But it can now be suggested that V1 is not needed; rather we can use rules such as (18).
(18) a. VP → V[BAR 0, SUBCAT 2], N2
     b. VP → H, AdvP
(18b) is considerably neater than the rules we employed in Chapter 2, since all adverbials are now sisters of VP. Also, do so can be analyzed as a pro-form for VP, rather than for both VP and V1. So we would now assign trees such as (19). The values for V, N and VFORM must match on the S, the two VP nodes and the V0. It should be simple to convert our previous rules introducing V0 to the format found in (18a).
From now on, we shall mostly use a slightly different, and more convenient, notation for local trees. The mother will be written on the left, and the daughters will be indented on lines below, the top-to-bottom order indicating left-to-right order in an ordinary tree, for example (20).
(20) S
        NP
        VP
2. For the reason for not using H in LP-statements, see Shieber (1984: pp. 146–7). And for an argument in defence of the practice, see Fodor & Crain (1990: p. 630n).
(19) [tree: S dominating N2 and VP; the higher VP dominating VP and AdvP; the lower VP dominating V0 and N2]
So our earlier monstrosity (14b) can now be rewritten more clearly as (21).
(21) {[+V], [−N], [BAR 2], [+SUBJ], [−COMP], [VFORM FIN]}
        {[−V], [+N], [BAR 2]}
        {[+V], [−N], [BAR 2], [−SUBJ], [VFORM FIN]}
We can conclude this section by suggesting that the following constraint can be imposed:
Every ID-rule introduces a head.
This is the main content of X-bar theory within GPSG. Rules such as (22) (from 2.4) are now excluded.3
(22) AP → V, NP
But a rule such as (23) (with a non-maximal projection as a sister of the head) will not be excluded by any principle of GPSG.
(23) VP → H[SUBCAT n], N
This is not necessarily a bad thing, for the constraint in question is very unrestrictive. Kornai & Pullum (1990) discuss this constraint, which they term ‘Maximality’, and conclude that
It seems intuitively clear that non-maximal complements could always be replaced by maximal complements with missing daughters, so that Maximality has no consequences at all. (p. 31)
3. There is a slight problem here, in that nothing said so far prevents one from writing a rule that includes on its right-hand side an H that has extra feature specifications so that in fact no features are percolated to the head from its mother. This can be informally excluded if we require a head to share at least some features with its mother, as is implicit in this whole discussion.
In fact they see the concept of ‘head’ as the main content of X-bar theory, rather than the idea of bar-level (ibid. pp. 42ff.).4
5.3 The notion of ‘head’
GPSG is of course not the only grammatical framework that makes use of the notion of head; for instance, the idea of head or governor is a central concept in dependency theory. There has, however, been relatively little general discussion of this concept, or of criteria for settling disputes about which item in a construction is the head. So in this section we shall look at some of the ideas contained in the notion ‘head’.
The first concept that the idea of a head involves is a semantic one, comparable to the lexical relation of hyponymy. X is the head of a combination XY (where the order of X and Y is irrelevant) if XY describes the kind of thing described by X. So the noun is the head of an NP (as those penguins describes a kind of penguin), and the verb is the head of VP (as kill those penguins describes a kind of killing, not a kind of penguin). By a parallel argument, it might be said that the VP is the head of NP+VP.
Another semantically based idea is that the head is generally the functor in a function-argument structure (see 2.6). In a VP, for instance, the transitive verb is the functor, while the object NP is the argument. Equally, the VP is the functor taking the subject NP as argument.
A further concept is that the head is the item for which other items are subcategorized. This also marks out verbs and nouns as heads of their respective phrases, but has no relevance to NP+VP, since subcategorization applies only to lexical items/categories.5 In a related, if slightly separate, way, the head may determine the morphosyntactic shape of its sister(s): a German verb or preposition may require a specific case on its accompanying NP.
A head can also be said to be the morphosyntactic locus, i.e. the bearer of morphosyntactic marks of relations between a phrase and other units. So the plurality of an NP is marked on the noun, the finiteness of a VP on the verb, and of an S on the VP. And the finiteness of an Sbar is marked on the S. The head may also determine concord (agreement) features on items dependent on it. In English, a noun determines number agreement on a demonstrative, while French nouns determine gender concord on adjectives. In languages with verb-object agreement, the object determines concord on the verb, suggesting that the NP is head of V+NP.
4. Actually, rules like (23) could be excluded if desired by requiring non-head sisters to be either maximal projections or minor categories, cf. 2.4.
5. This last remark applies to standard approaches to subcategorization, but not to HPSG, where phrases can be subcategorized (see 4.5).
As this last example illustrates, different conceptions of headedness may lead to different decisions about which is the head of a particular construct. One attempt to weigh the importance of different criteria leads to the following conclusion:
Unless there is very good reason for doing otherwise, the morphosyntactic locus should be identified as the head in syntactic percolation. (Zwicky 1985: p. 10)
This seems in fact to be a reasonable characterization of the GPSG approach. The verb, for instance, is seen as the head of VP, the VP of S, and S of Sbar, and in each case the head is the morphosyntactic locus: if the Sbar is finite, this is marked progressively on S, VP and V. Morphosyntactic locus is precisely the notion of head relevant to the HFC: a phrase and its head must agree in terms of finiteness, etc., which is really just another way of saying that characteristics of a phrase that indicate its link with the rest of the sentence will tend to be marked on the head of that phrase. Even here, though, there is room for differences of opinion: Hudson (1987: pp. 121–4) argues that the determiner is the morphosyntactic locus and, indeed, head of a noun phrase (a position that has also been adopted by some GB grammarians).
A distinction is sometimes drawn between semantic head and syntactic head (or structural head). For instance, Napoli (1989: pp. 78–9) suggests that in the Italian example (24), the syntactic head is matto but the semantic head is Giorgio.
(24) quel matto di Giorgio
     THAT MADMAN OF GIORGIO
     ‘that madman Giorgio’
It is also often claimed that certain semantically very general words are less crucial to a clause, even when they are morphosyntactic loci. Again we can cite Napoli (1989: p. 9), who suggests that in (25) the predicate (in her sense) is not be but happy. It is happy that is the semantic head, and many languages would have no equivalent of be in their translation of (25).6
(25) The man who marries young is happy.
We shall meet a case later (in 10.3) where it might be useful in GPSG terms to regard the adjective as head of a copula+adjective construction. For the most part, however, the study of syntax can proceed without reference to the notion of semantic heads.
It has sometimes been suggested to extend the notion of head from syntax to morphology (e.g. Williams 1981; Selkirk 1982: pp. 19ff. and 61ff.). The idea of a head in compounds is a traditional one; it would include taking shelf as the head of bookshelf, which seems natural both in terms of our semantic hyponymy-based definition and in terms of morphosyntactic locus (since plurality is marked on shelf). The concept has been extended to derivational morphology by taking most affixes as heads: the affix may be said to determine the category of the derived word.
6. For a discussion of the semantic ‘emptiness’ of be in sentences like (25), see Lyons (1968: pp. 322–3, 388–90).
In a gender language, a nominalizing suffix may determine the gender of the derived word; e.g. in German, nouns ending in -heit are feminine, those ending in -tum are neuter. Zwicky, however, warns against extending the notion of head as morphosyntactic locus to morphology (1985: p. 22). And Bauer has considered the arguments and concluded that “heads have no place in morphology” (1990: p. 30). It might be thought that the notion of morphological head is equally abandoned by Di Sciullo & Williams (1987: p. 26), who propose that a word may have two heads, each being a head with respect to a particular feature. This idea of a relativized head in effect robs the notion of head of its content.
Most unification-based approaches make use of the notion of head and of some principle corresponding to the HFC. In some cases, this leads to an even more drastic reduction in the content of grammatical rules. For instance, Gunji (1987: p. 14) employs the very general rule (26) in JPSG for Japanese.
(26) M → C H
ID/LP format is not used, and (26) is interpreted as ‘a mother consists of a complement followed by a head’. When coupled with the means of handling subcategorization discussed earlier (cf. 4.5), this does the work done by a mass of ID-rules introducing lexical categories in GPSG.
5.4 Summary
We shall conclude by restating many of our rules in a way that takes advantage of the simplification offered by the HFC:
(27) a. S → N2, H[−SUBJ]
     b. VP → H, AdvP
     c. N2 → SpecN, H[BAR 1]
     d. N1 → H, P2
     e. N1 → A2, H
     f. A2 → SpecA, H[BAR 1]
     g. A1 → Adv, H
     h. A1 → P2, H
     i. A1 → H[BAR 0, SUBCAT 52], P2[TO]
(28) a. N2 ≺ V2
     b. N2 ≺ P2
     c. [SUBCAT] ≺ ~[SUBCAT]
     d. Spec ≺ [BAR 1]
     e. A2 ≺ N1 ≺ P2
     f. Adv ≺ A1 ≺ P2
A further way of simplifying rules that introduce lexical categories, like (27i), is still to come, so we shall not restate such rules quite yet.
5.5 Further reading
On the Head Feature Convention, see Horrocks (1987: pp. 184–5), Sells (1985: pp. 105–7) and Gazdar et al. (1985: pp. 50–55). A very technical discussion can be found in Gazdar et al. (1985: pp. 94–99). See also Gazdar & Pullum (1981: pp. 111–15): this paper shows that ID/LP, the notion of ‘head’ and the GPSG approach to subcategorization all support each other. The discussion of ‘head’ in 5.3 is based on Zwicky (1985) and Hudson (1987).
6 Feature instantiation and related topics
In this chapter we are going to look more closely at the relation between ID-rules and local trees, and to introduce some further principles that mediate this relation by describing the distribution of features. Once these principles have been mentioned, we shall have covered most (not all!) of the descriptive devices used in GPSG, and will be in a position to pursue some further description of English.
6.1 Extension of categories
As has been silently assumed over the last couple of chapters, there is now a less than direct relation between ID-rules and local trees (quite apart from the ordering issue). ID-rules contain relatively few feature specifications, leaving the HFC to fill many of these in and so generate fully specified trees. For instance, we might write the general rule (1a), which is responsible for a tree such as (1b).
(1) a. P2 → H[BAR 0], N2
    b. [local tree: P2[PFORM ON] dominating P0[PFORM ON] and N2[+PLU]]
The matching of [N] and [V] features between mother and head is ensured by the HFC, whereas the [+PLU] feature on the N2 is not required by any grammatical principle at all. The [PFORM ON] must match by virtue of the HFC, but nothing said so far requires it to be present. In all these cases, the tree contains features not specified in the rule. So how do they get there? And how do we ensure that only appropriate features are added?
Let us concentrate for the moment on the first of these questions. The approach taken in GPSG is that the categories present in a local tree can be extensions of the categories in the ID-rule responsible for generating that tree. For the moment, a fairly simple definition of extension suffices. An extension of a category is basically a superset of it, or:
A category X is an extension of a category Y iff all the feature specifications in Y are in X.
The features on the category on the left-hand side in (1a) are given in full in (2a), those on the mother in (1b) are given in (2b).
(2) a. {[−N], [−V], [BAR 2]}
    b. {[−N], [−V], [BAR 2], [PFORM ON]}
It should be clear that (2b) is an extension of (2a). Likewise the N2s in (1a, b) are shown in full in (3a, b) respectively.
(3) a. {[+N], [−V], [BAR 2]}
    b. {[+N], [−V], [BAR 2], [+PLU]}
What occurs in a tree, then, is not confined to what is specified by a rule. Features can be added, so that what appears in a tree is an extension of what is given in the rule. Such addition of features is known as feature instantiation; features added in this way are said to be instantiated. In contrast, features that appear on a category because an ID-rule requires their presence are said to be inherited by that category. On the N2 in (1b), the [+PLU] is instantiated, while [BAR 2] is inherited from (1a). In (1b), [PFORM ON] has been instantiated on both mother and head, and it is the HFC which ensures that only a tree with this feature instantiated on both rather than just one is well-formed. There are of course other extensions of P2, but only those with [PFORM ON] will enable (1b) to meet the HFC. Feature instantiation is therefore subject to general feature principles; not every extension of a category leads to a well-formed local tree.
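The definition of extension translates directly into a test that every attribute-value pair of the rule category recurs in the tree category; a quick Python sketch (with our own dictionary representation of categories):

def is_extension(x, y):
    """X is an extension of Y iff all the feature specifications
    (attribute-value pairs) in Y are also in X."""
    return all(x.get(attr) == val for attr, val in y.items())

p2_rule = {'N': '-', 'V': '-', 'BAR': 2}                  # (2a)
p2_tree = {'N': '-', 'V': '-', 'BAR': 2, 'PFORM': 'ON'}   # (2b)
print(is_extension(p2_tree, p2_rule))  # True: [PFORM ON] is instantiated
print(is_extension(p2_rule, p2_tree))  # False: extension is not symmetric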
It is now convenient to explain further how feature instantiation interacts with underspecification of categories in rules introducing lexical categories. In 4.4 we provided (4), the rule for a verb like say.
(4) VP → V0[9], V2[VFORM FIN, +SUBJ]
The sentential complement of say can occur with and without a complementizer:
(5) a. John said that he would be late.
    b. John said he would be late.
Both these possibilities are allowed by (4), as there are two extensions of the non-head daughter, those in (6).
(6) a. V2[FIN, +SUBJ, +COMP] (=Sbar)
    b. V2[FIN, +SUBJ, −COMP] (=S)
We also argued for ID-rule (7) for become, to capture the fact that it occurs with NP or AP.
(7) VP → V0[n], [+N, BAR 2]
In a tree, the subcategorized-for phrase will be an extension of [+N, BAR 2], i.e. one of the possibilities in (8).
(8) a. {[+N], [+V], [BAR 2]} (=AP)
    b. {[+N], [−V], [BAR 2]} (=NP)
For instance, in (9), become occurs with an NP. Once more, underspecified rules combine with feature instantiation to enable more general statements to be made (e.g. a single lexical entry for become).
(9) He has become a millionaire.
There is a further point concerning feature instantiation that we should attempt to clarify here. A tree in GPSG consists of a number of local trees, and every node (except the root and the leaves) can be said to look in two directions: it is both the mother of a local tree and the daughter in another local tree. The question of whether a feature is inherited or instantiated on a node really makes sense for local trees only, since the answer can vary for the two local trees a node is part of. To make this more concrete, consider (10a), the tree for which might be given as (10b).
(10) a. The man laughed.
     b. [tree: S dominating N2 (the man) and VP dominating V0 (laughed), with the features spelled out in (11)]
Let us concentrate on the VP node in (10b); it is part of the following two local trees:
(11) a. {[+V], [−N], [BAR 2], [+SUBJ], [VFORM FIN]}
        {[−V], [+N], [BAR 2], [−PLU]}
        {[+V], [−N], [BAR 2], [−SUBJ], [VFORM FIN]} (=VP)
     b. {[+V], [−N], [BAR 2], [−SUBJ], [VFORM FIN]} (=VP)
        {[+V], [−N], [BAR 0], [SUBCAT 1], [VFORM FIN]}
The ID-rules responsible are as in (12).
(12) a. S → N2, H[−SUBJ]
     b. VP → V0[SUBCAT 1]
In (11a), [−SUBJ] is the only feature inherited by the VP; the other features are all instantiated and (because of the HFC) must be the same as on the mother S node. In (11b) all the features on the VP are inherited from rule (12b), with the exception of VFORM, which is instantiated. So there is no single answer to a question such as “is the [+V] feature on the VP inherited or instantiated?”. GPSG rules and principles, then, can be said to license or admit local trees, which are then assembled together to form an entire phrase marker.
6.2 Feature Co-occurrence Restrictions
The conclusion reached in the last section about some extensions of categories being ill-formed is in fact correct, but was illustrated only with regard to local trees. What we can now show is that some extensions are ill-formed simply in their own terms, irrespective of the tree they occur in. For instance, (13) and (14) are also extensions of P2.
(13) {[−N], [−V], [BAR 2], [VFORM FIN]}
(14) {[−N], [−V], [BAR 2], [+SUBJ]}
Example (13) describes a finite P2, (14) a P2 containing a subject: both entities as mythical as the unicorn. Hence if we allow instantiation to take place freely, we will end up with non-occurring, and obviously nonsensical, categories. Other monstrosities that might turn up include finite nouns, adjectives in the nominative, Ss subcategorized for an object N2, etc. Clearly we have to introduce some mechanism to exclude these. A similar situation is encountered in semantics, where a statement such as “anything human is also animate” might be used to exclude the possibility of an inanimate person.
Intuitively, it is fairly clear what is wrong with (13): only verbs (or projections of verbs) can be finite (or have any kind of value for VFORM). Likewise, only Ss can be [+SUBJ] (see (14)). Concentrating on (13), we need a statement that says that, if a feature VFORM occurs on a category, that category must also be [+V, −N]. This would identify (13) as an ill-formed category. Statements of this type are known in GPSG as Feature Co-occurrence Restrictions (FCRs, for short); they constrain the kinds of features that can occur together in a category.
In this case, we would have (15).
(15) [VFORM] ⊃ [+V, −N]
Rule (15) can be understood as “any category with a value for VFORM must be [+V, −N]”. This means that (13) is not well-formed, whereas (16) is permissible.
(16) {[+V], [−N], [BAR 2], [−SUBJ], [VFORM FIN]}
A category that meets all FCRs is said to be a legal category. Those feature specifications that can appear in legal extensions of a category are said to be free feature specifications (they are the ones not excluded by FCRs or other principles not yet met with). Example (14) is ruled out by the FCR (17).
(17) [+SUBJ] ⊃ [+V, −N, BAR 2]
That is to say, “if a category is [+SUBJ], it must be V2”. Like all FCRs, the domain of (15) and (17) is a single category: FCRs cannot require a mother with certain features to have a daughter with certain features (or vice versa).
Let us look at some further examples of FCRs. (18) states that any category with [PFORM] must be (a projection of) a preposition.
(18) [PFORM] ⊃ [−V, −N]
All the FCRs seen so far have been of the form “X ⊃ Y”, where both X and Y are sets of features. But other possibilities also occur. For instance, only [+SUBJ] categories (S and Sbar) can have a value for [COMP], and vice versa. Rather than state this by means of two implications, it can be stated in a single biconditional (19).
(19) [COMP] ≡ [+SUBJ]
That is, a category has a value for COMP iff it is [+SUBJ]. An interesting example we can take is the distribution of SUBCAT. We want to exclude categories such as those in (20) occurring in trees.
(20) a. {[+V], [−N], [BAR 2], [−SUBJ], [SUBCAT 2]}
     b. {[+V], [+N], [BAR 1], [SUBCAT n]}
     c. {[+V], [−N], [BAR 0]}
Intuitively, only lexical categories such as V0 should have a SUBCAT value; moreover, they must have a SUBCAT value. This can be captured in the FCRs shown in (21).
(21) a. [BAR 0] ≡ [SUBCAT] & [N] & [V]
     b. [BAR 1] ⊃ ~[SUBCAT]
     c. [BAR 2] ⊃ ~[SUBCAT]
(21a) says: “[BAR 0] iff [SUBCAT] and [N] and [V]”, thus excluding all the categories in (20).
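Each FCR is naturally modelled as a predicate over a single category; a legal category is one satisfying all of them. A brief Python sketch (our own encoding) for (15), (17) and (21b):

def fcr_15(cat):   # [VFORM] implies [+V, -N]
    return 'VFORM' not in cat or (cat.get('V') == '+' and cat.get('N') == '-')

def fcr_17(cat):   # [+SUBJ] implies [+V, -N, BAR 2]
    return cat.get('SUBJ') != '+' or (
        cat.get('V') == '+' and cat.get('N') == '-' and cat.get('BAR') == 2)

def fcr_21b(cat):  # [BAR 1] implies no SUBCAT feature at all
    return cat.get('BAR') != 1 or 'SUBCAT' not in cat

def legal(cat, fcrs=(fcr_15, fcr_17, fcr_21b)):
    """A legal category meets every FCR."""
    return all(fcr(cat) for fcr in fcrs)

finite_vp = {'V': '+', 'N': '-', 'BAR': 2, 'SUBJ': '-', 'VFORM': 'FIN'}
finite_p2 = {'V': '-', 'N': '-', 'BAR': 2, 'VFORM': 'FIN'}  # category (13)
print(legal(finite_vp))  # True: this is category (16)
print(legal(finite_p2))  # False: (15) rules out a finite P2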
The use of [N] and [V] in (21a) is required because (as will be seen later) SUBCAT also occurs on some categories that have no value for BAR. (21b, c) illustrate that FCRs can make negative specifications (i.e. “if X, then not Y”); they are needed alongside (21a) because in certain circumstances higher-level categories may have no specification for N and V.1
The FCRs in (21) include within their scope the requirement that any category with values for [V] and [N] and also specified for SUBCAT must be [BAR 0]. We can make use of this to further simplify rules for introducing lexical categories. In 4.4, we gave the rule (22a) for a verb like rely. By exploiting the HFC, we could simplify (22a) (using VP instead of V1)2 to give (22b).
(22) a. V1 → V0[49], PP[ON]
     b. VP → H[BAR 0, SUBCAT 49], PP[ON]
But (22b) can be made even simpler by allowing (21a) to be responsible for the presence of [BAR 0] on the head:
(23) VP → H[49], PP[ON]
Among the local trees sanctioned by (23) is (24).
(24) {[+V], [−N], [−SUBJ], [BAR 2], [VFORM FIN]}
        {[+V], [−N], [BAR 0], [SUBCAT 49], [VFORM FIN]}
        {[−V], [−N], [BAR 2], [PFORM ON]}
On the mother, VFORM has been instantiated; the other features are inherited from the rule. On the head daughter, only SUBCAT is inherited from (23), the other features being instantiated; of these, the category would be ill-formed by (21a) if [BAR 0] had not been instantiated. The HFC ensures matching of V, N and VFORM between head and mother. We can now introduce another piece of terminology: an ID-rule is said to be a lexical ID-rule if its head includes SUBCAT.
It can now be appreciated more fully what “where this is possible” means in our statement of the HFC (see 5.1). It means “unless excluded by FCRs”, as well as “unless inherited from an ID-rule”. So the head in (24) cannot be [BAR 2] because of the clash with (21).
As a last example of an FCR for the time being, we can make use of the fact that finite verbs (i.e. [VFORM FIN]) are past or present tense (see (25a, b)). If we use a Boolean feature PAST, we can posit the FCR in (25c).
(25) a. He leaves.
     b. He left.
     c. [PAST] ⊃ [FIN]
This captures the fact that only finite verbs have a value for PAST: English does not have past-tense infinitives, for instance.
1. For the most part, the effect of (21) is equivalent to a simpler FCR “[BAR 0] ≡ [SUBCAT]”, and unfortunately the reasons for the complications cannot be appreciated quite yet.
2. Recall from 5.2 that we have abandoned the V1 category.
It may be useful here to summarize the kinds of statement that can be made in FCRs:
(26) a. X ⊃ Y
     b. X & Y ⊃ Z
     c. X ⊃ Y & Z
     d. X ⊃ Y ∨ Z
     e. X ⊃ ~Y
     f. X ≡ Y
The symbol ∨ in (26d) means ‘or’. X, Y and Z here can be either attributes or attribute-value pairs (e.g. (25c) is an instantiation of (26a), with X being the attribute PAST and Y being the attribute-value [VFORM FIN]). It is possible to combine the kinds of statement made in (26), e.g. (26g):
(26) g. X & Y ⊃ ~Z
Before we leave FCRs for the time being, we can make a point about their status in universal grammar. Some FCRs have the appearance of good candidates for universality (e.g. (15) and (18), and perhaps all of those listed here). Others, which will be met in due course, are likely to be language-specific. This distinction, however, has no relevance to their interpretation in a particular grammar.
6.3 Feature Specification Defaults
We now come onto another principle governing feature instantiation. We can approach it by considering the occurrence of CASE on English pronouns:
(27) a. I hate them.
     b. I gave the book to her.
     c. I would prefer for her to stay.
     d. Him, he’s a complete head-case.
Of the two forms, nominative (NOM) occurs only on the subject of a finite verb, while accusative (ACC) occurs on the objects of verbs and prepositions, as the subject of an infinitive (which is how her would be analyzed in (27c)), and in topic position (as in (27d)). One way to account for the distribution of ACC would be to include it explicitly in all relevant ID-rules, e.g. (28).
(28) a. VP → H[2], N2[ACC]
     b. P2 → H[BAR 0], N2[ACC]
It is plain, though, that this addition will have to be made to an awful lot of rules. But if we let CASE be freely instantiated, then we are allowing NOM and ACC to occur in the wrong places. Suppose that instead we note that NOM is the exceptional value, and require it to be specified on subjects of finite verbs by some explicit grammatical statement (though we cannot actually state this until we have some mechanism for handling agreement: see 9.2), and at the same time leave ACC to be instantiated elsewhere.
As part of this approach, we would have to ensure that NOM could not be instantiated, i.e. that NOM occurs only where inherited from a rule, whereas ACC can (like other values encountered so far) be instantiated. What is implied here is a concept of defaults, the idea that ACC is the default (or unmarked) value for CASE, which occurs whenever no specific value for CASE is required by a rule.3 The mechanism for stating such notions is a Feature Specification Default (FSD for short), in this instance, (29).
(29) [+N, −V, BAR 2] ≡ [ACC]
This is interpreted to mean that an N2 must be [CASE ACC] unless some specific statement requires the contrary; and anything with the feature [CASE ACC] must be an N2 unless the contrary is specified. (29) differs from an FCR in not being absolute: it does not exclude the category [CASE NOM]; it just says that this occurs only when required. We can supplement (29) with a statement that the default is “not NOM”, i.e. NOM cannot be instantiated unless required:
(30) ~[NOM]
This is interpreted to mean that [CASE NOM] occurs on a category only when required to by a rule or other grammatical statement. (30) is not a conditional, but (29) is, showing that FSDs may have the same format as FCRs. It is therefore necessary to distinguish between these two kinds of statement notationally, as in (31).
(31) a. FSD: ~[NOM]
     b. FSD: [+N, −V, BAR 2] ≡ [ACC]
     c. FCR: [PFORM] ⊃ [−V, −N]
We have now ensured that, when CASE is instantiated on a category, its value will be ACC. Gazdar (1987: p. 43) notes that conditional FSDs are harder to interpret than ones like (31a), and suggests that defaults should be seen as properties of categories not of specific features. We should add that it seems perfectly reasonable to regard ACC as the default value for CASE, since (in colloquial English at least) it frequently shows up where one might expect NOM (see (32a–c)), though the reverse does not happen, except in cases of hypercorrection such as (32d), which can be attributed to prescriptive pressure.
(32) a. Him and me did it.
     b. * Him did it.
     c. Who wants to know? Me.
     d. ? between you and I
3. The idea of defaults is a commonplace one in linguistics, e.g. one might say that suffixation of -s is the default realization of ‘plural’ in English, one overridden by statements of irregular cases such as men.
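The workings of (31a, b) can be sketched as follows (in Python; a deliberate simplification under our own encoding): a value fixed by a rule or principle is privileged and stands, otherwise the default ACC is filled in, so NOM never arises by instantiation alone.

def instantiate_case(n2, privileged_case=None):
    """Apply the CASE defaults to an N2: a privileged value (one
    required by an ID-rule or other grammatical statement) is kept;
    failing that, the FSD supplies ACC, and NOM is never volunteered."""
    cat = dict(n2)
    cat['CASE'] = privileged_case if privileged_case else 'ACC'
    return cat

n2 = {'N': '+', 'V': '-', 'BAR': 2}
print(instantiate_case(n2))           # object position: ACC by default
print(instantiate_case(n2, 'NOM'))    # finite subject: NOM by stipulation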
We shall be seeing more FSDs later on. But we should now say a little about how they are interpreted and how they relate to other feature principles. It was slightly misleading to say above that FSDs apply when no specific value is specified by rule. A more accurate statement is that FSDs apply when no specific value is specified by ID-rule or some principle of feature instantiation (such as the HFC or an FCR). Let us say that a feature specification is privileged if it follows from some ID-rule or a general principle; a privileged feature will occur in every well-formed extension of a category in a local tree. We can now say that a category is well-formed vis-à-vis FSDs if for every feature either that feature is privileged or any relevant FSD is true.4
Though useful, FSDs are in fact a source of problems for GPSG. Their interaction with other feature principles (e.g. HFC) can become extremely complex, and consequently it may be very hard indeed to decide whether a particular local tree is well-formed. GPSG is therefore not always as perspicuous as one would like.
6.4 Concluding remarks
This chapter has accomplished more than it may have seemed, since we have shown how the relation between schematic ID-rules and fully featured local trees is handled. Feature instantiation accounts for this relation, but the HFC, FCRs and FSDs (plus some other principles to be seen later) constrain instantiation so that only wanted categories result. A tree for an entire linguistic expression is well-formed iff every local tree it contains is well-formed, and each branch terminates in a lexical item. Lexical entries will generally be fully specified for relevant features (e.g. VFORM on verbs, PLU on nouns), and the various feature principles will ensure that well-formed trees have the full range of features, i.e. that instantiation is usually “maximal”: intuitively, if a feature can be added to a category (modulo FCRs, etc.) then it will be. (33) is a perfectly grammatical extension of the category VP, but would usually not occur in a tree since a value for VFORM would be present by virtue of its presence on the verb and the workings of the HFC.5
(33) {[+V], [−N], [BAR 2], [−SUBJ]}
We have actually said very little about lexical entries here (see Gazdar et al. 1985: p. 104); suffice it to say that the lexicon makes available local trees like (34) and that this can occur in a subtree containing V0[1, FIN].
4. In fact, this simplifies a very complex situation, which we return to later (e.g. in 7.1), but it is adequate for present purposes.
5. In fact, as will emerge when we consider coordination, there are circumstances where (33) might occur in a tree.
(34) [local tree: V0[SUBCAT 1, VFORM FIN] dominating a finite intransitive verb]
Other unification-based theories generally have equivalents of FCRs and FSDs, though they are often stated in a very different way, or used in a rather different manner. For instance, JPSG employs just two FCRs (Gunji 1987: pp. 222–3), both to handle specific facts about Japanese, and no FSDs at all.
6.5 Further reading
Feature instantiation, FCRs and FSDs are discussed in Horrocks (1987: pp. 171–2 and 183–4), Sells (1985: pp. 100–4) and Gazdar et al. (1985: pp. 26–31 and 75–7). Gazdar (1987) discusses some general issues about default mechanisms in linguistics.
7 More on English structure
We have now built up, piece by piece, a fairly sophisticated grammatical framework, consisting of ID-rules, LP-statements, FCRs, FSDs and a universal principle, the HFC. Still to come are two other universal feature principles and another descriptive mechanism. Before introducing these, however, it will be best to put our apparatus to some use by enlarging our grammar to cover far more areas of English. This is therefore our goal in the present chapter: to provide further formalized descriptions that capture important points about English syntax. While the details of our analyses are specific to GPSG, the general spirit of them is not and could be captured by many other formalisms.
7.1 Prepositional phrases
We have not taken the internal structure of prepositional phrases very seriously up to now, positing only the rule:
(1) P2 → P0, N2
But there is a good deal more to say about the structure of P2. Consider the examples in (2).
(2) a. right into the corner
    b. completely beyond the pale
    c. so completely beyond the pale
    d. * so right into the corner
As (2c) shows, P2s can contain adverbs preceded by degree words such as so; (2d) would suggest that right is also a degree word, since it cannot co-occur with so. Examples (3), using the pro-form so (which is quite distinct from the degree word so!), suggest that the preposition+N2 sequence forms a constituent.
(3) a. John put the book right [into the corner], but the flower less so.
    b. Ian is completely [beyond the pale], but Peter less so.
By familiar logic, this implies a constituent inside P2, and the clear candidate is P1. If we regard degree words as [SPEC P] (abbreviated to SpecP), we might assign the structure in (4) to (2c).
(4) [tree for (2c): P2 dominating SpecP (so) and P1; P1 dominating Adv (completely) and P1; the lower P1 dominating P0 (beyond) and N2 (the pale)]
Like APs (see 4.6), PPs can include modifier PPs:
(5) a. beyond the pale in many ways
    b. below his best in some respects
    c. in some respects below his best
To handle these too, we shall need some recursive rules involving P1, giving us (6).
(6) a. P2 → H[BAR 1], (SpecP)
    b. P1 → H, Adv
    c. P1 → H, P2
    d. P1 → P0, N2
We shall now turn our attention to the sisters of P0, and see that much more than rule (6d) is called for, since prepositions allow a range of subcategorization possibilities. Consider:
(7) a. before the fight
    b. before the fight finished
    c. right before the fight finished
    d. so immediately before the fight finished
What are often called subordinating conjunctions are most conveniently treated as prepositions that subcategorize for sentential complements. This avoids claiming that many prepositions have homonymous subordinating conjunctions, and captures the fact that the possibilities for specifiers and adverbs are the same in (7) as in (2).1 Before can occur before both N2 and S, during only before N2, and while only before S (except in elliptical contexts such as (8)).
(8) While a child, he learned Latin.
Prepositions can also occur before P2s:
(9) a. from under the table
    b. towards near the centre
Lexical ID-rules for prepositions, then, will need to be at least as in (10).
(10) a. P1 → H[n], N2
     b. P1 → H[n], S[FIN]
     c. P1 → H[n], P2
Our LP-statement (11a) handles ordering here; we also need (11b).
(11) a. [SUBCAT] ≺ ~[SUBCAT]
     b. [SPEC] ≺ P1
We make no ordering statement about P1 vis-à-vis Adv or P2, thus allowing both of (5b, c), and (12a) alongside (12b).
(12) a. beyond the pale completely
     b. completely beyond the pale
We have previously seen the need to employ a feature PFORM in order to capture the fact that some verbs subcategorize for a P2 with a specific preposition. The question arises as to whether this feature is instantiated on all [−N, −V] categories. Consider the examples in (13) where the lexical ID-rule introducing the P2 must specify a value for PFORM.
(13) a. John relies on his bike.
     b. Jack is waiting for the bus.
     c. Andy laughed at Bill.
1. These claims do not necessarily carry over straightforwardly to other languages; for instance, German tends not to use homonymous items here, cf. ‘after’, which is nach before N2 but nachdem before S. In French, we could just say that prepositions can be followed by Sbar, as in après que.
In each case, the preposition is meaningless, contributing nothing to the semantics of the sentence. The predicate-argument structure for (13a) would just be rely(John, his_bike). Contrast (13) with the examples in (14).
(14) a. John put his book on the table.
     b. Jim is baking a cake for us.
     c. John drove to Stockport.
     d. Andy threw stones at the bridge.
     e. He left after the fight finished.
The prepositions in (14) are fully meaningful, and could be replaced by other prepositions with a resulting change of meaning. The rules introducing the P2s in (14) will carry no specification for PFORM. Gazdar et al. (1985: pp. 132–3) propose to capture these differences by claiming that the P2s in (14) (unlike those in (13)) have no specification for PFORM even in a phrase-marker. So PFORM cannot be instantiated unless its presence is required by an ID-rule. This is stated by means of the FSD (15).
(15) FSD: [PFORM] ⊃ [BAR 0]
This is a conditional FSD: ‘as a default, [PFORM] implies [BAR 0]’. There will be two lexical entries for (e.g.) at: the one for semantically empty at includes [PFORM AT], the one for semantically full at does not. Only the one with [PFORM AT] will occur in (13c), only the one without in (14d). The local tree for the P1 in (14d) will be (16).
(16) {[−V], [−N], [BAR 1]}
        {[−V], [−N], [BAR 0], [SUBCAT n]}
        {[−V], [+N], [BAR 2], [−PLU]}
The effect of (15) is to prevent a local tree such as (17a) from being sanctioned by an ID-rule such as (17b) (i.e. one that does not introduce PFORM).
(17) a. [local tree: VP dominating V0 and a P2 bearing an instantiated PFORM value]
     b. VP → V0, P2
The structure in (17a) is ill-formed because the PFORM feature is not privileged, yet it appears in the tree on a [BAR 2] category, in violation of (15). Recall our discussion of the semantics of defaults at the end of 6.3: in (17a) PFORM is not privileged, yet (15) does not hold of the category in question. There is, however, an added complication to be mentioned here. Rule (6a) above, if we ignore the optional SpecP, is as in (18a). It makes no mention of PFORM, yet it sanctions local trees such as (18b).
(18) a. P2 → H[BAR 1]
     b. [local tree: P2 and its head P1 bearing matching instantiated PFORM values]
How are the two categories in (18) compatible with the FSD in (15)? The answer to this depends on a subtle interpretation of FSDs and the way they interact with principles such as the HFC. In 6.3 we introduced the idea of features being exempt from defaults if they were privileged (e.g. inherited from a rule). We now extend this idea by also regarding features as privileged if they co-vary in a local tree by dint of the operation of some feature principle. This means that features can be instantiated in apparent violation of an FSD provided that they are subject to some feature-matching principle like the HFC. Co-variation holds of PFORM in (18b) because its values must match each other, and changing either implies changing the other in harmony with it. So PFORM can be instantiated on both mother and head daughter in (18b), as it is a co-varying and hence privileged feature.2
Perhaps this is an appropriate place to summarize the status of FSDs. A statement made in an FSD (e.g. that feature X does not occur) will be true unless feature X is in some way privileged, which means that it is not subject to the requirements of FSDs. There are three ways in which features can be privileged: (i) they can be inherited from a rule (e.g. an ID-rule may state specifically that feature X occurs on some node); (ii) they can follow from some general principle (e.g. a principle may require feature X to be present on a node because it matches a feature on another node where it is required to occur); (iii) they can co-vary (e.g. feature X on a node can be identical with feature X on that node’s head, whatever the value of X is, and even though neither feature is privileged in other ways). Unfortunately, there is yet a further complication concerning FSDs, to be met with in 11.4.
Note that we are assuming here that benefactive for-phrases (see (14b)) do not include a PFORM feature, and are not part of a verb’s frame (contra Gazdar et al. 1985: p. 247; and cf. our remarks at the end of section 4.4).
7.2 The complementizer system
Our next topic is a brief examination of complementizers. In GB, the complementizer node is seen as of great importance, being the landing-site for wh-movement, and indeed the head of Sbar. GPSG attaches far less mystique to complementizers, but we should still revise the simple analysis adopted so far. In fact, as we shall see, there are two plausible analyses of complementizers in GPSG; and the one that appears preferable requires a slight adjustment to the idea of headedness.
2. For technical discussion of this, see Gazdar et al. (1985: pp. 103–4).
We have used an undecomposed category Comp, and a Boolean feature COMP, with (19) the rule for Sbar.

(19) V2[+SUBJ, +COMP] → Comp, H[−COMP]

But the various members of the class of complementizers occur with different kinds of clause, e.g. that with finite clauses and for with infinitives, while whether occurs with either:

(20) a. I believe that John has left.
b. For John to leave now would be best.
c. I wonder whether John will leave.
d. I wonder whether to leave.

So we clearly need to say something about the relation between the value of VFORM and the presence of individual complementizers. A first step is to change the feature COMP so its values are either individual complementizers or NIL:

(21) COMP: {FOR, THAT, WHETHER, IF, NIL}

Our previous [−COMP] is now [COMP NIL]; it still distinguishes S from Sbar. But [+COMP] is now replaced by features specifying which complementizer is present; e.g. the complement Sbar in (20a) contains the feature [COMP THAT]. To ensure that [COMP THAT] occurs only with a finite clause, and [COMP FOR] only with infinitives, we can use FCRs:

(22) a. FCR: [COMP THAT] ⊃ [FIN]
b. FCR: [COMP FOR] ⊃ [INF]
c. FCR: [COMP] ≡ [+SUBJ]

Any node with [COMP THAT] must also be [VFORM FIN]; in fact, only Sbars can be [COMP THAT], since S is [COMP NIL] and only [+SUBJ] categories can have an attribute COMP, by dint of the FCR (22c) (though this appears to offer no way of handling (20d)).

What about the category of complementizers? Gazdar et al. (1985: pp. 112–13) extend the notion of SUBCAT so that its values are not just integers but also names of complementizers; so the word that belongs to the category {[SUBCAT THAT]}. We now have to ensure that the value of COMP on the Sbar node is identical to the value of SUBCAT on the complementizer node. The HFC is of course of no help here, since (a) the complementizer is not the head of the Sbar, and (b) we are not dealing with values of the same attribute. Rule (19) is to be reformulated as (23) (the symbol ∈ means set-membership: 'is one of').

(23) V2[COMP α] → [SUBCAT α], H[COMP NIL], where α ∈ {THAT, FOR, WHETHER, IF}

We are here forced to use a variable in an ID-rule (see 5.1). In any local tree generated by (23), the value of COMP on the mother and SUBCAT on the non-head
daughter must be identical and be drawn from the set {THAT, FOR, WHETHER, IF}. Some other formalisms make extensive use of variables in rules to pass values around trees, and GPSG is able to avoid this in most cases by reliance on the HFC (and other principles). So in a sense the use of variables in (23) is unfortunate. It does, however, reflect the fact that the complementizer is in some ways the head of Sbar (see Hudson 1987: p. 129), though it is clearly not the morphosyntactic locus in terms of realization of VFORM. The orthodox GB view is that the complementizer is indeed the head of Sbar, which is analyzed as CP (complementizer phrase). A variant on this approach is that of Emonds (1985: Ch. 7), who regards complementizers as being prepositions and so as heads of Sbar (which is actually a prepositional phrase). Consideration of this would take us too far afield, however.

What justification is there for regarding complementizers as having (only) the feature SUBCAT? One justification will be noted at an appropriate point later, but a straightforward one we can mention here is that the ordering of complementizer before S now follows simply from our familiar LP-statement (11a) above.3

A slightly different analysis has been proposed by Warner (1989). Taking account of the fact that complementizers are head-like, he suggests that Sbar is multiply headed: both the complementizer and the S are heads. Complementizers have the feature [SUBCAT 49], as well as a COMP attribute whose values are the names of complementizers. He then writes the following rule:

(24) V2[COMP α] → H[SUBCAT 49], H[COMP NIL], where α ∈ {THAT, FOR, WHETHER, IF}

The S inherits VFORM (etc.) from the mother, and the complementizer inherits the value of COMP. An FCR (which we shall not formulate here) ensures that anything with the feature [SUBCAT 49] has no attributes other than SUBCAT and COMP. This analysis handles the percolation of features very neatly, and avoids the undesirable use of variables to percolate features in (23). But it violates our constraint on every rule introducing a single head, so we shall not adopt it here. However, as we shall see in Chapter 12, it is probable that this constraint should be relaxed anyway. For present purposes, though, we shall keep to the more standard GPSG analysis in (23). We now display the structure of the embedded clause in (20a) in (25) below. A verb such as believe can be introduced by the ID-rule (26).

(26) VP → H[40], V2[+SUBJ, FIN]

This allows for either [COMP NIL] or [COMP THAT] to be instantiated on the non-head daughter, thus allowing for both (20a) and (27).

(27) I believe John has left.

3. Incidentally, we can now see one reason for the complexity concerning the FCRs for the SUBCAT feature in examples (21) in 6.2: a complementizer has a SUBCAT feature but is not [BAR 0] (in fact, has no value for BAR at all), because it has no values for the N and V features either. Hence the simple FCR (i) is not adequate.

(i) [BAR 0] ≡ [SUBCAT]
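The variable matching demanded by (23) can also be pictured procedurally. The sketch below is ours (again using dictionaries for categories) and is not an implementation of the formalism:

# Sketch: checking a local tree against rule (23), where the variable
# (here alpha) must take the same value on mother and complementizer.

COMPLEMENTIZERS = {"THAT", "FOR", "WHETHER", "IF"}

def licensed_by_rule_23(mother, comp, head):
    """Rule (23): V2[COMP a] -> [SUBCAT a], H[COMP NIL], where a is
    drawn from {THAT, FOR, WHETHER, IF} and matches on mother and
    non-head daughter."""
    alpha = mother.get("COMP")
    return (alpha in COMPLEMENTIZERS
            and comp.get("SUBCAT") == alpha
            and head.get("COMP") == "NIL")

# The embedded clause of (20a): 'that John has left'
sbar = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "+", "COMP": "THAT"}
that = {"SUBCAT": "THAT"}
s = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "+", "COMP": "NIL"}

assert licensed_by_rule_23(sbar, that, s)                   # variable matches
assert not licensed_by_rule_23(sbar, {"SUBCAT": "FOR"}, s)  # mismatch rejected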
(25) [tree diagram not reproduced]

Any verbs requiring that before a complement clause (e.g. denote, though judgements vary in these cases) will require a special rule, (28).4 With FCR (22a), there is no need to require that the V2 in (28) be FIN.

(28) VP → H[n], V2[+SUBJ, COMP THAT]

7.3 More on infinitival VPs

We can now turn our attention to infinitival complements, and especially to the analysis of those with to. Looking first at infinitives without to, we can note their occurrence after make:

(29) Bob made John wash the dishes.

If we analyze wash here as [VFORM BSE] (i.e. the base form of the verb), we would need the lexical ID-rule (30) for verbs such as make.

(30) V2 → H[n], N2, VP[BSE]

Infinitives with to of course occur as complements too:

(31) a. John prefers to wipe up.
b. John persuaded Bob to help him.
c. Jim wishes to leave.
d. Jack tried to give it up.

The to is obligatory in the examples in (31), and we must therefore account for this. But, as a preliminary, we should consider the constituency of the embedded VPs in (31). Evidence from coordination suggests both that the infinitive plus its complements forms a constituent, and that the sequence of to plus this unit does too (32).

4. For discussion of use vs. omission of that, see Kilby (1984: pp. 172–9).
(32) a. John persuaded Bob to [help him] and [abandon Joe].
b. John persuaded Bob [to help him] and [to abandon Joe].

The sequence involving to can also be moved around (33a, b), which would suggest the constituency shown in (33c).

(33) a. It is to wipe up that John prefers.
b. To wipe up is what John prefers.
c. John [prefers [to [wipe up]]].

Besides constituency, the other issue is the category of to. A number of proposals exist as to how to should be analyzed (see Pullum 1982). One obvious-seeming one would be to regard it as a preposition, like the to in (34a). But infinitival to behaves quite unlike prepositional to. For instance, infinitival to cannot take the specifier right (see 7.1):

(34) a. Ken went to London.
b. He walked right to the corner.
c. * He prefers right to wipe up.

Infinitival to can contract onto a preceding verb in colloquial styles, but prepositional to cannot (35).

(35) a. Len used to smoke.
b. Len usta smoke.
c. Len is used to problems.
d. * Len is usta problems.

In fact, Pullum argues that to is a verb, more precisely a non-finite auxiliary verb. Only verbs can occur before base forms of the verb (see (29)); so can to. When a VP is omitted, only auxiliary verbs and to can precede the "omission site":

(36) a. John has lost, and Bill has _ as well.
b. John is losing, and Bill is _ as well.
c. John wants to win, and Bill wants to _ as well.
d. * John wants to win, and Bill wants _ as well.

Not can occur before a non-finite verb and before to (37).

(37) a. Not being a fool, I refuse your offer.
b. Not to leave would be best.
c. I prefer not to be hurried.
d. I prefer not being hurried.
e. * Since I not am a fool, I refuse your offer.

We shall look in more detail later at just how auxiliaries are analyzed (see 7.5); for now we can at least say more about the analysis of to. If to is a verb (the only member of its SUBCAT type) that takes a base VP as complement, we will need a separate analysis of the phrase including to. Let us say that to has the value INF for VFORM and so does the VP node dominating it. We would then need the rules
(38a, b) to introduce, respectively, to and a verb such as try. We would then be assigning structures such as (38c).

(38) a. VP[INF] → H[12], VP[BSE]
b. VP → H[15], VP[INF]
c. [tree diagram not reproduced]

Prefer, which occurs with infinitival VP or Sbar, needs an ID-rule along the lines of (39a), which would give us a tree such as (39b).

(39) a. VP → H[14], V2[INF]
b. [tree diagram not reproduced]

Rule (39a) ensures that the complement of prefer will be INF. But what ensures that it will also be [COMP FOR]? We used the FCR (22b) in the previous section to state that [COMP FOR] entails INF. We cannot, however, say that INF entails [COMP FOR] on the same node, since INF can occur without [COMP FOR] (see (38c), and the mother of VP[INF] in (39b)). It seems that we are dealing with a default here: anything that is INF and [+SUBJ] is also [COMP FOR], unless specified to the contrary (40).

(40) FSD: [INF, +SUBJ] ⊃ [COMP FOR]

A category such as (41) is, however, possible, if [COMP NIL] is inherited from a rule (in this case, rule (23) above).

(41) {V2, [+SUBJ], [COMP NIL], [INF]}

7.4 Noun phrases revisited

In Chapter 3, we argued for a recursive analysis of the N1 category within noun phrases, thus implying rules such as those in (42).
(42) a. N2 → SpecN, H[BAR 1]
b. N1 → H, P2
c. N1 → A2, H
d. Spec < [BAR 1]
e. A2 < N1 < P2

One way in which we can immediately improve these rules is by adding some lexical ID-rules for nouns. Determining valency patterns for nouns is less certain than doing so for verbs, so we shall stick mostly to deverbal nouns, where it can be said with reasonable confidence that complements of the stem verb are also complements of the derived noun (though the relation between the two kinds of frame is nowhere near as straightforward as this suggests; see Randall 1988 for discussion of the inheritance of verbal complements by nouns). The rules in (43) would suffice for death, love, argument and belief respectively (see Gazdar et al. 1985: p. 247).
(43) a. N1 → H[30]
b. N1 → H[35], P2[PFORM OF]
c. N1 → H[31], P2[PFORM WITH], P2[PFORM ABOUT]
d. N1 → H[32], V2[COMP THAT]

Two aspects of these need commenting on. First, both the sentences in (44) will be generated using (43c), as there is no need to provide an LP-statement ordering P2s with respect to each other:

(44) a. John regretted his argument with Bob about politics.
b. John regretted his argument about politics with Bob.

Secondly, [COMP THAT] is specified in (43d), since omission of complementizers is far harder after nouns than after verbs:

(45) a. I'm fascinated by your belief that demons exist.
b. ?? I'm fascinated by your belief demons exist.

We leave it to the reader to formulate further lexical ID-rules for nouns.

LP-statement (42e) requires adjectival modifiers to precede the head noun, but there are many cases where this requirement is violated, e.g. (46).

(46) a. the stars visible
b. the rivers navigable
c. the people guilty
d. the jewels stolen
e. the people involved
f. all those people present

In some cases, the adjective cannot precede the noun (as in *all those present people). In others, the position of the adjective vis-à-vis the noun reflects a semantic distinction. Bolinger (1967) suggests that, with a past participle used adjectivally, pre-nominal position denotes a characteristic of the head and post-nominal implies an action that the head had undergone (47).

(47) a. the stolen jewels
b. the jewels stolen

(47b), but not (47a), can be paraphrased as 'the jewels that had been stolen'. With adjectives other than past participles, the contrast is between a characteristic property (pre-nominal) and a temporary state (post-nominal):

(48) a. the only navigable river
b. the only river navigable

However, the vast majority of adjectives occur post-nominally only when they have a complement that makes them too "heavy" for pre-nominal position:

(49) a. a proud man
b. * a man proud
c. a man proud of his success
d. * a proud of his success man

Adjectives such as present could be shown in the lexicon as [+POST], and this could be a head feature that was instantiated on the A2 node via the HFC. We could then use the LP-statement (50a) to ensure the order seen in (46f), leaving other adjectives to occur in any position, thus no longer stating (50b) (see 42e).

(50) a. N1 < [+POST]
b. A2 < N1

The problem with a paradigm such as (49) is that it is not clear whether the deviance of (49d) is a matter of grammar or of performance, so we shall not formalize a solution here.5

A further property of NPs not yet considered is that they can contain possessive phrases (51).

(51) a. John's book
b. this man's hat
c. a book of John's
d. the barbarians' destruction of Rome
e. * a John's book

As (51e) shows, the pre-nominal possessive cannot occur with an article, so we need two separate rules (52).

(52) a. N2 → SpecN, H[BAR 1]
b. N2 → N2[+POSS], H[BAR 1]

[+POSS] in (52b) indicates a possessive NP, i.e. one with a final 's. We shall leave till a later section discussion of the internal structure of possessive NPs, and of post-nominal possessives such as (51c) (see 8.4). Instead, we shall turn our attention to the system of English personal pronouns. (53) is a reasonably uncontroversial lexical specification for me.

(53) {[+N], [−V], [BAR 0], [+PRO], [CASE ACC], [SUBCAT 30], [−PLU], [PER 1]}

We are treating pronouns, then, as nouns that have no complements (like death; see rule (43a) above). The only new feature seen in (53) is PRO, a Boolean head feature that distinguishes pro-forms in general from other lexical items. Pronouns cannot take complements, but they can be modified by adjectives and relative clauses (54).

(54) a. poor you
b. lucky me
c. I who have nothing

5. Liberman & Sproat (1992: p. 162) suggest that (49d) and comparable examples are ruled out for performance reasons.
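An entry such as (53) is just a set of feature specifications, so whether a word can occupy a given node comes down to checking that the two categories assign no conflicting values. A rough Python sketch, on our own encoding (none of this notation is the book's):

# Sketch: the entry (53) for 'me' as a feature bundle, plus a crude
# compatibility check between a lexical category and a node's category.

me = {"N": "+", "V": "-", "BAR": 0, "PRO": "+",
      "CASE": "ACC", "SUBCAT": 30, "PLU": "-", "PER": 1}

def compatible(lexical, node):
    """Two categories are compatible if they assign no conflicting
    values to any shared feature (a stand-in for unification)."""
    return all(node[f] == v for f, v in lexical.items() if f in node)

# A node requiring an accusative N0 admits 'me'...
assert compatible(me, {"N": "+", "V": "-", "BAR": 0, "CASE": "ACC"})
# ...but a nominative position does not.
assert not compatible(me, {"N": "+", "V": "-", "BAR": 0, "CASE": "NOM"})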
We are not dealing with relative clauses yet, but we should at least allow for examples like (54a, b), by means of the rules in (55).

(55) a. N2[+PRO] → H[BAR 1]
b. N1 → H, A2
c. N1 → H[SUBCAT 30]

(55b, c) are already existing rules; they now allow for a pronoun to be accompanied by an A2. (55a) is intended to exclude the possibility of a specifier with pronouns (56).

(56) a. * the me
b. * this him
c. the me you once knew

Examples like (56c) are clearly special, with the pronoun almost being quoted, so we shall leave these aside here. The rules in (52) must be restated so as to apply only to [−PRO]-N2s:

(57) a. N2[−PRO] → SpecN, H[BAR 1]
b. N2[−PRO] → N2[+POSS], H[BAR 1]

So a non-pronominal N2 is expanded by (57) into a head plus a specifier or possessive, while a pronominal N2 is expanded by (55a) into just a head. Given our rules for the occurrence of CASE, especially the position that ACC can be instantiated but NOM occurs only where required, lexical entries like (53) will suffice to ensure the correct distribution of pronouns. We regard CASE as a head feature, which seems only natural, though it is not done by Gazdar et al. (1985). Possessive pronouns such as my and mine will also be discussed in 8.4.

7.5 Auxiliaries

We now turn to the analysis of auxiliaries, viz. the modals and aspectual have and be (passive be and supportive do will not be dealt with yet). The distinctive behaviour of these verbs will not be reviewed here, and their differences from other verbs will mostly be left to a later section (10.1); here we shall be more concerned with deciding on the structure of straightforward clauses containing them.6

6. The most detailed GPSG description of auxiliaries is Gazdar et al. (1982); but this is written in an earlier-style formalism, and so may be rather hard to follow.

The analysis of auxiliaries has long been a controversial point within generative grammar, and even the fundamental issue of which syntactic category they belong to still attracts debate. One approach identifies a separate category of auxiliary, another includes them within the category of verbs. But from the viewpoint of feature-based syntax as used in GPSG, this difference of opinion becomes less crucial. By making use of the flexibility of features, it is easy to refer to both "the class of all verbs
including auxiliaries" and "the class of auxiliaries". If we posit the Boolean head feature AUX, these two classes can be written as in (58).

(58) a. {[+V], [−N], [BAR 0]}
b. {[+V], [−N], [BAR 0], [+AUX]}

We thus claim that auxiliaries are verbs, but at the same time there is a separate category of auxiliaries. This brings us to the question of the constituent structure of sentences such as those in (59).

(59) a. Jim has written a letter.
b. Jim is writing a letter.
c. Jim may write a letter.
d. Jim may have written a letter.
e. Jim may be writing a letter.
f. Jim may have been writing a letter.
g. Jim has been writing a letter.

As one test for constituency, we can use the construction known as VP-Preposing (see Andrews 1982), as seen in (60).

(60) They said Jim wrote a letter, and [write a letter] he did.

The bracketed sequence in (60) has been preposed, and so is revealed as a constituent. Let's try this (61) with some of the examples in (59).

(61) a. They said that Jim had written a letter, and [written a letter] he had.
b. They said that Jim was writing a letter, and [writing a letter] he was.
c. They said that Jim had been writing a letter, and [writing a letter] he had been.
d. * They said that Jim had been writing a letter, and [been writing a letter] he had.

(61a–c) suggest that the sequence of non-auxiliary + complements forms a constituent. But (61d) does not necessarily imply that been writing a letter in (59g) is not a constituent, since it may just be the case that auxiliaries do not prepose. Using the coordination test gives us further evidence (62).

(62) a. Jim [may write a letter] or [may send a postcard].
b. Jim may [write a letter] or [send a postcard].
c. Jim has [been writing a letter] and [been cooking his lunch].

Finally, we can consider use of the fronted pro-form so (63).
(63) a. They said that Jim had [been writing a letter], and so he had.
b. They said that Jim might [have been writing a letter], and so he might.
c. They said that Jim might have [been writing a letter], and so he might have.
d. They said that Jim might have been [writing a letter], and so he might have been.

In each case in (63), the bracketed sequence is antecedent of so and therefore appears to be a constituent. In other words, our tests are compatible with the structures in (64).

(64) a. Jim [has [been [writing a letter]]].
b. Jim [may [have [written a letter]]].
c. Jim [may [have [been [writing a letter]]]].

Each auxiliary occurs with a phrase that contains a verb (auxiliary or not), and the natural conclusion is that this phrase is a VP (i.e. V2[−SUBJ]). So the structure in (64d) might be assigned to (64c).
d. [tree diagram not reproduced]

Before we start writing appropriate rules, we can note that each auxiliary selects a specific value of VFORM on its sister VP, which by virtue of the HFC turns up on the head of that VP: modals select for [VFORM BSE], have for PSP, and be for PRP. So we would need rules such as (65).

(65) a. VP[+AUX] → H[70], VP[BSE]
b. VP[+AUX] → H[71], VP[PSP]
c. VP[+AUX] → H[72], VP[PRP]
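Each of these rules pairs an auxiliary class (identified by its SUBCAT value) with a required VFORM on the complement VP. Schematically, and purely as a sketch on our own encoding:

# Sketch: the selection encoded by rules (65a-c) as a table from the
# auxiliary's SUBCAT value to the VFORM of its sister VP.

AUX_SELECTS = {70: "BSE",   # modals:  may write
               71: "PSP",   # have:    has written
               72: "PRP"}   # be:      is writing

def licensed_by_65(head, vp_sister):
    """The head must be an auxiliary whose SUBCAT value selects the
    VFORM instantiated on its VP sister."""
    return (head.get("AUX") == "+"
            and AUX_SELECTS.get(head.get("SUBCAT")) == vp_sister.get("VFORM"))

has = {"V": "+", "N": "-", "BAR": 0, "AUX": "+", "SUBCAT": 71}
written = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "-", "VFORM": "PSP"}

assert licensed_by_65(has, written)                         # has written
assert not licensed_by_65(has, dict(written, VFORM="PRP"))  # *has writing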
Each SUBCAT value picks out a small set of verbs: modals for 70, have for 71, be for 72.7 So we would have the fairly full tree (66b) for the simple example (66a).

7. In fact, we shall later give a much more generalized version of (65c), but the present one is adequate for now.

(66) a. Jim has written a letter.
b. [tree diagram not reproduced]

If we are making a recursive analysis of both auxiliaries and adverbials of various categories (see 2.3 and 7.6), then we will assign two structures to a sentence such as (67a) with one of each (67b–c).

(67) a. John had left on Tuesday.
b. [tree diagram not reproduced]
c. [tree diagram not reproduced]

And this is correct, for there are two different readings for (67a): that the leaving took place on Tuesday (see (67c)), and that it took place before Tuesday (see (67b)). It must, however, be admitted that for many such examples we shall assign two structures where there is no perceivable ambiguity (as in (68)).

(68) John is leaving on Tuesday.

As a final point in this section: how do we explain the strict ordering of auxiliaries as in (69)?

(69) a. * Jim is having left.
b. * Jim has could leave.

It may be that there is no general syntactic phenomenon here. These examples may well be deviant for purely idiosyncratic lexical reasons (e.g. aspectual have has no present participle form), or for semantic reasons which make a syntactic account superfluous. For instance, McCawley (1988b: pp. 222ff.) argues that progressive be must take a complement that refers to an activity or process, rather than a state, so that (69a) contravenes this, since 'having left' is a state. He further suggests that it is possible to find examples of progressive be followed by perfective have, as in (70):
(70) Whenever I see you, you're always just having returned from a vacation.

If such examples really are well-formed, a syntactic explanation for the deviance of (69a) is on the wrong track.

7.6 Adverbs and other adverbials

Our final topic in this chapter is adverbials, where we look at both the internal make-up of adverb phrases and their external distribution. We make a terminological distinction between adverbs (a syntactic category represented by words like foolishly) and adverbials (a functional term that in addition to adverbs covers adverbial uses of P2s and N2s; see later).8

We can note, first, that adverbs are something of an anomaly from a categorial point of view: our Boolean features N and V combine to permit only four categories (noun, verb, preposition, adjective), yet adverbs are clearly not minor categories like specifiers or complementizers. This is because they can be modified, and can head phrases of their own. How, then, can adverbs be fitted into our system? One obvious possibility is that they can be seen as a subclass of one of our already existing categories (just as auxiliaries are a subclass of verbs). The natural candidate is the adjective, as even traditional grammarians have observed that:

A correspondence often exists between constructions containing adjectives and constructions containing the corresponding adverbs. (Greenbaum & Quirk 1990: pp. 151–2)

Many adjectives have corresponding adverbs with a regular morphological and semantic connection: frequent(ly), stupid(ly), rare(ly), intelligent(ly). There is a generalized morphological rule: adding -ly to an adjective results in an adverb with the meaning 'in the manner of the adjective'.9 Of course, morphological relations exist between different categories in many respects, but it is not usual for so many members of one major category to be derived from another; there are many deverbal nouns, but there are also plenty of non-derived nouns. Moreover, many speakers use adjectives as adverbs without -ly-suffixation (71).

(71) The defence played marvellous.

This lack of even a morphological distinction between adjectives and adverbs is perfectly regular and standard in German (cf. schnell 'quick', 'quickly'). Adjectives and adverbs take the same range of modifiers (themselves perhaps adverbs) and specifiers:

8. On this terminological distinction, see Trask (1993: pp. 9–10).
9. Compare also the suffixation of -ment in French, though this is rather less generalized.
(72) a. He is very/rather tall.
b. He runs very/rather quickly.

(73) a. He is so tall.
b. He runs so quickly.

Of course, adjectives and adverbs have very different distributions, but it should be clear by now that such differences can easily be captured by use of features. A Boolean head feature ADV will distinguish the two classes as follows (adjectives in (74a)).

(74) a. {[+N], [+V], [BAR 0], [−ADV]}
b. {[+N], [+V], [BAR 0], [+ADV]}

We shall use abbreviations as follows: Adj2 'adjective phrase', Adv2 'adverb phrase', A2 'adjective or adverb phrase' (and correspondingly at other bar levels). We can now turn to the internal structure of Adv2. If we just use our existing rules for A2 (see 4.6), we shall have structures such as (75). The ID-rules for Adj2, of course, should not mention the feature ADV.
(75) [tree diagram not reproduced]

One clear difference between adjectives and adverbs is that the latter usually do not take subcategorized complements (see Jackendoff 1977: p. 78) (76).

(76) a. John is fearful of the dark.
b. John entered the room fearfully (*of the dark).

But a few adverbs do permit subcategorized complements (77).

(77) a. John reached this conclusion independently of Fred.
b. Jim plays similarly to Alan.

Among the lexical ID-rules needed are those in (78).

(78) a. A1[+ADV] → H[n]
b. A1[+ADV] → H[n], P2[PFORM OF]
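The categorial claim embodied in (74), that adjectives and adverbs share a major category and differ only in ADV, can be pictured as follows (a sketch; the helper names are ours):

# Sketch: Adj2 vs Adv2 as one major category [+N, +V] split by ADV.

def category(bar, adv):
    return {"N": "+", "V": "+", "BAR": bar, "ADV": adv}

Adj2 = category(2, "-")   # adjective phrase
Adv2 = category(2, "+")   # adverb phrase

def is_A2(cat):
    """A2 abbreviates 'adjective or adverb phrase': [+N, +V, BAR 2],
    with ADV left unspecified."""
    return cat.get("N") == "+" and cat.get("V") == "+" and cat.get("BAR") == 2

assert is_A2(Adj2) and is_A2(Adv2)   # both fall under A2
assert Adj2["ADV"] != Adv2["ADV"]    # but ADV keeps them apart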
We now examine the external distribution of AdvP; this is, however, an exceedingly complex and difficult area, and we can here do little more than scratch the surface. We can note first that a very small number of verbs subcategorize for AdvP:

(79) a. Jack worded the letter carefully.
b. * Jack worded the letter.

(80) a. Joe behaved badly to Kay.
b. * Joe behaved to Kay.

This is handled quite straightforwardly, e.g. the lexical ID-rule (81) is for verbs such as word.

(81) VP → H[202], N2, AdvP

Other than this, adverbs divide into a number of types. Words such as probably, allegedly, apparently and unfortunately are often described as sentential adverbs. Intuitively, they modify the whole clause rather than just the verb, and can be paraphrased by a higher clause with an adjective, as in (82).

(82) a. Mary probably missed the bus.
b. It is probable that Mary missed the bus.

We shall call these items ad-Ss; note their distribution (83).

(83) a. Probably the enemy will have destroyed the village.
b. The enemy probably will have destroyed the village.
c. The enemy will probably have destroyed the village.
d. ?? The enemy will have probably destroyed the village.
e. * The enemy will have destroyed probably the village.
f. The enemy will have destroyed the village, probably.
g. * The enemy will have destroyed the village probably.

This paradigm is based on McCawley (1988b: p. 632); he marks (83d) as dubious, but many speakers find it not too bad. Note the contrast between (83f, g): final position is possible for an ad-S only if there is a comma or intonation break. (83a, f) suggest that ad-Ss can occur as daughters of S, implying the rule (84).

(84) S → H, AdvP

Rule (84) is responsible for both orders if no ordering is imposed by an LP-statement. In the case of (83b), evidence from coordination (85) shows that the sequence from probably to village forms a constituent.

(85) The enemy both probably has destroyed the village and certainly has killed the hostages.

This sequence is presumably a VP, and (83c, d) also suggest that ad-Ss can occur inside VP, via (86) (but seemingly only in front of the head).
(86) VP → H, AdvP

So the structure assigned to (83c) might be as in (87).
(87) [tree diagram not reproduced]

This would all be fine were it not that there are other adverbs with rather different distributions, such as intentionally, obediently and expertly. Compare the paradigm in (88) with (83).

(88) a. * Intentionally the enemy will have destroyed the village.
b. * The enemy intentionally will have destroyed the village.
c. The enemy will intentionally have destroyed the village.
d. The enemy will have intentionally destroyed the village.
e. * The enemy will have destroyed intentionally the village.
f. The enemy will have destroyed the village intentionally.
g. * The enemy will have destroyed the village, intentionally.

Such words (which we shall call ad-Vs) seem to occur only as daughters of VP, not of S. So (88f) would be structured as in (89).
(89) [tree diagram not reproduced]

There is still a lot more to be said about adverb types. For instance, McCawley (1988b) distinguishes between intentionally and completely. The semantic contrast between the pair in (90) is also intriguing.

(90) a. Louisa rudely answered Patricia.
b. Louisa answered Patricia rudely.

McConnell-Ginet (1982) gives the following account:

[(90a)] can be construed as saying that Louisa's rudeness consisted in her having answered Patricia (who perhaps is of such a high position that etiquette dictates she should not be addressed at all), whereas [(90b)] locates the flaw in her manner of answering. (p. 159)

The whole issue of adverbial semantics is very tricky: Martin (1987: p. 118) treats adverbs as functions from predicates to predicates, but it is clear that this simple approach does not work for all adverb types. Rather than tackle such questions, we shall instead address the fact that phrases that are not of the category AdvP can often function as adverbials, such as P2s and N2s:

(91) a. Louisa answered Patricia in a rude manner.
b. They destroyed the village with the best of intentions.
c. He left in a hurry.

(92) a. I saw John that day.
b. I've seen him lots of places.
c. John arrived that very minute.
Some of the examples in (92) are from Larson (1985), who refers to bare-NP adverbs (though here we shall call them adverbial N2s). He points out that there is a semantic constraint that only nouns that are inherently temporal or locative (or whatever) can occur here, but that there are lexical idiosyncrasies too. Compare (92b, c) with (93).

(93) a. * John arrived that occasion.
b. * You lived some location near here.

Larson suggests that words that can occur as heads of bare-NP adverbs must be marked in the lexicon in some way to show that this is possible. However, McCawley (1988a) points out that the situation here is far more complex than Larson makes out, since it is not just a matter of looking at the head noun (cf. the contrast between the two sentences in (94)).

(94) a. We went there last Christmas.
b. * We went there Christmas.

We should add that there are also differences between British and American English here: whereas British speakers would prefer (95a), Americans would be more likely to say (95b).

(95) a. I'll do it on Saturday.
b. I'll do it Saturday.

Let us think about how we could describe sentences like (91)–(92). The simplest answer is of course to say as little about them as possible. Suppose we revise our ID-rule for introducing AdvP to (96),

(96) VP → H, [BAR 2, +ADV]

and let [BAR 2, +ADV] be instantiated as Adv2, N2 or P2. To prevent it from being instantiated as a projection of V, we can have the FCR in (97).

(97) FCR: [+V, −N] ⊃ ~[ADV]

To distinguish (92) from (93), words like day can be lexically marked as [+ADV] (equivalent to Larson's [+F]-feature), while words like occasion are [−ADV]. Then (93a) would give rise to a clash of features somewhere in the tree. Unfortunately, we do have to introduce a feature ADVTYPE (with values S and V) to distinguish ad-S from ad-V (98).

(98) a. S → H, [BAR 2, +ADV, ADVTYPE S]
b. VP → H, [BAR 2, +ADV]

Since ad-Ss can occur inside VP, (98b) does not specify ADVTYPE. Ordering of adverbials, however, depends on category rather than on having the feature [+ADV] (99).

(99) a. * John on Tuesday left.
b. * John with great haste answered.
This fits in with the claim of McCawley (1988a: p. 585), that adverbial N2s have the same distribution as adverbial P2s rather than of adverbs. We suggest the LP-statements in (100).

(100) a. [ADVTYPE S] < VP
b. VP < [−V, BAR 2, +ADV]

(100b) claims that adverbial N2s and P2s must follow VP; it has to mention [+ADV] in order to avoid clashing with statements about the ordering of subcategorized complements within VP (e.g. told him to go, which has N2 before VP). A last point is that the FSD in (101) is imposed.

(101) FSD: [+ADV] ⊃ [BAR 0]

Thus [+ADV] will not be instantiated on a [BAR 2]-category licensed by an ID-rule such as (102) (which introduces a verb like seem).

(102) VP → H[BAR 0], A2

We make no pretence that this section offers an adequate account of adverbs; we just hope that it illustrates their complexity and makes a start at describing it.

7.7 Further reading

On prepositional phrases, see Radford (1988: pp. 246–53) and Gazdar et al. (1985: pp. 131–5). On the structure of complement clauses and infinitives, see Baker (1989: pp. 73–8) and Greenbaum & Quirk (1990: pp. 346ff.). Complementizers are briefly discussed in Gazdar et al. (1985: pp. 112–14). On adverbs, see Radford (1988: pp. 138–41), McCawley (1988b: Ch. 19), Baker (1989: Ch. 11) and Greenbaum & Quirk (1990: pp. 147ff.).
8 The Foot Feature Principle

In this chapter we examine another general principle for passing features around in a tree, known as the Foot Feature Principle (FFP). This resembles the HFC in requiring identity of features between a node and its mother in a local tree in certain cases. But it operates slightly differently from the HFC, and also the set of features it applies to is not the same as the set of head features. We shall illustrate the workings of the FFP by looking at English relative clauses and wh-questions. But in this chapter we shall be concerned only with a subset of these, viz. subject relatives and subject questions, where the relativized or questioned element plays the role of subject:

(1) a. the man who saw you
b. I wonder who did it.
c. Who did it?

Examples such as those in (2) will therefore be left to a later chapter, because they involve extra problems over and beyond the effects of the FFP.

(2) a. the man who you saw
b. I wonder what he did.
c. Who were you talking to?
8.1 Relative and interrogative pro-forms

We can begin by discussing briefly the description of the various interrogative and relative pro-forms. Some, as seen in (1), are pro-N2s: they can never be modified by adjectives (unlike ordinary personal pronouns), so we can simply regard them as belonging to the category N2. The Boolean features R and Q will be used to distinguish relative and interrogative forms; a relative has the feature [+R] and an interrogative the feature [+Q], thus giving lexical entries such as those in (3).

(3) a. who: {[+N], [−V], [BAR 2], [+Q]}
b. who: {[+N], [−V], [BAR 2], [+R]}
c. what: {[+N], [−V], [BAR 2], [+Q]}
d. which: {[+N], [−V], [BAR 2], [+Q]}
e. which: {[+N], [−V], [BAR 2], [+R]}

In fact, Q and R are simply abbreviations for more complex features, but for the moment it will be easier if we just use these shorter forms, and see how their distribution can be accounted for. Which can also function as a pro-form for specifiers:

(4) a. Which team won?
b. I left because I was bored, which reason being enough for me.

We might describe this as in (5).

(5) a. which: {SpecN, [+Q]}
b. which: {SpecN, [+R]}

As a first approximation, the structures assigned to some simple examples would be as in (6).
(6) a. [tree diagram not reproduced]
b. [tree diagram not reproduced]
c. [tree diagram not reproduced]
d. [tree diagram not reproduced]

Note that for the interrogative examples (6b–d) we have assumed an ordinary N2–VP structure. In the case of (6c–d), there is nothing much more to say about them, as [+Q] has been instantiated on an N2 or SpecN; the grammar simply allows these items to be interrogative or to be non-interrogative, and there is no need to require them to be [+Q]. If [+Q] is not instantiated here, the result will be a declarative such as John did it or My team won. In the case of the embedded question in (6b), things are slightly different, since the possibility of an embedded question depends on the matrix verb (7).
(7) a. I believe John/*who did it.
b. I asked who/*John did it.

Clearly we need to say something about the distribution of [+Q] here, and we shall do so in 8.2. For the relative example (6a), we take the relative clause to be an S, and again to have N2–VP structure. We do need an extra ID-rule here:

(8) N1 → H, S

Of course, not just any S will do here; it must be one with a relative pronoun:

(9) * the man the boy saw you

So again we must say something about the distribution of [+R]: this is a problem we shall tackle in 8.3.

8.2 The distribution of the interrogative feature

We can take first the distribution of the interrogative feature. Verbs such as ask and inquire that take an embedded interrogative can be subcategorized for an S[+Q] and introduced by the ID-rule (10).

(10) VP → H[43], S[+Q]

But we must now ensure that we end up with a tree like (11a) and not (11b).
(11) a. [tree diagram not reproduced]
b. [tree diagram not reproduced]

In other words, an S[+Q] must have N2[+Q] (or, at least, some [+Q] item) among its daughters. It would of course be possible to ensure this by means of an explicit ID-rule such as (12).

(12) S[+Q] → N2[+Q], H[−SUBJ]

But, as you may have gathered by now, writing a rule like (12) is just the kind of thing that GPSG seeks to avoid, by making use of general principles of one kind
and another. So let us assume that (12) is on the wrong track, and try to find some general principle at work here. We can immediately note that the HFC is of no relevance in this case, since N2 is not the head of S. We therefore need to invent a new class of features that will be subject to different percolation principles from those that apply to head features. If we are not dealing with head features, we can go to the other anatomical extreme and speak of foot features: Q and R are foot features, but none of the other features met so far are. Recall that we want the ID-rule (13) to be responsible only for local trees (14a, b), not for (14c, d).

(13) S → N2, H[−SUBJ]

(14) a. S
        N2   VP
b. S[+Q]
        N2[+Q]   VP
c. S
        N2[+Q]   VP
d. S[+Q]
        N2   VP

So [+Q] must be instantiated either on both S and N2, or on neither. This leads us to:

The Foot Feature Principle (FFP)
Any foot feature instantiated on a daughter in a local tree must also be instantiated on the mother in that tree, and vice versa.

In other words, instantiated foot features on mother and daughters in a local tree must be identical. There are two points that we should draw attention to concerning the FFP. First, it applies to instantiated foot features only, and has no consequences for foot features inherited from a rule. Secondly, it applies only to local trees, and its application is sometimes masked in an entire phrase-marker, as will be seen below. It should be clear that the FFP does indeed allow (14a, b) to be generated, but not (14c, d). Returning now to tree (11a), it might be argued that, because of rule (10), which explicitly introduces [+Q] on the sister of V0, [+Q] has not in fact been instantiated on this S node. This is where the limiting of the FFP to local trees applies: (11a) represents the merging of lots of local trees, two of which are shown in (15).
(15) a. VP
        V0   S[+Q]
b. S[+Q]
        N2[+Q]   VP

[+Q] is indeed inherited, not instantiated, in (15a), but it is instantiated in (15b). So (11a) is permitted, because the FFP applies to local trees. It must, however, be admitted that what goes on in these cases is not exactly crystal-clear just from examining a tree like (11a).

So the FFP ensures that if [+Q] is instantiated on the S, it must also be instantiated on a daughter. What ensures that it is instantiated on the N2, and not on the VP? This is accomplished not by the FFP, but by an FCR:

(16) FCR: VP ⊃ ~[Q]

A VP cannot have a value for Q. This seems to be correct for English, which has no interrogative VPs (17).

(17) * Reading which book are you?

To finish this section, we can consider example (18a), with tree (18b).

(18) a. I asked which team won.
b. [tree diagram not reproduced]

The FFP ensures not just that [+Q] is instantiated on both S and N2 (as already seen), but also that it is instantiated on both N2 and SpecN. The idea, then, is that somewhere inside the S[+Q] must come a [+Q] lexical item, and the FFP is responsible for the [+Q] being percolated in the appropriate places.
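The FFP as just applied lends itself to a direct procedural rendering. The following sketch checks a single local tree, taking as given which features on each node were instantiated rather than inherited; the encoding is our own:

# Sketch: the Foot Feature Principle over one local tree. FOOT lists
# the foot features; 'instantiated' maps each node to the set of
# features instantiated (not inherited) on it.

FOOT = {"Q", "R"}

def satisfies_ffp(mother, daughters, instantiated):
    """Instantiated foot features must match between the mother and
    the daughters: any foot feature instantiated on a daughter appears,
    with the same value, instantiated on the mother, and vice versa."""
    m = {f: mother[f] for f in instantiated[id(mother)] if f in FOOT}
    d = {}
    for node in daughters:
        for f in instantiated[id(node)]:
            if f in FOOT:
                d[f] = node[f]
    return m == d

s = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "+", "Q": "+"}
n2 = {"V": "-", "N": "+", "BAR": 2, "Q": "+"}
vp = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "-"}

inst = {id(s): {"Q"}, id(n2): {"Q"}, id(vp): set()}
assert satisfies_ffp(s, [n2, vp], inst)              # (14b) is licensed

s_plain = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "+"}
inst_bad = {id(s_plain): set(), id(n2): {"Q"}, id(vp): set()}
assert not satisfies_ffp(s_plain, [n2, vp], inst_bad)  # (14c) is excluded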
8.3 Relative clauses

If the feature [+R] characterizes relative clauses, these will need to be introduced by the ID-rule (19).

(19) N1 → H, S[+R]

In a local tree specified by this rule, [+R] on the S is an inherited, not an instantiated feature, hence the FFP (which applies to instantiated features only) does not require it to occur on the mother N1 node. In fact, it cannot occur there, since then it would have been instantiated on the mother but not on the daughter. The kind of structure envisaged would be as in (20).
(20) [tree diagram not reproduced]

The FFP forces [+R] onto one of the daughters of the S, and this must be the N2 as it cannot be the VP, on account of the FCR (21).

(21) FCR: VP ⊃ ~[R]

This and the FCR (16) excluding VP[+Q] are very similar, and they really form a single restriction: that VPs cannot be interrogative or relative. As with interrogatives, the relative pronoun may be embedded some way inside the relative clause:

(22) a. the book the cover of which disgusted me
b. a man the father of whom is very rich

Note that we are here still concerned only with subject relatives (i.e. the phrase containing the relative pronoun is subject of the relative clause). (22a) would have the structure shown in (23) below. It is the FFP that is responsible for the chain of [+R] nodes here.
(23) [tree diagram not reproduced]

8.4 Genitives

In this section, we consider the syntax of possessive (genitive) NPs in English. We shall make use of a feature POSS, and see whether this can be treated as a foot feature. A further property of NPs not considered fully in 7.4 is that they can contain possessive phrases:

(24) a. John's book
b. this dog's tail
c. a book of John's
d. the politician's preparation of the speech
e. * a John's book
f. * John's the book

In 7.4, we posited two separate rules, where the Boolean feature POSS indicates a possessive N2 (25).

(25) a. N2 → SpecN, H[BAR 1]
b. N2 → N2[+POSS], H[BAR 1]

Before considering the internal structure of N2[+POSS], we should examine the post-nominal prepositional possessive, as seen in (24c), and in (26).
(26) a. a friend of Martha's
b. this car of Phil's
c. a portrait of John's

From now on we shall use the label "genitive" rather than "possessive", as "possession" describes only a part of the meaning of these phrases; see in this connection Lyons (1986). First, there is what is called the free R type, which permits almost any kind of relation between genitive and head noun (e.g. John's team can be the team John supports, owns, plays for, etc.); both kinds of genitive are possible here. Secondly, there are what might be termed relational types, where (27a, b) and (26a) are all synonymous. Thirdly, with nouns like picture and portrait, there is a semantic contrast between (26c) and (27c), with the latter only having the interpretation where the N2 is the subject of the portrait, not its owner or painter.

(27) a. Martha's friend
b. a friend of Martha
c. a portrait of John

The rules in (28) seem called for.

(28) a. N1 → H[n], P2[OF]
b. N1 → H[n], P2[OF, +POSS]

Rule (28a) is for nouns like friend and portrait; (28b) is for friend only. In each case we claim that the of-phrase is a complement. Where the postposed genitive bears a free R relation, there is evidence that it is a sister of N1 (29).

(29) a. this [car] of Phil's, and that one of Ben's
b. I prefer this [car of Phil's] to that one
c. I prefer this [blue car] of Phil's to that one of Bob's

All the bracketed sequences are possible antecedents of one, so all must be constituents. It is just about possible to have more than one postposed genitive (as in (30)), which would suggest the recursive rule (31).

(30) this book of John's of yours

(31) N1 → H, P2[+POSS]

The structures assigned to some of our examples would be as in (32).
(32) a. [tree diagram not reproduced]
b. [tree diagram not reproduced]
c. [tree diagram not reproduced]

Now we can turn to the question of how [+POSS] is realized. If it appears on the P2 in (32b, c), it clearly has to appear also on the N2 inside this phrase. So we might need the rules in (33) (of which (33b) is taken from Gazdar et al. 1985: p. 248).

(33) a. P2[+POSS] → H[BAR 1, +POSS]
b. P1[+POSS] → H[41], N2[+POSS]

A preposition with the feature [SUBCAT 41] is of. However, rather than the specific rules in (33), it would obviously be nice to handle the occurrence of POSS via general principles. In (33b), POSS matches on mother and a non-head daughter, so we cannot treat it via the HFC, but we can perhaps regard it as a foot feature, subject to the FFP (this is suggested by Gazdar et al. 1985: p. 167, and adopted, with reservations, by Zwicky 1987b). So the POSS specifications in (33) are simply not needed.

What, now, about the [+POSS] on N2? Note, first, that it is realized (leaving pronouns aside for now) as a clitic 's attached to the last word of the N2, which need not be a noun (34).

(34) a. the man I was talking to's hat
b. the woman over there's dress
c. the man who is talking's proposal

Also, if the N2 ends in an inflectional /s/, the POSS clitic is suppressed (35).
(35) a. the two kids' ideas
b. * the two kids's ideas
c. a friend of my children's ideas
d. * a friend of my children's's ideas
e. a friend of my children's

We adopt here the analysis of Zwicky (1987b) (slightly adapted). He regards the POSS clitic as an inflectional suffix, which is attached by a perfectly regular rule to each word that can occur at the end of an NP. The fact that POSS can occur only on the last word of an N2 is captured by the LP-statement (36), i.e. POSS must follow all its sisters:

(36) X < [+POSS]

This also prevents multiple instances of POSS, since, if one POSS precedes another, the first will not follow all its sisters. The kind of structure to be assigned would be as in (37). It is the FFP that is responsible for the percolation of POSS here.
(37) [tree diagram not reproduced]

It may have been noticed that (37) contains an example of a [+POSS] item preceding its sister N1. We can handle this simply, if awkwardly, as follows. Rule (25b) can be revised as (38) and we add an FSD (39).

(38) N2 → N2[+POSS, −LAST], H[BAR 1]

(39) FSD: [+POSS] ⊃ [+LAST]

Unless otherwise specified, anything [+POSS] is also [+LAST]. (36) is altered to (40).

(40) X < [+LAST]

LAST is not a head feature, and so is not subject to percolation. English also has two paradigms of possessive pronoun (41).
(41) a. my book
b. her book
c. a book of mine
d. a book of hers

Note that generally the possessive and ordinary pronouns are in complementary distribution (42).

(42) a. * a book of me
b. * a friend of me

Both types of possessive pronoun are [+POSS] (see the previous section), and we shall assume here that the postposed ones are also [CASE ACC]. Lexical entries for my and mine are shown in (43).

(43) a. {[+N], [−V], [BAR 0], [SUBCAT 30], [+PRO], [CASE NULL], [+POSS], [−PLU], [PER 1]}
b. {[+N], [−V], [BAR 0], [SUBCAT 30], [+PRO], [CASE ACC], [+POSS], [−PLU], [PER 1]}

Adjectives and other modifiers are generally excluded here, so we propose the rule (44).

(44) N2[+POSS, +PRO] → H[30]

The kind of structures to be assigned, then, would be as in (45).
(45) a. [tree diagram not reproduced]
b. [tree diagram not reproduced]

8.5 Further reading

The FFP is discussed in Sells (1985: pp. 107–12) and Gazdar et al. (1985: pp. 80–83). There is some discussion of relatives and interrogatives in Gazdar et al. (1985: pp. 153–8) (but note that some of this deals with types we have not encountered yet; we have only covered subject relatives and questions so far). On genitives, see McCawley (1988b: pp. 385–93), and Greenbaum & Quirk (1990: pp. 102–7).
9 The Control Agreement Principle

In this chapter we are concerned with an area of language totally ignored until now, viz. agreement (e.g. subject-verb agreement in English). The chapter will be structured as follows: we shall first illustrate agreement and discuss how it might be represented in trees, before proposing a general feature principle to capture the relevant phenomena. We shall then extend the analysis to cover a number of phenomena that are not intuitively obvious cases of agreement.

9.1 Subject-verb agreement

It is an elementary observation that subject and finite verb in English agree in number and person:

(1) a. I am / *is / *are desperate.
b. John is / *am / *are wealthy.
c. John writes novels.
d. * John write novels.
e. John and Mary live / *lives in Didsbury.
What can we say about the form of the verb in (1c)? It would not be correct to say that it is the third person singular form of the verb (even though this is a traditional way of referring to it). For one thing, this phraseology makes it impossible to handle a language where verbs agree with both subjects and objects. Rather, writes is the verb form that shows that it has a third person singular subject. So we are arguing against describing writes by means of the set of features in (2) (ignoring SUBCAT).

(2) {[+V], [−N], [BAR 0], [VFORM FIN], [PER 3], [−PLU]}

Instead, we want somehow to include in the description the claim that it agrees with a third singular N2. This can indeed be done, and we shall use the attribute AGR for this purpose. But the kinds of values taken by AGR are different from those used so far. All the features employed to this point have been atom-valued: the values are non-decomposable objects such as ± or an integer or FIN or ACC. But in the case of AGR (and some other features to be met later on) we need something more complex. If a verb agrees with a third singular N2, this can be expressed by saying that the value of AGR on that verb is "third singular N2". But this is not an undecomposable atom; rather, it is itself a category that (like all categories in GPSG) consists of a feature bundle. In other words, AGR is a category-valued feature: its value is not an atom but a category. In the case of writes, the specification of AGR will be (3a), which can be abbreviated as (3b).

(3) a. [AGR {[+N], [−V], [BAR 2], [PER 3], [−PLU]}]
b. [AGR {N2 [PER 3, −PLU]}]

The feature bundle for writes (incorporating (3b)) will now be (4).

(4) {[+V], [−N], [BAR 0], [VFORM FIN], [AGR {N2 [PER 3, −PLU]}]}

And am might be described as in (5).

(5) {[+V], [−N], [BAR 0], [VFORM FIN], [AGR {N2 [PER 1, −PLU]}]}

If AGR is a head feature, it will be percolated in its entirety from the V0 to the dominating VP node. The topmost local tree in the structure for (1c) will then be (6).

(6) S
    N2 [PER 3, −PLU]   VP [VFORM FIN, AGR {N2 [PER 3, −PLU]}]

It makes equal sense, then, to claim that the VP agrees with the subject as to say that the verb agrees with the subject. This way of speaking is quite deliberate: to say "X agrees with Y" is better than to say "X and Y agree", since the former phraseology captures the directionality of agreement. More formally, we can speak of the controller and the target for agreement, and state that the target agrees with the controller. In English, the subject is the controller and the VP is the target.
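Category-valued features are naturally modelled as nested feature bundles. A minimal sketch of the entry (4), on our own dictionary encoding:

# Sketch: the entry (4) for 'writes', with AGR taking a whole
# category (itself a feature bundle) as its value.

writes = {
    "V": "+", "N": "-", "BAR": 0, "VFORM": "FIN",
    "AGR": {"N": "+", "V": "-", "BAR": 2, "PER": 3, "PLU": "-"},
}

# The value of AGR is not an atom but a category, so it can be
# inspected like any other category:
assert writes["AGR"]["PER"] == 3 and writes["AGR"]["PLU"] == "-"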
(6) suggests how agreement facts can be handled in GPSG: the value of the AGR feature on the target (the VP) must be identical to the features on the controller (the subject N2). Naturally, we want this to follow from some general principle, rather than build it into an ID-rule using variables, such as (7).

(7) S → α, VP[AGR α]

In the next section we shall see that such a rule is quite unnecessary.

9.2 The Control Agreement Principle

GPSG goes a little beyond the notions developed in the previous section by supplying a semantic basis to what counts as controller and what as target. As we saw in 2.6, model-theoretic semantics is built around the concepts of functions and arguments; and GPSG makes the important claim that functors agree with their arguments (i.e. the argument is the controller, and the functor is the target). So in (8), the VP (functor, target) agrees with the subject (argument, controller).

(8) Elizabeth reigns.

There is therefore no need to specify controller and target in construction-specific terms, since this follows automatically from the semantic properties of items, which need to be stated independently anyway. In formulating a general principle about agreement, then, we can simply rely on it being known which is the controller and which is the target.

There is one further point to be cleared up before we come to the general agreement principle. There is another feature (to be met in a later chapter) to which this principle applies, so we should not formulate it simply in terms of AGR alone. Instead, a class of control features is isolated, with AGR as one member of this class. In the previous section, we made the following informal statement: the value of the AGR feature on the target must be identical to the features on the controller. We can now state this more formally:

The Control Agreement Principle (CAP) (preliminary version)
The control features on a target category must be identical to the features on the controller.

This means that the local tree (6) above is well-formed, while (9) is not.

(9) S
    N2 [PER 3, −PLU]   VP [VFORM FIN, AGR {N2 [PER 3, +PLU]}]
So the ill-formedness of (1d) above is accounted for. The CAP, like the HFC and FFP, constrains feature instantiation, i.e. it limits the relation between ID-rules and trees. AGR can be instantiated on VP nodes, but it is the CAP that determines which projections are actually legal ones, i.e. what values can be instantiated on AGR.

The CAP is a putatively universal principle, and makes no claims about the domain of agreement in particular languages. For instance, it would be conceivable for a language to have nothing worthy of the name "agreement" at all, and many languages (e.g. Danish) have no subject-verb agreement. Languages differ greatly in terms of how widely they illustrate agreement phenomena. French adjectives agree with their head nouns in number and gender, while Welsh prepositions may agree with their objects in number and person. For English, however, Gazdar et al. (1985: p. 88) propose (10).

(10) FCR: [AGR] ⊃ [−N, +V]

So they claim that if AGR occurs on a category, that category must be a projection of V. The implication is that specifier-noun co-occurrence restrictions (this man vs. *this men) are not instances of agreement.

We can close this section by considering how nominative case is assigned to subjects of finite verbs. As we saw in 6.3, there is an FSD stating that the default is for a category not to be nominative, in order to allow for accusative to be instantiated in a wide range of cases. Gazdar et al. (1985: p. 94) propose (11).

(11) FCR: [FIN, AGR N2] ⊃ [AGR N2[NOM]]

This can be glossed as: if a category is finite and has an AGR feature with the value N2, then that N2 must be [CASE NOM]. The CAP then ensures that the subject NP is also [CASE NOM]. This allows the local tree (12a) but excludes (12b).

(12) a. S
        N2 [PER 3, −PLU, CASE NOM]   VP [VFORM FIN, AGR {N2 [PER 3, −PLU, CASE NOM]}]
b. S
        N2 [PER 3, −PLU, CASE ACC]   VP [VFORM FIN, AGR {N2 [PER 3, −PLU, CASE ACC]}]

So we might have a tree such as (13) below. Note that in (12a) and (13), [CASE NOM] has been instantiated on an N2 in defiance of an FSD. This is a further illustration of the point (made in 7.1) that features are privileged and so exempt from defaults if they co-vary in a local tree because of the operation of some feature principle; the CAP is of course the principle applying here.
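The effect of the CAP on a local tree like (12) can be sketched as a simple identity check between the target's AGR value and the controller's category. This is an illustration on our own encoding, not the formal statement of the principle:

# Sketch: the Control Agreement Principle for subject-VP agreement.
# The VP (functor) is the target; the subject N2 (argument) is the
# controller; AGR on the target must match the controller.

def satisfies_cap(controller, target):
    """Every feature in the target's AGR value must carry the same
    value on the controller."""
    agr = target.get("AGR", {})
    return all(controller.get(f) == v for f, v in agr.items())

subject = {"N": "+", "V": "-", "BAR": 2, "PER": 3, "PLU": "-", "CASE": "NOM"}
vp = {"V": "+", "N": "-", "BAR": 2, "SUBJ": "-", "VFORM": "FIN",
      "AGR": {"N": "+", "V": "-", "BAR": 2, "PER": 3, "PLU": "-",
              "CASE": "NOM"}}

assert satisfies_cap(subject, vp)   # (12a) is licensed
# An accusative subject fails against the NOM value forced by FCR (11):
assert not satisfies_cap(dict(subject, CASE="ACC"), vp)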
(13) [tree diagram not reproduced]

9.3 Expletive subjects

We are now going to look at further examples of constructions where the CAP plays an important role in ensuring that only grammatical sentences are generated. It is not intuitively obvious that agreement is involved in these cases, but we shall see that a CAP-based account is both feasible and motivated. What we shall be doing, essentially, is to exploit a possibility used in our discussion of nominative case at the end of the previous section. If we state that the AGR feature of a VP has a certain value, then the CAP will force the subject N2 to have that value too in a well-formed tree. This was done just now to ensure that finite VPs have nominative subjects, and it can also be used to ensure that special kinds of VP have special kinds of subject.

English has two semantically empty pronouns it and there, which are to be distinguished from their non-empty homonyms. These can occur as subjects of certain predicates, and in each case an ordinary N2 is excluded from this subject position, as is the inappropriate empty pronoun:1

(14) a. It/*John/*There seems that the summer has finished.
     b. It/*The queen/*There appears that nothing ever changes.
     c. There/*He/*It is a crowd in the square.
     d. There/*The table/*It is a man waiting outside.

Further, where ordinary N2s occur as subjects, it and there are impossible:

(15) a. John/*It believes that he has problems.
     b. He/*There has conquered all difficulties.

A grammar must therefore ensure the correct distribution of empty it and there, which are often referred to as expletive elements.

1. Some of these starred examples may be acceptable under odd and irrelevant readings.
The first point we can tackle is a feature system for projections of nouns. We have used VFORM and PFORM for different kinds of verbal and prepositional projection, so it is natural to posit an attribute NFORM:

(16) NFORM: {NORM, IT, THERE}

Expletive it is [NFORM IT], while there is [NFORM THERE]. Other nouns are [NFORM NORM]. The distribution of these values is partly accounted for by (17).

(17) a. FCR: [NFORM] ⊃ [−V, +N]
     b. FSD: [NFORM] ⊃ [NFORM NORM]

(17a) states that only nominal projections have a value for NFORM; (17b) states that the default is [NFORM NORM]—other values can occur only if inherited from a rule or required by some feature principle; they cannot be instantiated otherwise.

Let us concentrate first on the occurrence of it in structures like it seems that… Expletive it occurs in other structures where the verb is followed by a clause, but these differ in that it is also possible for the clause to be a subject, unlike with seem:

(18) a. It surprises me that you should say so.
     b. That you should say so surprises me.

(19) a. It seems that the problem has been solved.
     b. *That the problem has been solved seems.

Because of the dual possibilities in (18), we leave verbs like surprise to a later chapter (see 10.3). We also leave till the next section the other construction involving seem:

(20) a. The summer seems to have finished.
     b. The problem seems to have been solved.

Instead we concentrate for the moment on examples like (19a). If we think about the lexical ID-rule for seem, (21) would be a reasonable initial approximation.

(21) VP → H[21], S[FIN]

To this must be added some way of ensuring that the subject will have the feature [NFORM IT]. This could be achieved if we required the VP dominating seem to have [NFORM IT] as a value for AGR: the CAP would ensure this matching. We shall follow Gazdar et al. (1985: p. 118) in using [+IT] as a convenient abbreviation for the specification [AGR N2[NFORM IT]]. So let us change (21) by stipulating that the mother is [+IT]:

(22) VP[+IT] → H[21], S[FIN]

So we shall have a local tree such as (23), and the tree (24).
(23) S
         N2[NFORM IT]    VP[AGR {N2[NFORM IT]}]
(24) [tree diagram not reproduced]

A string such as (25) violates the CAP.

(25) *John seems that the summer has finished.

Turning now to there-constructions, there is a difference of opinion concerning the constituent structure of the relevant examples, such as those in (26).

(26) a. There is a crowd in the square.
     b. There is a man waiting outside.
     c. There are three people ill.
     d. There are unicorns.

The debate is over whether the post-be sequence forms a constituent or is (in the case of (26a), say) N2+P2. As (26d) shows, there + be can be followed by a single N2, so whichever rule(s) allow for (26d) will also generate a structure for (26a–c) with the sequence after be forming a constituent. Since this analysis is available anyway, it is pointless to propose an extra analysis of (26a–c). We shall therefore say that be in (26) occurs with a sister N2 and write the ID-rule (27) as a first approximation (see Gazdar et al. 1985: p. 119).

(27) VP → H[22], N2

To ensure that the subject has [NFORM THERE], we stipulate that this is the value of AGR on the VP ([+THERE] abbreviates [AGR N2[NFORM THERE]]):

(28) VP[+THERE] → H[22], N2

This is parallel to (22), but it is not sufficient for there, since the verb agrees with the post-be N2:

(29) a. There are / *is some books on the table.
     b. There is / *are a book on the table.
So our final version (see (30)) of the relevant ID-rule states that the N2 and the AGR on VP have the same value for PLU.

(30) VP[AGR {N2[THERE, PLU α]}] → H[22], N2[PLU α]
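As a quick illustration of how (30) and the CAP interact, here is a toy check in the dictionary encoding used earlier (again our own illustrative code, not part of the formalism): the VP's AGR value must be a there-type N2 sharing the post-be N2's PLU value, and the CAP then identifies that value with the subject.

    def there_sentence_ok(subject, vp_agr, post_be_n2):
        # Rule (30): AGR on the VP is N2[NFORM THERE] with the PLU value
        # of the post-be N2; the CAP requires the subject to match AGR.
        return (vp_agr.get("NFORM") == "THERE"
                and vp_agr.get("PLU") == post_be_n2.get("PLU")
                and subject == vp_agr)

    there_pl = {"N": "+", "V": "-", "BAR": 2, "NFORM": "THERE", "PLU": "+"}
    books    = {"N": "+", "V": "-", "BAR": 2, "PLU": "+"}
    book     = {"N": "+", "V": "-", "BAR": 2, "PLU": "-"}

    print(there_sentence_ok(there_pl, there_pl, books))  # True  -- (29a) with are
    print(there_sentence_ok(there_pl, there_pl, book))   # False -- *There are a book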
An example tree is shown in (31).

(31) [tree diagram not reproduced]

It is often argued that there are even stricter restrictions on the there-construction than specified here, since the N2 after be needs to be indefinite:

(32) a. ?? There is the book on the table.
     b. ?? There is the man outside waiting to see you.

Attempts have been made to build this restriction into a syntactic account, but there do exist examples with definite N2s that are fine, for example (33).

(33) There is the outline of a human face hidden in this puzzle.

This example is taken from Holmback (1984), who argues that the oddness of (32) should be accounted for on pragmatic not syntactic grounds. There are also semantically based accounts of these restrictions (see Bach 1989: pp. 58–62). We shall therefore not build any requirement about definiteness into (30).

It should be added that we have made no attempt here to cover examples such as those in (34).

(34) a. In the square there stands a statue.
     b. There remain only a few minor problems.

We have dealt only with cases where there occurs with a form of be.

We have stressed that it and there are semantically empty; they do not refer to any entities in the real world. The corollary of this view would be that they simply have no place in semantic representations. So for (19a) we would want to say that seem is a one-place predicate, its argument being a proposition.
9.4 Infinitival complements again

In this section we reconsider the analysis of infinitival complements, and see how the CAP functions to determine their behaviour, though in fact we shall discuss a number of topics in addition to agreement here. We shall be concerned with complements to four classes of verb exemplified in (35).

(35) a. Jim tried to stop.
     b. Jim seems to be rich.
     c. Jim persuaded Bob to help him.
     d. Jim believed Bob to be the best candidate.

In 7.3 we examined part of the grammar of some of these structures, and assigned the tree (36) to (35a).
(36) [tree diagram not reproduced]

However, there is a lot more to say about the paradigm in (35). Let us first consider (35a–b). These may seem rather similar, but note the following differences:

(37) a. *There tried to be a man outside.
     b. *It tried to seem that he had left.
     c. There seems to be a man outside.
     d. It seems to surprise them that you should say that.

In other words, expletive elements can never appear as subjects of try, but they can appear as subjects of seem (provided that the infinitival clause has an appropriate structure). We also saw the use of it as subject of seem in the previous section. These contrasts have been made much of within transformational grammar, with (35a–b) being assigned very different deep structures. GPSG employs no such construct, and so must account for a paradigm like (37) in some other way.
Before explaining the solution, let us consider (35c–d). Despite the superficial similarity, there are again important differences of behaviour, as shown in (38).

(38) a. *Jim persuaded there to be a man outside.
     b. *Jim persuaded it to surprise them that you said that.
     c. Jim believed there to be a man outside.
     d. Jim believed it to have surprised them that you said that.

So expletives can appear after believe but not after persuade.

There is a long-running dispute concerning the structure of (35d), and in particular the status of the N2 which follows the matrix verb (here, Bob). There is no doubt that in (35d) Bob is understood as the subject of the embedded clause, but that is not necessarily its syntactic position. Most approaches, though not GB, regard this N2 as the object of the matrix verb, and this is accepted within GPSG, giving a structure such as (39).
(39) [tree diagram not reproduced]

The tree for (35c) will have identical geometry.

In deciding how to handle the facts sketched above, we can look a little more closely at the use of believe. As (38c–d) show, it can occur with an expletive object, but only when the complement VP contains an appropriate verb:

(40) a. *Jim believed there to have surprised them that you said that.
     b. *Jim believed Phil to be a flaw in the argument.
     c. *Jim believed it to be a crowd in the square.

In other words, there is the same relation between the N2 and VP after believe as between a subject N2 and VP. Moreover, the N2 after believe is understood as subject of the embedded verb, so it is not very astonishing to claim that it is in fact the controller of the VP, and so the CAP comes into play. (38c) would include the local tree (41), and the whole tree would look like (42).

(41) VP
         V0    N2[NFORM THERE]    VP[AGR {N2[NFORM THERE]}]
(42) [tree diagram not reproduced]

The lexical ID-rule for believe will be simply (43).

(43) VP → H[17], N2, VP[INF]

What, now, of persuade? From (38a, b), it would appear that it requires an object that is [NFORM NORM], so we can build this into the lexical ID-rule (44).

(44) VP → H[18], N2, VP[INF, AGR {N2[NFORM NORM]}]

Note how this has been encoded: the complement VP is [AGR {N2[NFORM NORM]}]; this will be percolated to the V0 node inside this VP by the HFC, while the CAP ensures that the controller N2 is [NFORM NORM], i.e. an ordinary N2. (38a, b) will now not be generated.

We can return now to the contrast between seem and try. We saw that try allows only an ordinary N2 as subject, while seem allows an expletive, as long as it is in harmony with the complement VP. Let us look at what this means in the case of seem, taking (45a) as our example. The basic structure is (45b).2

(45) a. There seems to be a riot.
     b. [tree diagram not reproduced]

2. The letters attached to nodes here are for identification, and do not form part of the structure.
The local tree with VP(c) as its mother is generated by the lexical ID-rule (30) above. The AGR value is percolated to V0(d) by the HFC. If in addition it can be percolated to VP(a) we shall have ensured that the subject of the sentence must be there. But the HFC cannot do this, since VP(c) is not head of VP(b), nor VP(b) head of VP(a). What we shall do is to slightly extend the workings of the CAP. Note that neither VP(b) nor VP(c) actually has a controller in (45b), whereas the CAP as formulated so far assumes that items do have a controller. So here is a revised version:

The Control Agreement Principle (CAP) (revised version)
The control features on a target category must be identical to the features on the controller. In the absence of a controller, the control features on a predicative category are identical to those on its mother.

VP(c) has no controller, so its AGR feature is the same as its mother VP(b) (i.e. the feature is percolated from (c) to (b)). It is also percolated from (b) to (a), giving us the tree (46).
(46) [tree diagram not reproduced]

Note that [−PLU] will be percolated along with [NFORM THERE] as part of AGR, thereby ensuring that seems agrees with a riot. This use of seem is catered for by the lexical ID-rule (47).

(47) VP → H[16], VP[INF]

The rule for try is (48).

(48) VP → H[15], VP[INF, +NORM]

This gives a tree such as (49).
(49) [tree diagram not reproduced]

Thus (50) is excluded, as [NFORM NORM] is passed around till it reaches the subject N2.

(50) *There tried to stop.

There is no denying that the GPSG account of the phenomena reviewed in this section is rather complex, though it does have the advantage that very little needs to be stipulated in rules and that much follows from general principles (HFC, and especially CAP). There also need to be semantic rules that state the identity of the controller with a given verb (e.g. Gazdar et al. 1985: pp. 216ff.).

More important, perhaps, than the precise GPSG formalization is the distinction between different verb classes. Any description of English has to deal with these, so we shall examine it again here, and introduce some new terminology, taken mainly from the transformational tradition. A basic division is that between equi and raising verbs. Equi verbs include try and persuade: an N2 (the subject of try, the object of persuade) fills a slot in the argument structure of both matrix and embedded verb:

(51) a. John tried to stop the car.
     b. John persuaded Bill to help him.

In (51b), for instance, Bill is both the person persuaded and the person who does the helping. Both matrix and embedded verb impose selectional restrictions on this N2:

(52) a. *John tried to elapse.
     b. *John persuaded the table to help him.

(52b) is odd because only animate entities can help or be persuaded. In GB (and elsewhere), try and persuade are accordingly described as control verbs.

Raising verbs (e.g. believe and seem) are very different. Here an N2 bears a grammatical relation to them (subject of seem, object of believe) but no semantic relation, being an argument of the embedded verb only:

(53) a. John seems to be rich.
     b. Jim believes Bob to be the best candidate.
(53b) is not about 'believing Bob' but about believing a proposition (viz. that Bob is the best candidate). Only the embedded verb imposes selectional restrictions on the N2 in question, hence the examples in (54) are all OK.

(54) a. An hour seemed to elapse.
     b. Jim believed the table to be ready.
     c. There seems to be a problem.
     d. Jim believed there to be a problem.

The origin of the "raising" metaphor is perhaps clear: in (54), an N2 can be seen to have been raised from an embedded clause where it belongs semantically to a matrix clause where it belongs syntactically. In GB terminology, positions such as subject of seem are non-thematic, as they can be filled by expletive elements or phrases linked semantically only to some other predicate.

We can now return to the issue of selectional restrictions. Suppose we say (for instance) that only animate entities can occur as subject of try:

(55) a. *The computer tried to ruin the program.
     b. *The tree tried to fall onto the car.

But if we adopt this animacy restriction in order to exclude (55), we shall automatically exclude sentences with expletives as subject of try, since an expletive is naturally not animate. So it may be that we do not need the complex GPSG apparatus to exclude (37a, b) on syntactic grounds. It is generally agreed that selectional restrictions are semantic in nature, not syntactic, so examples like (55) should be generated by the syntax but disallowed by the semantics. However, it is possible to embed (55) in contexts where they do not sound so bad, but this is not feasible with expletive subjects:

(56) a. I dreamt that the computer tried to ruin the program.
     b. ?? I dreamt that there tried to be a man outside.

The difference in status between (56a, b) suggests a different treatment for (56a) and (37a): the former is semantically anomalous, the latter really is syntactically deviant.

Lastly in this section, we can say a little about the semantics of equi and raising constructions (see also Horrocks 1987: pp. 211ff.). We repeat the sentences in (35), and then give an idea (57) of their semantic representations.

(35) a. Jim tried to stop.
     b. Jim seems to be rich.
     c. Jim persuaded Bob to help him.
     d. Jim believed Bob to be the best candidate.
(57) a. (try′ (stop′ (Jim′)) (Jim′))
     b. (seem′ (rich′ (Jim′)))
     c. (persuade′ ((help′ (him′)) (Bob′)) (Bob′)) (Jim′)
     d. (believe′ ((best-candidate′ (Bob′))) (Jim′))

For instance, (57a) shows Jim as an argument of both try and stop, while (57b) shows Jim as an argument of just rich: seem here (cf. earlier) takes a proposition as its single argument. In (57c), Bob is an argument of both persuade and help, while (57d) shows believe as a two-place predicate, one argument being a proposition. (57d) would also apply to (58).

(58) Jim believed Bob was the best candidate.

The different representations are obtained from the syntactic representations by means of meaning postulates, which (simplified) say:

(I) The first nominal sister of an equi verb is an argument also of the verb's clausal argument.
(II) The first nominal sister of a raising verb is an argument only of that verb's clausal argument.

So (I) maps (59a) into (57a), while (II) maps (59b) into (57b).

(59) a. (try′ (stop′)) (Jim′)
     b. (seem′ (rich′)) (Jim′)

(59a, b) are parallel to each other and to the syntactic structures of (35a, b): it is the meaning postulates that capture the semantic distinctions between equi and raising.
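The effect of postulates (I) and (II) can be mimicked as term rewriting. In the sketch below (our own illustrative encoding: semantic terms are nested Python tuples, so (59a) becomes the pair of a verb-plus-predicate tuple and a subject), equi copies the nominal argument into the clausal argument, while raising merely moves it there:

    # Meaning postulates (I) and (II) as rewriting operations on terms.
    # A term ((V, P), x) stands for the syntax-derived representation in
    # which verb V has clausal argument P and nominal sister x, cf. (59).

    def equi_postulate(term):
        """(I): ((V P) x) => (V (P x) x) -- x is also an argument of P."""
        (verb, pred), subj = term
        return (verb, (pred, subj), subj)

    def raising_postulate(term):
        """(II): ((V P) x) => (V (P x)) -- x is an argument of P only."""
        (verb, pred), subj = term
        return (verb, (pred, subj))

    print(equi_postulate((("try'", "stop'"), "Jim'")))
    # ("try'", ("stop'", "Jim'"), "Jim'")  -- cf. (57a)
    print(raising_postulate((("seem'", "rich'"), "Jim'")))
    # ("seem'", ("rich'", "Jim'"))         -- cf. (57b)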
9.5 Reflexives

We now turn to a new phenomenon, that of reflexive pronouns. These have to agree with their antecedent in person, number and gender:

(60) a. John shaved himself / *myself / *herself.
     b. I shaved myself / *ourselves / *himself.
     c. They shaved themselves / *ourselves / *himself.

The existence of agreement here naturally makes reflexives candidates for treatment via CAP. Note that there are other aspects of their distribution than agreement:

(61) a. *Himself shaved.
     b. *John said that himself shaved.
     c. *John said that Mary shaved himself.

The phenomena seen in (61) (e.g. why John cannot be antecedent of himself in (61c)) have received much attention in GB, but here we shall be more concerned with the paradigm in (60); however, we shall only sketch the GPSG approach to this problem, basing ourselves in part on Hukari (1989).3

For the description of reflexive pronouns, we shall introduce a category-valued feature RE, the value of which is a feature specification matching that of its antecedent.4 So (62) would be lexical specifications for myself and himself:

(62) a. {[+N], [−V], [BAR 0], [+PRO], [RE {N2, [PER 1], [−PLU]}]}
     b. {[+N], [−V], [BAR 0], [+PRO], [RE {N2, [PER 3], [−PLU], [GEN MALE]}]}

In an example like (63), we need to ensure that the value of RE in (62a) is the same as the set of feature specifications on the antecedent, and that the relevant features are percolated correctly, as in (64).

(63) I talked about myself.
(64) [tree diagram not reproduced]

The FFP is responsible for percolating RE from the N2 to the P2, as RE is taken to be a foot feature. It cannot go higher in the tree because of (65).

(65) FCR: ~([AGR] & [RE])

Nothing can have both AGR and RE features.5 The area within which RE can percolate, i.e. up to the lowest node that is daughter of a node with an AGR feature, is the domain of reflexivization. The antecedent of the reflexive will generally be the node that controls that AGR feature.

3. For a slightly different approach, see Horrocks (1987: pp. 186–8).
4. Hukari in fact argues that agreement of reflexives is a semantic, not a syntactic, matter.
5. Hukari (1989) uses SUBJ instead of AGR, but this is confusing because of the Boolean SUBJ feature used to distinguish S from VP.
9.6 Other approaches to agreement

There has of course been much linguistic work done on agreement, and in this section we shall say a little about this and how it relates to the CAP and GPSG. Rather than survey a long tradition of work, we shall concentrate on two recent papers.

Lehmann (1988: p. 55) offers the following definition:

Constituent B agrees with constituent A (in category C) if and only if the following three conditions hold true:
a. There is a syntactic or anaphoric relation between A and B.
b. A belongs to a subcategory c of a grammatical category C, and A's belonging to c is independent of the presence or nature of B.
c. C is expressed on B and forms a constituent with it.

He claims that this definition coincides with the traditional scope of the notion of "agreement". It excludes the notion of government (as when one says that a German verb such as helfen ('help') governs an N2 in the dative, since helfen does not itself belong to the subcategory of "dative"). In the terminology we have used above, A is the controller and B the target. In the case of subject-verb agreement, the subject A belongs to the subcategory "third person singular" (for instance), and this subcategory is expressed on the verb B.

Lehmann claims (p. 58) that "all agreement refers to an NP", and he distinguishes between internal agreement (agreement of adnominal modifiers) and external agreement (agreement with some NP outside the agreeing term, e.g. subject-verb agreement). It seems fair to say that GPSG is concerned only with external agreement, in view of the FCR that only projections of verbs can have a value for AGR.6

Lapointe (1988: pp. 69–70) proposes the following:

The term morphosyntactic cooccurrence will be applied to any set of facts in which the specific morphological form of words appearing in sentences correlates with the presence, absence, or form of other words appearing in the sentences. The term agreement will be applied to those morphosyntactic cooccurrences in which there is an overt controller and in which the form of the controllee depends on universally specified semantic categories of the controller.

On the basis of this he lists many kinds of agreement, many of which are not catered for in the account of agreement found in Gazdar et al. (1985)—for instance, adjective and head noun, preposition and object, and possessed noun with possessor. It is again clear that the CAP could not handle all of these without some reformulation.

6. Though it may well be that grammars for other languages than English would not invoke such an FCR—in fact, Zwicky (1986) presents a GPSG analysis of NP-internal agreement in German which makes a wider use of AGR.
It is important not to draw the simple conclusion that the GPSG approach to agreement is wrong. Terms such as "agreement" are theory-dependent, and there is no a priori requirement that all and only what is treated under this heading by other linguists should fall within the terms of the agreement principles proposed within GPSG. It is sufficient that some clear cases of agreement be handled in an explanatory way, and the theory itself then defines the scope of application of the principles in question and thus defines the concept under consideration. It can certainly be said that GPSG does offer a general account of agreement, and a (rather limited) definition of agreement. But one should not expect everything that has ever been listed under "agreement" to be dealt with under the same heading.

9.7 Further reading

The CAP is discussed in Sells (1985: pp. 112–17), Horrocks (1987: pp. 188–97) and Gazdar et al. (1985: pp. 83–94). The existential construction with there is discussed in Baker (1989: pp. 355–65) and McCawley (1988b: pp. 84–8). On expletive subjects, see Gazdar et al. (1985: pp. 116–22).

It is difficult to recommend reading on the different types of raising and equi/control constructions that is not couched within a transformational framework. But Palmer (1974: Ch. 7) provides a fairly wide-ranging description of these phenomena, using the term "catenatives" for verbs that are followed by verbal constructions. Baker (1989: pp. 204–8, 361–4) discusses the same ideas, using the notion of "transparency": seem is said to be transparent because it simply passes its subject down to its complement. See also Kilby (1984: Ch. 9), McCawley (1988b: pp. 119–32) and Borsley (1991: Chs 10 and 11).
10 Metarules

This chapter introduces the final descriptive device employed by GPSG, the metarule. We also take the opportunity to increase our coverage of English, to include the constructions seen in (1).

(1) a. Has John left?
    b. The vegetables were bought by Bob.
    c. It bothers me that nothing is happening.

We shall deal, then, with inversion in questions, with passives and with further instances of expletive it.

10.1 Subject-auxiliary inversion

In 7.5 we presented a partial grammar of auxiliaries, which accounted for their occurrence in sentences such as (2).

(2) John may have been writing a letter.

The lexical ID-rules in (3) were formulated.
(3) a. VP[+AUX] → H[70], VP[BSE] (modals)
    b. VP[+AUX] → H[71], VP[PSP] (have)
    c. VP[+AUX] → H[72], VP[PRP] (be)

Each class of auxiliary determines the VFORM value of its sister VP, and this value is percolated to the head V (auxiliary or otherwise) of that VP by the HFC. We have not yet accounted for the inversion of subject and finite auxiliary in questions:

(4) a. Has John left?
    b. Has John been writing a letter?
    c. Is Fred waiting?
    d. Can you speak Spanish?

The precise constituent structure of such examples is a matter for debate. We shall assume it is as in (5).
(5) [tree diagram not reproduced]

The VP here has the same internal structure as in the corresponding declarative. The structure in (5) has the advantage of allowing nominative case to be assigned to the subject in the usual way, via the CAP and an FCR (see 9.2); this is possible only if the subject and auxiliary are sisters. It would not be so straightforward if the N2 and VP in (5) formed a constituent.

Assuming (5) to be correct, let us see which kinds of ID-rule are needed to generate it. As can be seen from (4) and (5), the auxiliary still determines the VFORM value of its sister VP, even though these are not adjacent in the local tree. So, alongside (3), we would need the set of rules in (6).

(6) a. S[+AUX] → H[70], N2, VP[BSE]
    b. S[+AUX] → H[71], N2, VP[PSP]
    c. S[+AUX] → H[72], N2, VP[PRP]

For instance, (6b) has applied in (5). The rules in (6) are lexical ID-rules, with the auxiliary as head of the S. Now, the approach seen in (6) works in that it guarantees generation of well-formed examples, but it is inadequate in that it fails to capture the generalization that auxiliaries determine the same VFORM value of their complement VP in both declarative and interrogative sentences. This is a pure coincidence if we just use the
rules in (6); suppose English were different in that auxiliaries selected distinct VFORM values in the inverted construction. The paradigm in (7) would be appropriate in this "pseudo-English":

(7) a. Has John writing a letter?
    b. *Has John written a letter?
    c. Is John write a letter?
    d. Should John written a letter?
    e. John has written a letter.
    f. *John has writing a letter.
    g. John is writing a letter.
    h. John should write a letter.

Pseudo-English can be described equally easily by the rules in (3) plus a suitably adapted set of rules like (6) (this is left for the reader). But it should be clear that pseudo-English lacks the generalization found in real English. What we need is a way of ensuring that the fact that auxiliaries select the same VFORM value in both inverted and non-inverted sentences follows from some general statement, rather than just happening to obtain. We need some mechanism for directly relating the rules in (3) to the rules in (6), but none of the devices of GPSG explained thus far can achieve this.

To solve this problem, GPSG employs a means of relating one set of ID-rules to another, or, more accurately, of defining a new set of ID-rules on the basis of rules already existing in the grammar. Such a device is called a metarule (the notion is taken over from descriptions of the programming language ALGOL 68). What a metarule says is roughly as follows: given a set of rules meeting pattern X, there is also a set of rules meeting pattern Y (where Y differs from X in some specified way). There is therefore no need to list the second set of rules explicitly; their existence and form follow from the metarule and the presence in the grammar of the first set of rules. Unless the contrary is stated, features of the first rule-set will reappear in the second rule-set, thus allowing generalizations about rule content to be captured. The syntax for metarules is given in (8).

(8) A → α
    ⇓
    B → β

The semantics is, roughly: for every rule meeting the pattern A → α there is another rule meeting the pattern B → β (where α and β are strings of symbols). The downward double arrow can be read elegantly as "gets you". Rather than continue at such a general level, let us state the metarule for subject-auxiliary inversion (9).
(9) VP → W
    ⇓
    S[+INV] → W, N2

For any rule rewriting VP, there is another rule that differs only in that the left-hand side is S[+INV] and the right-hand side contains an extra N2. INV is a Boolean head feature to handle inversion, and will be returned to shortly. The W is a metavariable over the right-hand side of rules.1 The idea is that whatever occurs on the right-hand side of the VP rules also occurs on the right-hand side of the S rules (with an additional N2, as specified by (9)). More concretely, (9) maps the set of rules in (3) into the set in (10), which is very similar to those in (6):

(10) a. S[+INV] → H[70], N2, VP[BSE]
     b. S[+INV] → H[71], N2, VP[PSP]
     c. S[+INV] → H[72], N2, VP[PRP]

The regular relation between the rules in (3) and (10) is now captured. Note that our pseudo-English could not be described using the metarule (9) or any other metarule: one would have to list the inverted ID-rules explicitly.
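Since a metarule is just a function from ID-rules to ID-rules, (9) can be given a direct computational rendering. The sketch below uses our running toy encoding (a rule is a pair of a mother category and a list of daughters, the list standing in for a multiset; using the Boolean SUBJ feature to distinguish S from VP is our own assumption here):

    # Metarule (9) as a function over ID-rules: VP -> W gets you
    # S[+INV] -> W, N2. Returns None if the input rule does not match.

    N2 = {"N": "+", "V": "-", "BAR": 2}

    def is_vp(cat):
        return (cat.get("V") == "+" and cat.get("N") == "-"
                and cat.get("BAR") == 2 and cat.get("SUBJ") == "-")

    def inversion_metarule(rule):
        mother, daughters = rule
        if not is_vp(mother):
            return None
        s_inv = dict(mother, SUBJ="+", INV="+")   # the mother becomes S[+INV]
        return (s_inv, daughters + [dict(N2)])    # W plus an extra N2

    # Rule (3b): VP[+AUX] -> H[71], VP[PSP]
    rule_3b = ({"V": "+", "N": "-", "BAR": 2, "SUBJ": "-", "AUX": "+"},
               [{"V": "+", "N": "-", "BAR": 0, "SUBCAT": 71},
                {"V": "+", "N": "-", "BAR": 2, "SUBJ": "-", "VFORM": "PSP"}])
    print(inversion_metarule(rule_3b))   # corresponds to rule (10b)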
The structure in (11) will be typical of those assigned (it is a fuller version of (5) above).

(11) [tree diagram not reproduced]

But by no means all relevant aspects of inversion have yet been covered. For instance, (9) makes no reference to the feature AUX, thus allowing for non-auxiliaries to be inverted, which is of course wrong:

(12) *Wrote John a letter?

We could build such a stipulation into (9), but what Gazdar et al. do (1985: p. 63) is to capture this via an FCR (13).

(13) FCR: [+INV] ⊃ [+AUX, FIN]

1. There is a technical point here, and the right-hand side of an ID-rule must be seen as a multiset, which differs from a set in being able to contain multiple instances of one element; see Gazdar et al. (1985: pp. 52–4).
Any category that is [+INV] must also be [+AUX] and [VFORM FIN]. The feature bundle for the verb node above wrote in (12) would be [+INV, −AUX] and so violate (13). Gazdar et al. (1985: p. 61) motivate this approach (i.e. separating the requirement that only finite auxiliaries can invert from the inversion metarule itself) by noting that a metarule like (9) would be needed in a number of other languages, whereas the constraint on inversion is parochial to English (e.g. in French and German all verbs invert in questions, not just auxiliaries). A reason internal to English might be that inversion occurs in other constructions, where it is again confined to auxiliaries:

(14) a. Never have I heard such nonsense.
     b. *Never heard I such nonsense.

It may be that (13) can be exploited in the description of the construction seen in (14) as well.

Lastly in this section, let us consider an account of supportive do:

(15) a. John did write a letter.
     b. Did John write a letter?
     c. *Did John have written a letter?
     d. *Did John be writing a letter?
     e. *Did John can write a letter?

Do can occur before the base form of a verb, in both inverted and non-inverted order; we shall simply generate (15a) and ignore the fact that it occurs only with stressed do. As (15c–e) show, do cannot occur before other auxiliaries. Do in this use is another auxiliary verb, needing an ID-rule such as (16).

(16) VP[+AUX] → H[46], VP[−AUX, BSE]

(16) stipulates that the complement of do must be [−AUX], as do cannot be followed by an auxiliary. (9) will map (16) into (17).

(17) S[+INV, +AUX] → H[46], N2, VP[−AUX, BSE]

(15b) will be assigned the structure (18).
(18) [tree diagram not reproduced]
10.2 Passive

This section is concerned with a classic problem for linguistic analysis, viz. the passive construction. We shall see that this too is amenable to description by means of a metarule. The active sentences in (19) each have passive counterparts as in (20).

(19) a. John bought the vegetables.
     b. Jim put the book on the table.
     c. Jim believed Bob to be the best.
     d. Jack gave Bill a book.

(20) a. The vegetables were bought by John.
     b. The book was put on the table by Jim.
     c. Bob was believed by Jim to be the best.
     d. Bill was given a book by Jack.

The relations here are self-evident: the active subject is expressed in the passive in a by-phrase (which is optional), the active object is subject of the passive, a form of be appears, and the main verb is in past participle form. We shall begin with these last two points. Let us say that the verb is in fact a passive participle, and that this is [VFORM PAS] (though in fact there is rather little reason for distinguishing PAS and PSP as values of VFORM). An ID-rule to rewrite VP as be plus passive participle might be (21).

(21) VP[+AUX] → H[n], VP[PAS]

Compare (21) with our earlier rule (3c) for be before a present participle (repeated here for convenience).

(3) c. VP[+AUX] → H[72], VP[PRP]

It would obviously be good to collapse rules (3c) and (21), i.e. to have just one rule introducing be. Before doing so, however, we should note that be occurs before other complements besides VP:

(22) a. John is very tall. (Adj2)
     b. The cat is in the corner. (P2)
     c. Jim is a genius. (N2)

It appears, then, that be can occur before any kind of maximal projection. Rather than write a separate rule for each possibility, we can take advantage of the possibility
of underspecification in ID-rules to write (23).2

(23) VP[+AUX] → H[7], [BAR 2]

We thus allow for be to occur before any extension of [BAR 2]. However, (23) is not enough as it stands. First, be cannot occur before adverbs:

(24) *John is very quickly.

This can be excluded by means of an FSD:

(25) FSD: [+ADV] ⊃ [BAR 0]

This prevents [+ADV] being instantiated on the non-head daughter in a local tree defined by (23); unless otherwise specified, [+ADV] can occur only on a lexical, not a phrasal, category. Secondly, be cannot occur before VPs with BSE or FIN or INF as values for VFORM (26).

(26) a. *Jack is win the race.
     b. *Jack is wrote a letter.

Given the generality of (23), it would be difficult to build these restrictions into it. Suppose instead that we posit a Boolean feature PRD (for "predicative"), which occurs on the [BAR 2] category in (23), which we now revise as (27).

(27) VP[+AUX] → H[7], [BAR 2, +PRD]

This feature has no effect unless the [BAR 2] category is instantiated with verbal features and so has a VFORM attribute; in that case, the value of VFORM must be either PAS or PRP. An FCR such as (28) can capture this.3

(28) FCR: [+PRD, VFORM] ⊃ ([PAS] ∨ [PRP])

We now allow for the passive sentences in (20) and (29), but not (26).

(29) John was writing.

Among the local trees licensed by (27) is (30).

2. We have changed the value of SUBCAT in (23) to [7], in order to bring it into line with the values employed in Gazdar et al. (1985).
3. Note the disjunction in the consequent of (28).
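In the toy encoding used earlier, the disjunctive consequent of (28) is just a membership test — a hedged sketch:

    def fcr_28(cat):
        """FCR (28): a [+PRD] category with a VFORM value must have
        VFORM PAS or PRP (the disjunction in the consequent)."""
        if cat.get("PRD") == "+" and "VFORM" in cat:
            return cat["VFORM"] in ("PAS", "PRP")
        return True

    print(fcr_28({"V": "+", "N": "-", "BAR": 2, "PRD": "+", "VFORM": "PAS"}))  # True
    print(fcr_28({"V": "+", "N": "-", "BAR": 2, "PRD": "+", "VFORM": "BSE"}))  # False -- (26a)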
(30) VP[+AUX]
          V0[7]    VP[+PRD, PAS]

We now consider how this passive VP is expanded. We want it to dominate the sequences in (31) (see (20)).

(31) a. bought by John
     b. put on the table by Jim
     c. believed by Jim to be the best
     d. given a book by Jack

We could just write separate ID-rules for each of these (32).

(32) a. VP[PAS] → H[2], P2[BY]
     b. VP[PAS] → H[6], P2[+LOC], P2[BY]
     c. VP[PAS] → H[17], VP[INF], P2[BY]
     d. VP[PAS] → H[5], N2, P2[BY]

The P2 here is required to have [PFORM BY]. Note that we can have the same value for SUBCAT here as in the corresponding active rules, since PAS will percolate to the V0 node, and only the appropriate inflectional form of the lexeme will be insertable into a tree. However, this approach encounters the same problems as we saw with auxiliaries in the previous section, viz. that the relation between active and passive rules is purely coincidental, and no attempt is made to capture the regularity of the relation. In addition, the fact that the by-phrase is optional must be captured by enclosing it in brackets in every single passive rule (and there must be a separate rule for each SUBCAT value that is passivizable). What is needed instead is a way of relating active VP rules to passive VP rules, a way of defining rules like (32) in terms of their active counterparts, i.e. a metarule. The passive metarule can be stated as (33).

(33) For every ID-rule that allows VP to dominate N2 plus some other material, there is another ID-rule that allows VP[PAS] to dominate this same material without N2 but with an optional P2[BY].

For instance, (33) maps (34a) into (34b) (see (32b)).

(34) a. VP → H[6], N2, P2[+LOC]
     b. VP[PAS] → H[6], P2[+LOC], (P2[BY])
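In the metarule format of (8), (33) comes out roughly as follows (this formal version follows the statement in Gazdar et al. 1985: pp. 57–60, transposed into the N2/P2 notation used here; W is again a metavariable over the rest of the right-hand side):

VP → W, N2
⇓
VP[PAS] → W, (P2[BY])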
An example tree is shown in (35).

(35) [tree diagram not reproduced]

We must now ask what is to stop [VFORM PAS] being instantiated on any V0 node, thus giving examples like (36).

(36) *John given the book.

This is handled by an FSD (37).

(37) FSD: [BAR 0] ⊃ ~[PAS]

A [BAR 0] category cannot be [VFORM PAS] unless this is due to VFORM being a privileged feature. Let us now consider the analysis of (20c), repeated here. This will be assigned the structure shown in (38).

(20) c. Bob was believed by Jim to be the best.
(38) [tree diagram not reproduced]
Though not shown in (38), the AGR features of the topmost VP will be percolated to its VP[PAS] daughter and hence to the VP[INF] daughter. So Bob will be the controller of the embedded infinitival VP (any reflexive therein would have to agree with Bob).

There is one aspect of passives not considered above, viz. the fact that some passive participles are adjectives rather than verbs. For instance, they may be preceded by negative un-, be modified by very or occur pre-nominally, as in (39).4

(39) a. Antarctica was uninhabited by humans for centuries.
     b. He is very respected by everyone.
     c. An injured player should be substituted.

Such examples will be generated not via our passive metarule, but by ordinary rules introducing adjectives. An example structure is shown in (40) (see Gazdar 1982: pp. 164–5).
(40) [tree diagram not reproduced]

The relation between the adjective closed and the verb close would have to be handled by some kind of lexical statement (see 13.2 and 13.3).

It has sometimes been claimed (e.g. Pulman 1987) that a shortcoming of the GPSG metarule analysis of passives is that it provides no way of generating prepositional passives as in (41).

(41) a. John can be relied on.
     b. This bed has been slept in.

This is because the "missing N2" inside the passive VP is part of a P2, not a daughter of VP. But this objection falls once it is appreciated that prepositional passives are in fact adjectival (42).

(42) a. This bed is unslept in.
     b. That book is very sought after.
     c. It is the most alluded to book since his last one.

4. See Wasow (1977: pp. 338ff.); Kilby (1984: Ch. 5).
In this section we have presented a GPSG analysis of the basics of the passive. In section 13.2, we shall consider some alternative analyses of the passive. Before leaving passives for now, however, we should say a little about their meaning, since the synonymy of actives and passives is an important fact to be captured. In 9.4 we used meaning postulates to state the interpretation of equi and raising verbs, and we can use a comparable device here. Suppose we have the active-passive pair (43).

(43) a. Jim loves May.
     b. May is loved by Jim.

While (43a) is represented in intensional logic as (44a), (43b) might be represented as (44b) (using an ad hoc way of showing that we have a passive verb).

(44) a. (love′ (May′)) (Jim′)
     b. (love-PASS′ (Jim′)) (May′)

A simplified version of the requisite meaning postulate would be:

(I) The first nominal sister of a passive verb is interpreted as the last nominal argument of the active verb; the last nominal sister of a passive verb is interpreted as the first argument of the active verb.

This accounts for the synonymy of (44a, b). Use of model-theoretic semantics and meaning postulates enables semantic interpretation to be carried out on ordinary syntactic structures. We can illustrate this point further by considering the interaction of passive and raising as in (45).

(45) John is believed by Bob to be rich.

The surface relations here are very different from the logical ones, since John is in fact the logical subject of be rich. The representation of (45) would be (46).

(46) ((believe-PASS′ (rich′)) (Bob′)) (John′)

The meaning postulate (I) states that (46) is equivalent to (47).

(47) ((believe′ (rich′)) (John′)) (Bob′)

Our meaning postulate for raising verbs (see 9.4) applies to (47), stating that John is an argument of rich, giving (48).

(48) (believe′ (rich′ (John′))) (Bob′)
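The simple transitive case of postulate (I) can be rendered in the same toy term-rewriting style as the equi and raising postulates of 9.4 (again our own illustrative encoding; the -PASS suffix convention mirrors the ad hoc notation in (44)):

    def passive_postulate(term):
        """(I), transitive case: ((V-PASS y) x) => ((V x) y) -- swap the
        passive verb's two nominal arguments and restore the active verb."""
        (verb_pass, first_arg), second_arg = term
        active_verb = verb_pass.replace("-PASS", "")
        return ((active_verb, second_arg), first_arg)

    print(passive_postulate((("love-PASS'", "Jim'"), "May'")))
    # (("love'", "May'"), "Jim'") -- (44b) is interpreted as (44a)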
Representation (48) is the same as that of the straightforward sentence (49).

(49) Bob believes that John is rich.

We have thus obtained the correct interpretation of (45) in a compositional manner, using our ordinary statements for passive and raising structures.

10.3 Extraposition

In 9.3 we considered examples such as (50a), to which we assigned the structure (50c) on the basis of the ID-rule (50b).

(50) a. It seems that the summer has finished.
     b. VP[+IT] → H[21], S[FIN]
     c. [tree diagram not reproduced]

Now we consider cases where an embedded sentence can occur in subject position as well as after the matrix verb:

(51) a. That you should say such a thing bothers me.
     b. It bothers me that you should say such a thing.

(52) a. For him to make such a suggestion strikes me as dangerous.
     b. It strikes me as dangerous for him to make such a suggestion.

(53) a. That you go first appeals to me.
     b. It appeals to me that you go first.

The sentential subject possibility does not occur with seem.

Let us begin by considering the structure of the (a) examples. These have sentences in the subject position, and we might think of saying that these sentences
are dominated by N2 as well as by S, since subjects are regularly N2. This would imply a rule such as (54).

(54) N2 → S

But, unless extra constraints are somehow added, an approach using (54) will introduce sentences wherever an N2 can occur, e.g. (55).

(55) a. *John cooked that Fred had lost.
     b. *John waited for that Bob had finished.
     c. *John asked for that Mary left's number.

Rule (54), then, will introduce enormous complications. As an alternative, we can simply say that the initial clauses in (51a)–(53a) really are sentential subjects, assigning a structure such as (56).
(56) [tree diagram not reproduced]

Of course, this also seems to require an extra rule, viz. (57).

(57) S → Sbar, H[−SUBJ]

But we shall now show that no rule such as (57) is needed. It will be recalled from 9.2 that the subject controls the VP, and hence the value of AGR on the VP must be identical to the features on the subject. We have used features such as [AGR {N2[PER 3, −PLU, CASE NOM]}] to ensure that the subject is (in this case) third person singular nominative. But clearly we could equally use this value for AGR to ensure that the subject is N2, since if the subject is anything else there will be a CAP violation. So we do not need rule (57), or our familiar rule for S, (58).

(58) S → N2, H[−SUBJ]

Instead all we need is (59).
(59) S → [BAR 2], H[−SUBJ]

The subject just has to be an extension of [BAR 2]. For sentences like (51a)–(53a), we need to ensure that the subject is S (or Sbar, but we shall speak of S for the sake of simplicity). Since only a subset of verbs allow sentential subjects, we can specify in the lexical ID-rules introducing them that the VP is [AGR S], which will imply that the subject is sentential too. ID-rules for the verbs in (51a)–(53a) could be as in (60).5

(60) a. VP[AGR S] → H[20], N2
     b. VP[AGR S] → H[121], N2, P2[AS]
     c. VP[AGR S] → H[122], P2[TO]

The account does require that all verb forms that are [AGR S] are identical to those that agree with a third person singular N2. We shall leave this as something purely coincidental, although it should preferably be a consequence of some general statement.

We can now come to the (b) sentences in (51)–(53), which exemplify a construction commonly known as extraposition. An obvious possibility would be to write a series of rules similar to (50b), viz. (61).

(61) a. VP[+IT] → H[20], S
     b. VP[+IT] → H[121], N2, P2[AS], S
     c. VP[+IT] → H[122], P2[TO], S

But it should be clear that there is a massive redundancy between the rules in (60) and those in (61): the forms of the latter are entirely predictable given the forms of the former. It should be no surprise that we can improve our description by employing a metarule, to define (61) in terms of (60). Note that this could not be done in reverse, since verbs such as seem occur in the extraposition construction but not with a sentential subject, whereas all verbs that can take a sentential subject also allow extraposition. The Extraposition metarule is (62) (see Hukari & Levine 1991: p. 114).

(62) [AGR S] → W
     ⇓
     [AGR S[NFORM IT]] → W, S

For every rule meeting the topmost pattern, there is another rule that differs only in that the left-hand-side item agrees with the expletive it and the right-hand side contains an extra S category. So the rules in (61) do not have to be stated explicitly;

5. Without going into the issue, we shall just assume that as dangerous as in (52a) is a P2 with [PFORM AS], though as takes a wider range of complements than most prepositions; see Emonds (1985: pp. 264–79).
they can be defined on the basis of the rules in (60). Semantically, the it is empty and will not show up in representations of the meaning of extraposed sentences (see 9.3 on seem).

It may have been noticed that the pattern in (62) makes no reference to the mother category being a VP. This is unnecessary in the account of Gazdar et al. (1985: p. 118), because of (63) (example (10) in 9.2).

(63) FCR: [AGR] ⊃ [−N, +V]

However, the question arises of the analysis of data such as (64).

(64) a. That Len should resign is obvious.
     b. It is obvious that Len should resign.

A host of adjectives participate in this alternation (e.g. easy, clear, etc.), but usually they restrict the VFORM possibilities in the embedded clause (65).

(65) a. It is easy for Len to resign.
     b. *It is easy that Len should resign.

Gazdar et al. (1985: p. 247) give the rule (66) (slightly simplified here).

(66) A1[AGR S] → H[25]

But the mother in a local tree licensed by this rule cannot be well-formed, because of the FCR (63). To cater for (64b), we need a rule such as (67).

(67) A1[+IT] → H[25], S[FIN]

It seems that we must revise (63) so that it allows projections of either adjective or verb to carry AGR:

(68) FCR: [AGR] ⊃ [+V]

Given (68), we can in fact allow (67) to be generated from (66) by the metarule. We here follow Hukari & Levine (1991: pp. 114–15) in stating the extraposition metarule so that it applies to Adj1s as well as VPs. For (64b), rule (67) appears to suffice, as long as we allow [+IT] to be percolated up to A2 and VP as in (69).
(69) [tree diagram not reproduced]

The problem is that A2 is not head of the VP (though it might be claimed that obvious is the semantic head of VP, rather than be), so the HFC cannot be used to percolate [+IT]. However, any alternative involving special statements about the be in examples like (64) should be avoided. It is intuitively clear that it is the adjective, not the verb be, that determines that the subject must be expletive it. The key here is the extended version of the CAP given in 9.4, containing the clause "In the absence of a controller, the control features on a predicative category are identical to those on its mother". The A2 in (69) has no controller, so its control features are the same as those on its mother, thus enabling feature percolation between A2 and VP (it is instructive to compare (69) with tree (46) in 9.4).

10.4 The power of metarules

Metarules are potentially a very powerful device, which risks expanding the set of ID-rules enormously. Their power can be limited by imposing the following constraint:

Metarules map from lexical ID-rules to lexical ID-rules.

So only rules that introduce extensions of SUBCAT can act as input or output to metarules. This constraint was not enforced in earlier versions of GPSG, and some very powerful metarules were made use of then. It will be seen that the three metarules proposed in earlier sections of this chapter all meet this constraint.

It is tempting to think of metarules as similar to the transformations of all varieties of transformational grammar (including GB). Both are intended to capture generalizations across construction types, and indeed linguists once posited
Inversion, Passive and Extraposition transformations, corresponding to the metarules proposed here. But metarules and transformations are very different kinds of animal. Transformations map one tree structure into another, and allow for the generation of non-context-free languages. But metarules are really a means of abbreviating rule-sets, and do not increase the generative power of the grammar in a comparable way (see Gazdar et al. 1985: pp. 65–7 for discussion).

10.5 Further reading

On metarules, see Horrocks (1987: pp. 177–82), Sells (1985: pp. 90–96) and Gazdar et al. (1985: Ch. 4). The specific metarules discussed here are dealt with in Gazdar et al. (1985) as follows: Inversion (pp. 60–65), Passive (pp. 57–60), Extraposition (pp. 117–19). The English passive is discussed in Baker (1989: pp. 209–24) and Kilby (1984: Ch. 5).
11 Unbounded dependencies

In 8.2 and 8.3 we gave an account of a subset of English wh-questions and relative clauses, specifically types such as:

(1) a. Who won the match?
    b. the man who talked to me

In this chapter we look at a far wider range of questions and relatives, and introduce an important new feature. We shall also discuss some other construction types. All such types feature what we shall call a fronted element (here the wh-form).

In 8.1 we used the Q and R features to describe interrogative and relative proforms, adding that these were in fact simplifications. In 9.1 we introduced the notion of category-valued features, and we can now make use of this idea to give fuller descriptions of the pro-forms in question. Let us posit the category-valued feature WH, which signals that an item is a wh-proform. One of its values will be N2, which will have features for number and also have the feature WHMOR (which specifies the precise kind of wh-morphology). The only values of WHMOR we shall employ are R (for relative pronouns) and Q (for interrogative ones). We shall continue to use [+R] and [+Q] as abbreviations, respectively, for [WH N2[WHMOR R]] and [WH N2[WHMOR Q]]. So in place of (2a) (taken from 8.1) a fuller version would be (2b).

(2) a. who: {[+N], [−V], [BAR 2], [+Q]}
    b. who: {[+N], [−V], [BAR 2], [WH {N2[WHMOR Q]}]}
However, in rules and trees we shall generally keep to expressions like N2[+Q]. Note that now it is WH that has to be regarded as a foot feature. In addition, we now replace the FCRs (3a, b) introduced in 8.2 and 8.3 with the more general one (4).

(3) a. FCR: VP ⊃ ~[R]
    b. FCR: VP ⊃ ~[Q]

(4) FCR: VP ⊃ ~[WH]

The kinds of wh-construction covered in this chapter are exemplified in (5).

(5) a. Which team did they beat?
    b. Who did you give the book to?
    c. Who do you think I just saw?
    d. the man who I talked to
    e. the man who I expected to win
    f. the house which I thought you said John was going to buy

Unlike the examples in (1), those in (5) cannot be treated as straightforward instances of N2 + VP. This is because the wh-phrase is not understood as the subject of the immediately following verb. It may be object of the next verb (5a), or it may be related to some verb in an embedded clause (5c, f). In fact, the relation between the wh-form and its "understood" position may be of any distance. Hence these phenomena are often called unbounded dependencies. The examples in (6) give more of an insight into the nature of this dependency.

(6) a. On what does he rely?
    b. *At what does he rely?
    c. Where do you suggest he put it?
    d. *What do you suggest he put it?

For instance, (6a, b) show that rely still requires a P2 with on even when this P2 is fronted rather than being in its canonical object position. The name "unbounded dependency" is intended to distinguish such constructions from local phenomena like passives, where the relation between active and passive verbs can be stated in a metarule that maps one set of ID-rules to another. The object of an active verb relates to the subject of the same verb in passive form; there is no link between a passive subject and some deeply embedded verb. The idea that there is a distinguishable set of unbounded dependency constructions (extending beyond questions and relatives) has been developed within the transformational paradigm; we shall see that GPSG makes a distinctive analysis of such structures.
11.1 Slash categories in relative clauses

We shall begin by looking at how relative clauses involving unbounded dependencies are handled, and will start by considering in detail example (7).

(7) the man who you saw

In 8.3, considering (8a), we proposed rule (8b), and assigned structure (8c).

(8) a. the man who saw you
    b. N1 → H, S[+R]
    c. [tree diagram not reproduced]

The FFP can again be used to ensure that [+R] appears on both the S node and its daughter N2 in the tree for (7), but the rest of the structure of this S is problematic. Notice the paradigm in (9).

(9) a. *the man who you saw Jim
    b. You saw Jim.

Comparing (7) with (9a), we can see that inside a relative clause saw occurs without an object, whereas usually it can occur with one. If we choose an obligatorily transitive verb, the contrast is even starker:

(10) a. the man who Jim resembles
     b. *the man who Jim resembles Bob
     c. *Jim resembles
     d. Jim resembles Bob.

In fact it is a general property of relative clauses that (to put it in as uncommitted a way as possible) something that one would expect to be present in the relative is
absent. It is often said that relatives contain a gap, because an expected constituent is missing. The existence of a gap creates problems for the generation of examples such as (7) and (10a). Specifically, it is not immediately clear how the verb can occur in what is not its usual subcategorization frame. Suppose we consider the sequence after the relative pronoun in (7) and (10a), viz. you saw and Jim resembles. These are like ordinary clauses except for the big difference that they have an (object) N2 missing, i.e. there is a gap. The examples (5d–f) above also have an N2 missing from an otherwise clause-like sequence after the relative pronoun. Here we indicate the position of the gap by G:

(11) a. the man who [I talked to G]
     b. the man who [I expected G to win]
     c. the house which [I thought you said John was going to buy G]

Let us now see what we can make of the notion "a clause with a missing N2". Like all characteristics of linguistic objects, the fact that something is missing from a constituent can be encoded in GPSG by means of a feature. So we could posit a category-valued feature GAP and say that a clause with a missing N2 was S[GAP N2]. However, GPSG does not in fact use the attribute-label GAP, but employs the name SLASH instead, as in S[SLASH N2], or more fully as in (12).

(12) {[+V], [−N], [BAR 2], [COMP NIL], [+SUBJ], [SLASH N2]}

The reason for this name dates back to the earliest versions of GPSG, when SLASH was not treated as a feature in its own right, and indeed the theory of features was still relatively undeveloped. A set of slash categories was proposed, with "S with missing N2" notated as "S/N2". In fact, it is convenient to use "S/N2" as an abbreviation for (12).1

What we have done so far, then, is to propose that (7) has a structure such as (13).

(13) [tree diagram not reproduced]

We do need an extra rule, however:

(14) S → N2, H/N2

We shall see in due course that we cannot do away with (14) entirely, but we can generalize it a little. Let us now turn our attention to the internal structure of the S/N2 in (13). This S with an N2-gap contains a VP with an N2-gap:

(15) S/N2
          N2    VP/N2

We do not need to write special ID-rules for such local trees: we can rely on feature instantiation principles to ensure that [SLASH N2] occurs on both S and VP. We shall in fact claim that SLASH is both a foot feature and a head feature (it is the only feature belonging to both types, but there is nothing to stop this from being the case). Both HFC and FFP operate in (15), then.

1. Note that this is quite different from the slash notation used in categorial grammar and briefly mentioned in 4.5.
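In passing, the "slash" abbreviation itself is easy to model in our running toy encoding — S/N2 is just S with one more (category-valued) feature, cf. (12). A hedged sketch:

    def slash(cat, gap):
        """Build 'X/Y': category X carrying [SLASH Y], cf. the bundle (12)."""
        return dict(cat, SLASH=dict(gap))

    S  = {"V": "+", "N": "-", "BAR": 2, "COMP": "NIL", "SUBJ": "+"}
    N2 = {"N": "+", "V": "-", "BAR": 2}
    print(slash(S, N2))   # the full feature bundle abbreviated as S/N2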
(13)
We come now to the internal structure of VP/N2. Recall that we want to ensure that transitive verbs (of [SUBCAT 2]) will occur here, despite the absence of an overt object. The ordinary lexical ID-rule for this class of verb will generate a local tree like (16a) not (16b).
(16) a. [VP V0 N2]
     b. [VP V0]
The problem now is to reconcile (16a) with the fact that the VP appears to consist of just a verb, with no object. To solve this, we can return to our notion of an N2-gap. Let us say that an N2-gap is not literally nothing in terms of phrase-markers, but is rather an N2 that dominates no lexical material. For these purposes we can make use of the notion of an empty string2 and suggest that it looks like this (17).
(17) N2
     e
Of course, we need to stop such empty N2s being generated all over the place. In fact, we need to limit their appearance to unbounded dependency constructions. This can be done as follows. First, the internal structure of VP/N2 will be (18).
2. The idea of empty strings (strings with zero symbols) is standard within formal language theory; see Partee et al. (1990: p. 434).
(18)
VP/N2
V0 N2/N2 The FFP requires the [SLASH N2] instantiated on the mother in this local tree to be instantiated on a daughter too. Despite being a head feature, SLASH cannot occur on V0 because of (19). (19) FCR: [SUBCAT] ~[SLASH] If a category contains a feature SUBCAT, it cannot have the feature SLASH. This is quite reasonable, since the idea of “a verb with a missing N2” is incoherent. Secondly, we can allow the empty string to be inserted under a category that is [+NULL], and constrain the occurrence of this feature by means of (20). (20) a. FSD: ~[NULL] b. FCR: [+NULL] [SLASH] NULL occurs only when required by an ID-rule (it can never be privileged by virtue of co-variation since it is neither a head nor a foot feature). And any [+NULL] category must have a value for SLASH. This means that our V2/N2 will look like (21).
(21) But in view of (20a), how does [+NULL] get licensed in (21)? Clearly we need an ID-rule that allows an N2 to be [+NULL]; and in fact we need a whole string of them to introduce N2-gaps for each N2 in a lexical ID-rule. Rather than write all such rules specifically, we can use a metarule to create them automatically. The metarule in question is Slash Termination Metarule 1 (there is a second such rule, to be met later): (22) For every lexical ID-rule introducing a [BAR 2] category, there is another such ID-rule differing only in that the [BAR 2] category is [+NULL]. (20b) ensures that this category also has a value for SLASH, so we need not state this in (22). Note that we have not mentioned in (22) that the [BAR 2] category must be N2, since (as we shall see) there are gaps of other categories. (22) maps a rule such as (23a) into (23b). (23) a. VP H[2], N2 b. VP H[2], N2[+NULL]
The structure of (7) will now be (24).
(24)
Examples such as (9a) and (10b) will not be generated, since they would contain lexical N2s dominated by N2[+NULL]/N2, and only the empty string can fill this position. A more complex example such as (25a) would be analysed as in (25b) (next page). The existence of this chain of [SLASH N2] categories is ensured by the HFC and FFP.

11.2 More on slash categories in relatives

Cases where the gap is some category other than N2 are handled in a similar way,3 for example (26) (see page 155). However, there are a couple of further points to be made here. Firstly, in addition to ID-rule (27a) (introduced in 11.1) we seem to need rule (27b).
(27) a. S → N2, H/N2
     b. S → P2, H/P2
It should be clear that there is something wrong in positing two such similar rules; moreover, the fact that in each case the category of the first element on the right-hand side is identical to the value of SLASH on the second element is purely coincidental.
3. This phenomenon where the fronted item is more than just a relative pronoun is sometimes known as pied piping.
(25) a.
the man who you think you saw
b.
Suppose that instead we write one general rule (28).
(28) S → [BAR 2], H/[BAR 2]
An S can consist of any [BAR 2] category followed by an S with [SLASH {BAR 2}]. What now ensures that these [BAR 2]s are instantiated in the same way? We can approach the answer by noting that the feature bundles here have to agree (cf. (27)). As we saw in Chapter 9, the CAP captures agreement facts in GPSG, and so could be used to ensure agreement in local trees generated by (28). So we shall say that, like AGR, SLASH is a control feature (cf. 9.2); note that both control features are category-valued. And the [BAR 2] category in (28) is the controller, while the S is the target. We repeat here the CAP (in its simpler form, since that is all that is needed now):
The Control Agreement Principle (CAP)
The control features on a target category must be identical to the features on the controller.
So in a local tree licensed by (28), any features instantiated on the [BAR 2] item must also be instantiated as values of the SLASH feature on the S node.
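To see what the CAP demands in trees licensed by (28), here is a small illustrative check in the same toy dict encoding as before (my own sketch, not the book's formalism): the SLASH value on the target must be identical to the controller's category.

    def cap_ok(controller, target):
        """True iff the target's SLASH value is identical to the controller."""
        return target.get("SLASH") == controller

    n2 = {"N": True, "V": False, "BAR": 2}
    p2 = {"N": False, "V": False, "BAR": 2}
    s_n2 = {"V": True, "N": False, "BAR": 2, "SUBJ": True, "SLASH": n2}
    s_p2 = {"V": True, "N": False, "BAR": 2, "SUBJ": True, "SLASH": p2}

    print(cap_ok(n2, s_n2))   # True  -- the permitted tree (29) below
    print(cap_ok(n2, s_p2))   # False -- the CAP violation in (30) below

Dict equality here compares all feature-value pairs, which mirrors the CAP's demand for identity rather than mere compatibility.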
(26) a.
the man on whom we rely
b.
Thus a local tree such as (29) is permitted.
(29) {[−N], [+V], [BAR 2], [+SUBJ], [COMP NIL]}                                    (=S)
     {[+N], [−V], [BAR 2]}                                                         (=N2)
     {[−N], [+V], [BAR 2], [+SUBJ], [COMP NIL], [SLASH {[+N], [−V], [BAR 2]}]}     (=S/N2)
But (30) is a CAP violation.
(30) {[−N], [+V], [BAR 2], [+SUBJ], [COMP NIL]}                                    (=S)
     {[+N], [−V], [BAR 2]}                                                         (=N2)
     {[−N], [+V], [BAR 2], [+SUBJ], [COMP NIL], [SLASH {[−N], [−V], [BAR 2]}]}     (=S/P2)
Secondly, we need to ensure that in (26b) a preposition appropriate to the verb is chosen, as in (31).
(31) * the man at whom we rely
In other words, the fronted [BAR 2] and the gap must agree in value for PFORM. This is now perfectly simple, as the value for SLASH can include [PFORM ON]. So we will have the local tree (32).
(32) S
     P2[ON]  S/P2[ON]
The CAP would be violated if the values for PFORM here varied. The string of [SLASH P2] categories in (26b) is in fact a string of categories with [SLASH P2[ON]], and only where an element is of the type α[+NULL]/α is an empty string possible. So a fronted on-phrase must correspond to a gap with [PFORM ON]. Thus nothing special needs to be said in the grammar about such examples.4
The use of the CAP here implies that there will always be agreement between the category of the fronted element and that of the gap. It has been claimed, however, that this agreement is not always found. Example (33) is from Kaplan & Bresnan (1982: p. 242).5
(33) That he might be wrong he didn’t think of.
Here, the fronted item is an Sbar, while the gap is N2. Such examples would need to be treated as special cases in GPSG—but in fact virtually all speakers reject examples like (33).
The other point we wish to discuss in this section is why examples like (8a) are handled in the way seen in 8.1 and not via the slash mechanism. The reason is that metarules (including Slash Termination Metarule 1) apply only to lexical ID-rules (cf. 10.4). So the metarule just referred to would not apply to the ID-rule (34a) to give a new rule (34b).
(34) a. S → [BAR 2], H[−SUBJ]
     b. S → [BAR 2, +NULL], H[−SUBJ]
If the grammar contains no rule such as (34b), and there is an FSD stating that the default is for a category not to have a value for NULL, then the tree (35) (next page) is ill-formed. Instead the simpler analysis given earlier suffices. In addition, (35) is excluded because the S/N2 has a head daughter without a SLASH feature, thus violating the HFC. However, we now have a problem, viz. examples like (36a) where the fronted item relates to the subject of an embedded clause. (36b) might be a possible tree for (36a). But this is ill-formed, since no rule sanctions the local tree dominated by the lower S/N2. At the same time, we cannot analyse (36a) just like (36c), where we can say that who won the race is just N2+VP.
4. Note that reliance on CAP also accounts for matching of case between fronted element and gap in the man whom I saw.
5. This is not an example of a wh-construction, but it would be handled via the slash mechanism in GPSG.
(35)
(36) a. the man who I think won the race
     b.
     c. the man who won the race
We really do seem to need the chain of slash categories in (36b), but they have to “bottom out” in some other manner. What GPSG does here is to posit a further metarule, Slash Termination Metarule 2:
(37) For every lexical ID-rule introducing a finite S, there is another rule introducing VP whose left-hand side differs in having [SLASH N2].
For instance, (37) maps (38a) into (38b).
(38) a. VP → H[40], S[FIN]
     b. VP/N2 → H[40], VP[FIN]
(38b) will be instrumental in generating (39).
(39)
The local tree (40) is of special interest in (39).
(40)
VP/N2
V0  VP[FIN]
Here we have a slash category that does not dominate a slash category. First, this does not contravene any special requirements about the distribution of slash categories. Secondly, it does not violate either the HFC or FFP. The HFC does not force [SLASH N2] onto the V0 node because of the FCR (19), to the effect that [SUBCAT] implies no value for [SLASH]. And the FFP is inapplicable since it applies only to instantiated foot features, and the [SLASH N2] is not instantiated on the mother in (40), being inherited from (38b).
One of the advantages of this analysis is that it provides a neat account of the so-called “complementizer-trace” (or that-trace) facts:
(41) a. the man who I think won the race
     b. * the man who I think that won the race
     c. the race which I think (that) John won
Paradigms such as this (especially the ungrammaticality of (41b)) have been made much of in GB, which has sought to give a universal-based account. In GPSG, the sequence that won the race in (41b) will not be generated by (38b), since VPs do not contain complementizers. It might be added that, if we adopted an analysis along the lines of (36b), there would be no automatic explanation for the deviance of (41b).
In fact, there are good reasons to believe that the complementizer-trace phenomena should not be explained on the basis of some universal principle of grammar. First, examples comparable to (41b) are grammatical in Dutch (here we turn to interrogatives, but this makes no difference in principle):
(42) Wie vertelde je dat gekomen was?
     WHO SAID YOU THAT COME WAS
     ‘who did you say (that) had come?’
Secondly, even English shows some dialectal variation here (cf. Sobin 1987): apparently some speakers do accept examples like (43a). Perhaps such speakers include rule (43b) in their grammars.
(43) a. Who did you say that kissed Harriet?
     b. S/N2 → H[−SUBJ, SLASH NIL]
A sentence with a missing N2 can consist of just a VP with no value for SLASH. However, it remains to be seen what the implications of such a rule would be (it might be needed for Dutch as well as for some varieties of English).
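Before moving on, note that the exemption of V0 from SLASH, which is part of what makes (40) legal, is easy to state concretely. In the toy dict encoding used earlier (my illustration only, with invented names), FCR (19) is a one-line check:

    def fcr_19(cat):
        """FCR (19): a category bearing SUBCAT may not bear SLASH."""
        return not ("SUBCAT" in cat and "SLASH" in cat)

    n2 = {"N": True, "V": False, "BAR": 2}
    v0_good = {"SUBCAT": 40}                 # the V0 daughter in (40)
    v0_bad  = {"SUBCAT": 40, "SLASH": n2}    # "a verb with a missing N2"

    print(fcr_19(v0_good))  # True
    print(fcr_19(v0_bad))   # False -- incoherent, as noted under (19)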
11.3 Other relative types

So far we have dealt only with finite relative clauses containing a relative pronoun such as who. However, it is also possible for relatives to contain no relative pronoun at all, or for them to contain that, or for them to contain a non-finite clause:
(44) a. the man (that) I was talking to
     b. a plumber to fix the sink
We shall examine non-finites later in this section, and begin with zero and that-relatives. These differ from wh-relatives in a number of ways. First, pied piping is never possible:
(45) * the man to (that) I was talking
Secondly, a zero relative is not possible in a subject relative clause:6
(46) a. the man who was talking
     b. the man that was talking
     c. * the man was talking
Thirdly, the gap in the relative clause must be an N2:
(47) * the table I put the book
There has been considerable discussion about the category status of relative that: is it a relative pronoun, a complementizer, a conjunction, or what? This dispute is surveyed by Van der Auwera (1985), who suggests that it is a “highly pronominal relativizer” (whatever that means). We shall not go into the arguments here, but simply note that the data seem to follow rather easily in GPSG if one assumes that to be a complementizer (or, at least, if one assumes it not to be a relative pronoun). For an example like (44a) the ID-rule (48) suffices.
(48) S[+R] → ([SUBCAT THAT]), H/N2
A relative clause can consist of an optional complementizer and an S with an N2 gap; recall from 7.2 that complementizers simply bear the feature SUBCAT. So (44a) would be analyzed as (49a–b) (next two pages). For subject relatives with that we need a separate rule (50).
(50) S[+R] → [SUBCAT THAT], H[−SUBJ]
This will result in structures such as (51) (see page 162). If the gap is in an embedded clause (as in (52)), rule (48) is still appropriate and the slashed category is handled in a way already seen (cf. 11.2).
(52) the man (that) I believe has won
6. (46c) is of course a well-formed S but not a well-formed N2.
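Rule (48) uses a parenthesised, i.e. optional, daughter. Optionality of this kind is just an abbreviation for two rules, and the expansion can be spelled out mechanically. The sketch below is my own toy rendering, with daughters given simply as strings, not anything from the book:

    def expand_optional(mother, daughters):
        """Expand (category, optional?) daughter pairs into plain rules."""
        rules = [[]]
        for cat, optional in daughters:
            with_cat = [r + [cat] for r in rules]
            rules = with_cat + [list(r) for r in rules] if optional else with_cat
        return [(mother, r) for r in rules]

    rule_48 = ("S[+R]", [("[SUBCAT THAT]", True), ("H/N2", False)])
    for mother, ds in expand_optional(*rule_48):
        print(mother, "->", ", ".join(ds))

This prints the two concrete variants of (48): one with the complementizer, generating that-relatives, and one without, generating zero relatives.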
(49)
a.
(49)
b.
(51)
Now we turn to infinitival relatives (53).
(53) a. a man to fix the sink
     b. a book to read carefully
     c. * a man who/that to fix the sink
     d. a man from whom to buy tickets
     e. * a man who to buy tickets from
     f. a man to buy tickets from
     g. a book for you to read
     h. * a man from whom for you to buy tickets
It will be seen that generally only a zero relative is allowed; but a wh-pronoun is possible (in (53d)) if a preposition is pied piped. For plus a subject is possible, but is not compatible with a wh-pronoun. We suggest the rules in (54).
(54) a. S[+R] → H[12], VP[BSE]
     b. S[+R] → H[BAR 2, INF]/N2
     c. S[+R] → P2[+R], H[−SUBJ, INF]/P2
(54a) is for (53a), and (54b) for (53b) and (53g). (54c) applies in the case of (53d), and assigns the structure (55a), and for (53g) we will have (55b) (next page).
(55) a.
In (55b), the V2[INF]/N2 node introduced by (54b) has been instantiated as in (55c).
(55) c. {[+V], [−N], [BAR 2], [COMP FOR], [+SUBJ], [VFORM INF], [SLASH N2]}
That is, this is an infinitival Sbar with an N2-gap. The FCRs and other devices discussed in 7.2 ensure the well-formedness of the local tree it dominates. Alternatively, this category can be instantiated as infinitival VP with N2-gap, as in (53b).7
7. The FSD numbered (40) in 7.3 ensures that, if this V2 is instantiated as a sentential category, it will be not just an S but an Sbar containing for, thus accounting for the ill-formedness of *a book you to read.
(55) b.

11.4 Wh-questions

The analysis of wh-questions presents little that is new compared with relative clauses and subject questions (cf. 8.2). For embedded questions, an S[+Q] will be introduced by a lexical ID-rule, and this will be expanded by the rule (56).
(56) S → [BAR 2], H/[BAR 2]
So structures such as (57) will be assigned by already existing rules.
(57)
(Note the [+Q] that agreement forces on to the slash value of the S/N2; this will be important shortly.) Similarly, the generation of the sentences in (58) is straightforward.
(58) a. I wondered who she was talking to.
     b. I know what you said you were doing.
     c. I wonder what he thinks happened.
     d. I asked which team he supported.
The problem arises really with regard to main clause interrogatives, since we have to ensure the correct occurrence of subject-auxiliary inversion:
(59) a. What is happening?
     b. Who did you talk to?
     c. * Who you talked to?
It might seem easy to produce a tree such as (60).
(60)
This uses the output of the subject-auxiliary inversion metarule (cf. 10.1). But how can we stop (59c) from being generated? (59c) is fine when part of an embedded question but not as a matrix one. We cannot require that the ID-rule responsible for the topmost local tree in (60) specify [+INV] on its head, since questions need not be [+INV]—cf. (57) and (58). What we need to say is that matrix questions require inversion, whereas embedded ones disallow it. Note that this applies to yes-no questions as well:
(61) a. Has he left?
     b. I wonder if he has left
     c. * I wonder if has he left
Before proposing a solution, let us note another problem, viz. the fact that main clauses must contain finite verbs, a point left uncaptured so far:
(62) a. * John be tall
     b. * John writing a book
It seems clear that we need to have available the notion of “main clause”. To this end we shall introduce a Boolean head feature MAIN, and take the initial symbol of the grammar (more precisely, the initial category) as Smain, an abbreviation for (63).
(63) {[+V], [−N], [BAR 2], [+SUBJ], [COMP NIL], [+MAIN]}
The following statements capture the distribution of this feature:
(64) a. FSD: ~[MAIN]
     b. FCR: [+MAIN] ⊃ [FIN, +SUBJ, BAR 2, COMP NIL]
     c. FCR: [+MAIN, SLASH [+Q]] ⊃ [+INV]
[MAIN] cannot occur unless it is a privileged feature; it occurs only on finite Ss; if it occurs on a slashed category with a Q feature, that category must also be [+INV]. To account for the presence of [+MAIN] on a root node, we do unfortunately need extra rules (65).
(65) a. [+MAIN] → [BAR 2], H[−SUBJ]
     b. [+MAIN] → [BAR 2], H/[BAR 2]
Example (59c) would have the tree (66).
(66)
But this is ill-formed, as the category Smain/N2[Q] is not [+INV] and so violates (64c). What, now, about (61c)? The tree will be (67).
(67)
The problem here is the embedded S, where [+INV] has been instantiated on S and V0 in spite of the FSD [−INV]. It might be thought that this is fine, because the values co-vary by dint of the HFC (cf. 7.1). But here we encounter another aspect of the workings of defaults, viz. that lexical heads (i.e. [BAR 0]) are subject to
defaults even if they co-vary with a mother (cf. Gazdar et al. 1985: p. 103). So (67) is, rightly, excluded.8
The use of a feature such as MAIN would not be confined to English. Uszkoreit (1987: p. 60) uses the Boolean feature MC to distinguish main and subordinate clauses and verbs in German. He exploits it in the LP-statements (68).
(68) a. V0[+MC] < X
     b. X < V0[−MC]
Verbs precede their sisters in main clauses, and follow them in subordinates (see 11.6 for an example of the kind of constituency assigned). We should add that the sentences in (69), where the fronted element is not a P2 or N2, will be handled automatically by our present apparatus, with the category of fronted item and gap matching by dint of the CAP.
(69) a. How tall is he?
     b. How fast can he run?
For instance, (69a) receives the structure (70).
(70)
One further point needs to be made about (64c). This was restricted to the claim that main clause questions must be inverted, because it would not be correct to state that all slashed main clauses must show inversion. Exceptions to this are topicalized structures such as (71a) and exclamatives such as (71b).
(71) a. This particular book, everyone should read.
     b. How tall he is!
These contain a fronted element followed by a clause with a gap, and do not show inversion.
8. In fact Gazdar et al. propose that lexical heads can escape defaults in certain circumstances, but this extra proviso appears unnecessary, cf. Warner (1987).
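The FSD and FCRs in (64) lend themselves to the same kind of toy implementation used earlier. This is again my own sketch, with an invented dict encoding; (64c) is the clause that rules out the tree in (66):

    def fcr_64b(cat):
        """[+MAIN] implies [FIN, +SUBJ, BAR 2, COMP NIL]."""
        if not cat.get("MAIN"):
            return True
        return (cat.get("VFORM") == "FIN" and cat.get("SUBJ") is True
                and cat.get("BAR") == 2 and cat.get("COMP") == "NIL")

    def fcr_64c(cat):
        """A [+MAIN] category whose SLASH value is [+Q] must be [+INV]."""
        slash = cat.get("SLASH")
        if cat.get("MAIN") and slash and slash.get("Q"):
            return cat.get("INV", False)
        return True

    smain = {"MAIN": True, "VFORM": "FIN", "SUBJ": True,
             "BAR": 2, "COMP": "NIL"}
    root_66 = dict(smain, SLASH={"N": True, "V": False, "BAR": 2, "Q": True})

    print(fcr_64b(smain))    # True
    print(fcr_64c(root_66))  # False -- Smain/N2[Q] without [+INV], as in (66)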
11.5 Tough-movement

In this section we consider the construction illustrated in (72).
(72) a. John is easy to please.
     b. Linguistics is simple for most people to understand.
     c. He is very pleasant to talk to.
     d. Prolog is tough for most people to master.
Here an adjective is followed by an infinitival Sbar or VP that has an N2-gap inside it. We shall use the transformational term for this construction, “Tough-movement”; but Gazdar et al. (1985: p. 150) use the more prosaic “missing-object constructions”. The ID-rule (73) will account for the examples in (72).
(73) Adj1 → H[42], V2[INF]/N2
(72a, b) will be analysed as shown in (74a, b) (see next page for (74b)).
(74) a.
Note that we have allowed the V2 in the rule to be instantiated as Sbar or VP, thus allowing for both subjectless and subjectful complements of the adjective. However, the fact that we have used the slash category approach in describing tough-movement implies that it is indeed a case of unbounded dependency. It clearly differs from questions and relative clauses in that the gap must be in an infinitival clause (as specified in rule (73)), whereas these other constructions impose no such requirement. The question also arises as to how (un)bounded the dependency is. The slash feature can be percolated to an embedded clause, thus allowing the generation of examples such as (75).
(75) a. John was easy to convince Bill to see.
     b. Kim is easy for us to make Sandy accept.
(74) b.
Speakers have varying judgements on these, and they are certainly not universally accepted. We may conclude that tough-movement is not a straightforward example of an unbounded dependency.9
9. Jones (1983) discusses various problems with the transformational analyses of this construction, showing that it is even more of an anomaly within a transformational framework. Hukari & Levine (1991) propose a slightly different GPSG analysis, involving a new feature GAP, which differs subtly from SLASH and is unrelated to our informal use of GAP above.

11.6 Empty categories

The idea of including empty categories in syntactic representations is by no means confined to GPSG. In transformational grammar, all movement transformations leave behind an empty category, known as a trace (so that the S-structure of a wh-question will contain a trace where it would have an empty string in a GPSG analysis). GB would extend the use of traces to passive constructions, which do not contain a gap in standard GPSG analyses of them. In 4.2, we noted that GB posits an empty NP (termed PRO) in the embedded subject position of examples such as (76), whereas GPSG has nothing corresponding to this.
(76) I’d prefer to stay.
Another possible use of an empty category would be in the subject position of a sentence in a language such as Spanish which (by virtue of having rich verbal
inflection) does not require subject pronouns. For instance, (77a) would in GB receive the structure seen in (77b).
(77) a. Quiero una cerveza.
        I-WANT A BEER
b.
Gunji (1987) uses slashed categories to handle the (comparable but slightly different) situation regarding null subjects and objects in Japanese. He would assign (78b) to the example (78a), which has a null object of unspecific interpretation.10
(78) a. Tarō ga nagutta.
        ‘name’ NOM HIT
        ‘Taro hit someone.’
10. Note that we have adapted (78b) fairly drastically to make it more in keeping with the structures assigned throughout this work; our aim is just to say a little about Gunji’s use of slashed categories.
b.
It will be seen that this use of a slashed category is rather different from that made above. In this connection, Gunji writes:
    I assume that gaps in Japanese are freely generated as lexical items, reflecting a difference between English and Japanese: the latter allows free gaps, while, in the former, gaps only appear in unbounded dependency constructions. (1987: p. 156, n23)
We can also look at some further evidence for the existence of empty categories (null structure) in syntactic representations. Irish has null subjects rather like the Spanish example just cited. One piece of evidence that these have a real presence in
phrase-markers is that they can be coordinated with lexical N2s (79) (example from McCloskey & Hale 1984: p. 501).
(79) Ní bheinn-se nó d’athair mór anseo.
     NOT BE(1sg) OR YOUR GRANDFATHER HERE
     ‘Neither I nor your grandfather would be here.’
We shall look at coordination in the next chapter, but here we can just note that, in order to handle (79) as a straightforward example of coordination, it would be necessary to posit an empty N2 with which d’athair mór ‘your grandfather’ could be coordinated. Since GPSG does not standardly posit gaps in such examples, they might require a revision of the theory.
Jacobson (1982) also argues for the existence of null structure. The paradigm illustrated in (80) has been cited fairly often.
(80) a. I gave Mary the book for Christmas.
     b. * I gave Mary it for Christmas.
     c. I gave it to Mary for Christmas.
     d. I looked up the information in the almanac.
     e. * I looked up it in the almanac.
     f. I looked it up in the almanac.
There appears to be a constraint that an object pronoun cannot be separated from its verb (except perhaps by another pronoun). The filter might be formulated:
(81) *VP[V X A pronoun Y]VP
This means that any tree containing a VP meeting the appropriate pattern is ill-formed. Here, X and Y are variables over sequences of nodes, and can be nothing; A is a variable over nodes, and cannot be nothing. So the pattern excluded is: a VP containing a verb followed by any sequence of nodes followed by some node followed by a pronoun followed by any sequence of nodes. The reason for the complexity is that we want to exclude also structures where more than one node intervenes between verb and pronoun. Now consider some further data involving tough-movement (82).
(82) a. It’s hard to tell those children the stories.
     b. * It’s hard to tell those children them.
     c. ? Those children are hard to tell the stories.
     d. * Those children are hard to tell them.
     e. They are hard to tell the children.
(82b) falls straightforwardly under (81). (82c) is not perfect, but it does seem to be better than (82d). Now, (82d) appears not to be captured by (81), since the pronoun immediately follows the verb. But in a theory (like GPSG or GB) that uses empty categories the structure of (82d) will actually be (82f) (using G for “gap”).
(82) f. Those children are hard to VP[tell G them]VP.
This does fall under (81), with the G (an N2 dominating the empty string) filling the variable A in (81). (82e) is fine, its structure being:
(83) They are hard to VP[tell the children G]VP
Here it is a gap, not a pronoun, that is separated from the verb. So we have some interesting evidence in favour of empty categories occurring in syntactic representations. However, it is not clear that anything along the lines of (81) is a permitted construct in GPSG.
Lastly, we can give an example of a perhaps rather unexpected use of empty categories in German main clauses. Uszkoreit (1987: pp. 79ff.) argues that items in the initial position have a matching empty category in the canonical position. So (84a) would get the structure (84b) (which we adapt in minor ways to make it fit into the categories used here).
(84) a. Das Buch wird Peter lesen.
        THE BOOK WILL PETER READ
        ‘Peter will read the book’
b.
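Returning briefly to Jacobson's filter (81): if a VP is treated as a flat sequence of daughter labels, the pattern *VP[V X A pronoun Y]VP amounts to "a pronoun separated from the verb by at least one node". The following toy check is my own formulation (GPSG itself, as noted above, has no such filter construct, and the labels are invented):

    def violates_81(daughters):
        """True iff the VP matches *VP[V X A pronoun Y]VP: a verb first,
        then at least one node (A, possibly a gap) before a pronoun."""
        if not daughters or daughters[0] != "V":
            return False
        return any(node == "Pro" and i >= 2 for i, node in enumerate(daughters))

    print(violates_81(["V", "N2", "Pro"]))   # True  -- (82b)
    print(violates_81(["V", "Pro"]))         # False -- pronoun adjacent to verb
    print(violates_81(["V", "Gap", "Pro"]))  # True  -- (82d), i.e. structure (82f)

The last call shows the point of the argument: once the gap counts as a real node, (82d) falls under the filter just like (82b).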
11.7 Island constraints

This section contains a brief look at a topic that has been at the centre of much research within transformational grammar, often under the heading of constraints on movement. If “Move α” says (roughly) that anything can move anywhere, one has to account for the limitations on it that can be easily observed (85)–(86).
(85) a. John is believed G to be a genius
     b. * John is believed G is a genius
(86) a. Who do you believe that Jim will marry G?
     b. * Who do you believe the rumour that Jim will marry G?
In each case, we have used G to show the source of the moved item; recall that GPSG would not posit an empty category in (85). The transformational account would be that, in the case of (85b), nothing can be moved out of a tensed clause, and, in the case of (86b), nothing can be moved out of both an N2 and a clause at the same time. These are claimed to be universal restrictions, not confined to English. The claim that nothing can be moved out of a tensed clause can be stated more picturesquely as “Tensed clauses are islands” (hence the frequent term island constraints).
Island constraints have received rather little attention in GPSG (though cf. Gazdar 1982: pp. 174–8). This may well be because of the view that “some island constraints account for data whose problematic status is merely an artefact of the postulation of movement rules” (ibid., p. 174). For instance, nothing special needs to be said in GPSG to block (85b). However, we should at least say something about how such phenomena might be treated in GPSG. Some constraints may be rather difficult to capture, so we shall choose one that is rather straightforward to illustrate things. English allows preposition stranding:
(87) a. To who are you talking?
     b. Who are you talking to?
But many languages do not, e.g. French:
(88) a. A qui parlez-vous?
     b. * Qui parlez-vous à?
The question is how to exclude (88b). Note that (87b) would be handled by having a [SLASH N2] feature that was percolated down through S, VP and P2 till we reached an N2/N2. French certainly allows for [SLASH N2]s, but does not appear to allow them to occur inside P2. So we can exclude (88b) by preventing [SLASH N2] from being percolated from VP to P2. The obvious way to achieve this is by means of an FCR (89) to the effect that a P2 category cannot be [SLASH N2].
(89) FCR: [−N, −V, BAR 2] ⊃ ~[SLASH N2]
This FCR, which accounts neatly for (88b), would occur in the grammar of French but not of English. The idea, at least, is that other island constraints could be stated in a similar way, by using various GPSG mechanisms.

11.8 Further reading

Unbounded dependencies are dealt with in Horrocks (1987: pp. 198–207), Sells (1985: pp. 119–27), Borsley (1991: Ch. 12) and Gazdar et al. (1985: Ch. 7). The last of these deals with relative clauses and interrogatives on pp. 153–8, and with
tough-movement on pp. 150–2. Baker (1989: pp. 78–90) deals in an elementary way with indirect questions and “missing NPs” (and even uses the slash notation); on Tough-movement, see pp. 224–30. It is also worth looking at how unbounded dependencies and gaps are dealt with in Prolog-based definite clause grammars (e.g. Pereira & Shieber 1987: pp. 117ff). For more on the grammar of relative clauses, see McCawley (1988b: Ch. 13), Baker (1989: pp. 234–61). For a brief account of GB work on empty categories, see Cook (1988: pp. 159–66; and pp. 38–42 on null subjects). Island constraints are discussed in McCawley (1988b: Ch. 15) and Borsley (1991: Ch. 13).
12 Co-ordination

This chapter considers co-ordination, and so extends our grammatical coverage to sentences such as those in (1).
(1) a. Jack and Jill went up the hill.
    b. Either Fred or Mary will do it.
    c. Bob, Jack and Geoff were there.
    d. He is tall and handsome.
    e. He drives quickly but very carefully.
    f. It is raining and I am fed up.
Co-ordination is often used in linguistics as a test for constituency, but in fact its linguistic description is rather tricky. We shall see that GPSG is able to handle many of the facts in a maximally general way, by reliance on unspecified ID-rules and feature percolation principles. We shall initially simplify our task by dealing with only two-element co-ordinations, i.e. where there are just two conjuncts, as the items that are co-ordinated are known (so including (1a) but not (1c)). We shall also restrict ourselves to examples with a single co-ordinating conjunction (such as (1a) but not (1b)). This enables us to concentrate on general principles at the expense of English-specific details; we shall extend our description to other cases in 12.3.
12.1 Simple co-ordination

We can begin by deciding on the appropriate structures to be assigned to a simple example:
(2) John bought a book and a pen.
First, it should be clear that the sequence a book and a pen forms an N2, and so do both a book and a pen. Words like and are traditionally referred to as co-ordinating conjunctions, but we do not need to invent a new category, for we just regard them as belonging to a category specified by a value of SUBCAT. So the category of and is just [SUBCAT AND]. We might then assign the structure (3) to the object N2 in (2).
(3)
We have also employed here an attribute CONJ, which indicates the presence of a co-ordinating conjunction; note that the sequence and a pen is also analysed as an N2. Comparable analyses (with appropriate change of category) would be made for the examples in (1). Let us now consider the kinds of rules needed to generate trees like (3). The fundamental generalization that underlies all work on co-ordination is what may be termed the Law of Co-ordination of Likes:
Only items of the same category can be co-ordinated
The examples given so far will illustrate this, but we should add that violations lead to ill-formedness:
(4) a. * John bought a pen or in London.
    b. * John sings very badly and songs.
    c. * He is tall and quickly.
It would seem fairly straightforward to write a set of ID-rules to generate only well-formed examples:
(5) a. N2 → N2, N2[CONJ y]
    b. Adj2 → Adj2, Adj2[CONJ y]
    c. P2 → P2, P2[CONJ y]
But (of course) this is quite contrary to the general spirit of GPSG, since we are writing a lot of specific rules (and far more than those in (5) would be needed). Also these rules make the matching of category between mother and daughters a pure coincidence: we could equally well be proposing rules like (6).
(6) N2 → Adj2, P2[CONJ y]
It seems clear that we should seek some way of generalizing the rules in (5), though of course still excluding examples such as (4). What is needed is some way of ensuring that in a local tree for a co-ordinate structure the mother and daughters share the same category (e.g. all are N2). We already have a mechanism for achieving such matching, viz. the Head Feature Convention; here is the version we gave in 5.1:
The Head Feature Convention (HFC)
A node and its head must share the same head features, where this is possible
Let us see if we can revise the HFC so that it applies in co-ordinate structures. First, we can note that there is evidence that all the conjuncts of a node are its heads. For instance, conjuncts are all morphosyntactic loci (see 5.3); if a VP is finite and it is a co-ordination, both conjuncts must be finite:
(7) a. I reject the suggestion that he was guilty and was a criminal mastermind.
    b. * I reject the suggestion that he was guilty and be a criminal mastermind.
If a verb requires a specific preposition in a complement, then, in the case of a co-ordination of P2s, that preposition must be the same in both conjuncts:
(8) a. John relies on his luck and on his common sense.
    b. * John relies on his luck and in his common sense.
So suppose we just say that all conjuncts are heads. This may seem a bit surprising, but co-ordination is in some ways sui generis, so it is reasonable enough to revise our usual ways of thinking a little. We must also make a very small revision to the HFC:
The Head Feature Convention (HFC) (revised version)
A node and its heads must share the same head features, where this is possible
So let us replace the rules in (5) with a very general ID-rule, (9a); we also need (9b).
(9) a. X → H, H[CONJ y]
    b. X[CONJ α] → [SUBCAT α], H
The X in (9) is not itself a category name, but just refers to a minimally specified category. We have previously used [BAR 2] in ID-rules to license a feature bundle in
a local tree that is an extension of [BAR 2]. So (9b) licenses any extension of CONJ on the mother, and (9a) licenses anything at all on the mother (subject of course to FCRs, HFC, etc.). The α in (9b) is a variable, to ensure matching of the values of CONJ and SUBCAT; its worth will become clearer in 12.3.
In 7.2, we saw how use of explicit variables in rules could be avoided. In fact, Warner (1989) is, in a parallel way, able to remove the variables from (9b) by distinguishing a Boolean feature CONJUNCT (which states whether a constituent is a conjunct or not) and a feature CONJFORM (with values AND, etc.). The conjunction is given the feature [SUBCAT 50], and can now be considered as a head, like its sister:
(10) X[+CONJUNCT, CONJFORM] → H[SUBCAT 50], H[−CONJUNCT]
This may be a preferable analysis, as it makes the variables unnecessary and handles feature percolation in a more consistent way via the HFC.1 But here we shall stick to the analysis seen in (9), as this is the one assumed in most GPSG work on co-ordination.
Rule (9a) licenses a local tree such as (11),
(11) N2
     N2  N2[CONJ AND]
while (9b) licenses (among others) (12).
(12) N2[CONJ AND]
     [SUBCAT AND]  N2
It is the HFC that is responsible for the matching of N2 here. CONJ is not a head feature, and so escapes the effects of the HFC. As another example, consider the structure for (13a) (see (8a)), i.e. (13b).
(13) a. on his luck and on common sense
b.
1. It may well be that explicit use of variables can be disallowed in grammatical rules, then. This would be a welcome result.
Rely requires a complement that is P2[ON], and the tree in (13b) can therefore occur with rely. (14) (see (8b)) is ill-formed because the HFC would be violated.
(14)
We can give one last example to show the power of the GPSG approach:
(15) a. Jim has written ten novels and published none.
b.
Here the HFC percolates the value of VFORM (as well as the VP category) between mother and conjuncts. We have written two very simple ID-rules for co-ordination, leaving virtually all the work to be done by the HFC, which ensures matching of features among conjuncts and between conjuncts and mother. The Law of Co-ordination of Likes follows straightforwardly, with no special stipulations needed. We need to make only one further addition at this point. To ensure that CONJ appears only where required, we propose (16).
(16) FSD: ~[CONJ]
The default is for a category not to have a value for CONJ, which appears only when required by a rule, thus preventing examples like (17).
(17)
a. * John bought and a book.
b. * John and and Bill left.
One further consequence of our analysis is that we must slightly amend the constraint on ID-rules (proposed in 5.2):
Every ID-rule introduces a head.
We can just alter this to:
Every ID-rule introduces at least one head.
As we saw in 7.2, there are independent reasons for allowing multiple heads.

12.2 Co-ordination of unlikes

However, the Law of Co-ordination of Likes is not correct if it is interpreted to mean that conjuncts must be identical in absolutely all features, as can be seen in (18) and (19).
(18) a. I bought an apple and some pears.
     b. They seem to like you and me.
     c. He lived in Manchester once and now lives in Salford.
(19) a. He is cowardly and a bully.
     b. To write a novel and for the world to give it critical acclaim is John’s dream.
     c. He works quickly and in a very intensive manner.
In (18a) the conjuncts differ in the values for PLU, in (18b) for PER, and in (18c) for PAST. The examples in (19) are even more problematic, for they involve co-ordination of, respectively, Adj2 with N2, VP with Sbar, and Adv2 with P2. We shall first develop an account of (18) and then extend it to (19), the latter data having been recalcitrant to most previous approaches to co-ordination.
Concentrating on (18a), we can see that the HFC as stated in the previous section will not apply properly to the co-ordinate structure here, since the conjuncts have different values for PLU. The problem also arises as to what value (if any) there should be for PLU on the mother N2 node. The GPSG solution to these problems is as follows. Where a node has more than one head daughter, we take the intersection of the head features on these daughters, i.e. the set of features that appear on all the daughters. If we take the co-ordinate structure in (18a), an apple has the feature bundle in (20a), some pears has that in (20b), and the intersection of their head features is shown in (20c):
(20) a. {[+N], [−V], [BAR 2], [ACC], [−PLU]}
     b. {[+N], [−V], [BAR 2], [ACC], [+PLU]}
     c. {[+N], [−V], [BAR 2], [ACC]}
It is this intersection that is required to be on the mother node. Thus the structure for (18a) would be (ignoring CASE) (21).
(21)
And (20c) is perfectly OK on the sister of the V0 node in (21), because bought requires an object N2. The intention is that, when conjuncts agree in head features, these appear on the mother; when they disagree, the mother has no value for the attribute in question. Unfortunately, it is difficult to offer a simple rewording of the HFC that takes this new possibility into account, so we present the following version (from Sag et al. 1985: p. 131):
The Head Feature Convention (HFC) (final version)
(i) The head feature specifications on each head are an extension of the head features of the category created by taking the intersection of the mother with the free feature specifications on that head.
(ii) The head feature specifications on the mother are an extension of the head features of the category created by taking the intersection of the heads with the free feature specifications on the mother.
Recall from 6.2 that the free feature specifications on a category are those that can legitimately be instantiated on it; the HFC in effect applies only to free features, this being the meaning of our earlier qualification to it, imposing identity of features “where this is possible”. Let us try to explain the parts of the revised HFC in turn. Part (i) states that the (head) features on a head daughter in a tree are an extension of the intersection of (a) the features on the mother and (b) the free features on that head. This might seem circular, but the effect is to state that what occurs on the
head is a legal extension of the intersection of what is inherited on the head and what occurs on the mother. Part (ii) states that the (head) features on the mother are an extension of the intersection of what can legally be on the mother and what occurs on the heads. These two principles are not to be taken as ways of constructing a category but as ways of checking whether a local tree is well-formed. We should add that the revised HFC applies just the same if there is a single head, for the intersection is just equivalent to that single head category!
Let us try to illustrate this by means of a closer look at tree (21) and, in particular, the local tree headed by the N2 node that is a sister of V0. In feature terms, this will be (22) (ignoring features like CASE).
(22) {[+N], [−V], [BAR 2]}
     {[+N], [−V], [BAR 2], [−PLU]}
     {[+N], [−V], [BAR 2], [+PLU], [CONJ AND]}
Both heads here are legal extensions of the mother, while the mother represents the intersection of the heads. Now we can consider (19a), where Adj2 and N2 are co-ordinated. The features of these two categories are given in (23).
(23) a. {[+N], [+V], [BAR 2], [−ADV], [+PRD]}   (=Adj2)
     b. {[+N], [−V], [BAR 2], [−PLU], [+PRD]}   (=N2)
By the new version of the HFC, the mother should bear the intersection of these features, viz. (24).
(24) {[+N], [BAR 2], [+PRD]}
This satisfies the conditions for the complement of be, which (see 10.2) has to be (an extension of) {[BAR 2], [+PRD]}. So the structure of (19a) will be (25).
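The intersection step at the heart of this revised HFC is easy to state concretely. Here is a minimal sketch in the same toy dict encoding as earlier (my own illustration, not the book's formalism): the mother of a multi-headed local tree bears exactly the feature specifications common to all its heads.

    def head_intersection(*heads):
        """Feature specifications shared by every head daughter."""
        common = set.intersection(*(set(h.items()) for h in heads))
        return dict(common)

    adj2 = {"N": True, "V": True, "BAR": 2, "ADV": False, "PRD": True}   # (23a)
    n2   = {"N": True, "V": False, "BAR": 2, "PLU": False, "PRD": True}  # (23b)

    print(head_intersection(adj2, n2))
    # {'N': True, 'BAR': 2, 'PRD': True} (in some order) -- cf. (24)

Intersecting the items rather than just the attribute names matters: the conjuncts in (23) both carry V, but with opposite values, so V drops out of the mother, exactly as in (24).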
(25)
Thus so-called “co-ordination of unlikes” is handled in the same way as straightforward co-ordination, by dint of recognising that the use of syntactic features enables a very flexible approach to likeness of category. Just as singular and plural N2s are distinct yet share the property of being N2s, so predicative Adj2s and N2s are distinct yet share the property of being predicative [BAR 2] categories. We correctly predict that all kinds of predicative [BAR 2] categories can be co-ordinated after be:
(26) a. He is guilty and on the run.
     b. He was recommended to us and very well qualified.
     c. I am optimistic and expecting to win.
The verb become occurs only before predicative N2s and Adj2s (see 4.4):
(27) a. He has become rich.
     b. He has become a millionaire.
     c. * He has become given a present.
     d. * He has become on the run.
The lexical ID-rule (28) was posited.
(28) VP → H[n], [+N, BAR 2, +PRD]
So we predict that only [+N] categories can be co-ordinated after become (29).
(29) a. He has become a millionaire and very miserable.
     b. * He has become rich and on the run.
It must be noted that this approach does not allow for (say) N2 and Adj2 to be co-ordinated all over the place (30).
(30) * He bought a book and very happy.
This is because the complement of bought must be N2, yet the intersection of N2 and Adj2 is only {[BAR 2], [+N]}, not [−V] as well. We can now turn to our earlier example (19b), repeated here as (31).
(31) To write a novel and for the world to give it critical acclaim is John’s dream.
We first gave this example in 4.2, where we saw that it illustrated the similarity of VP and S (or Sbar), and formed part of the evidence for the claim made by some linguists that infinitival VPs have an empty subject and are in fact of category S. On this hypothesis, (31) would not involve co-ordination of unlikes. GPSG does not posit an empty subject in (31): instead the conjuncts are V2[+SUBJ] and V2[−SUBJ], so the mother node of the co-ordination is (32), and the example is accounted for quite straightforwardly.
(32) {[+V], [−N], [BAR 2]}
Lastly, we can consider (19c), repeated here as (33).
(33) He works quickly and in a very intensive manner.
Here we have Adv2 and P2 co-ordinated. But, as we saw in 7.6, the ID-rule introducing adverbials just requires {[BAR 2], [+ADV]} on the category in question, and this requirement will be met in the tree for (33), part of which will be (34).
(34)
There is, however, a slight problem here. Sag et al. (1985: p. 143) propose to introduce features such as MANNER, LOC and TEMP on adverbials. So (33) would involve co-ordination of an Adv2[+MANNER] with a P2[+MANNER]. This is because, in most examples of adverbial co-ordination, the adverbials are of the same semantic type. In (33), for instance, both are manner-adverbials, and cf. also (35).
(35) They wanted to leave tomorrow or on Tuesday.
Many examples contravening this rule are deviant:
(36) a. * He works quickly and in the City.
     b. * He left on Tuesday and in a hurry.
However, it is possible to find examples of semantically unlike adverbials being co-ordinated that do not sound so bad:
(37) a. I always see him at the same time and in the same place.
     b. They want to leave tomorrow or by plane.
     c. Joe left on Tuesday but in a car.
So we shall keep the simpler analysis with no use of MANNER, etc. features. The essential insight of the GPSG account of co-ordination is the following (see Sag et al. 1985: p. 119):
If an ID-rule introduces a category α, then any conjunct of α is a superset of α.
For instance, the conjuncts may have extra features such as PLU or ±V, but they must at least have the features of the mother.
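This insight, too, reduces to a simple check in the toy encoding used throughout these sketches (mine, not the book's): a conjunct is admissible just in case it extends the category the ID-rule introduces.

    def is_extension(conjunct, introduced):
        """True iff the conjunct carries every introduced feature."""
        return all(conjunct.get(a) == v for a, v in introduced.items())

    introduced = {"BAR": 2, "PRD": True}   # what the rule for "be" requires
    adj2 = {"N": True, "V": True, "BAR": 2, "PRD": True}
    p2   = {"N": False, "V": False, "BAR": 2, "PRD": True}

    print(all(is_extension(c, introduced) for c in (adj2, p2)))  # True -- (26a)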
12.3 Co-ordination schemata

In 12.1 and 12.2 we concentrated on general principles governing co-ordination. We now turn to language-specific properties concerning the distribution of co-ordinating conjunctions, in order to show that these too can be described neatly within GPSG. We shall be led to make some slight alterations to the rules for co-ordination given earlier. We can deal first with co-ordinations containing precisely two conjuncts. The examples in (38) show the possible distribution of co-ordinating conjunctions here.2
(38) a. both London and Paris
     b. either lasagne or pizza
     c. wealthy but unhappy
     d. * both John or Mary
     e. * either Bob and Ben
     f. * both tall but handsome
The paradigm in (38) shows that there are dependencies between the first and second conjunctions: both goes with and, either with or, and zero with but. As above, we can take the values of CONJ to be the names of conjunctions plus NIL (for the zero conjunction seen in the first conjunct in (38c)). The rules we need are, first (39).
(39) a. X[CONJ NIL] → H
     b. X[CONJ α] → [SUBCAT α], H   where α ∈ {AND, BOTH, BUT, EITHER, NEITHER, NOR, OR}
These ensure that a string like (38c) will be analysed as in (40).
(40)
(39b) not only uses a variable, but also includes an explicit listing of what the possible values of that variable can be (because we do not want here to have the value NIL). It should now be noted that some of the examples we gave in 12.1 were simplified in that they omitted the node with [CONJ NIL] as one of its features.
2. Examples like (i) are being ignored for now, as these are not limited to just two conjuncts, cf. (ii).
(i) Margaret and Dennis
(ii) Margaret, Dennis and John
To generate the topmost local tree in (40) we can use not a single rule but a rule schema, which abbreviates a set of rules:
(41) X → H[CONJ α0], H[CONJ α1]   where <α0, α1> ∈ {<BOTH, AND>, <EITHER, OR>, <NIL, BUT>}
This means that, for instance, a [CONJ BOTH] can be accompanied by a [CONJ AND], but not by a [CONJ OR]. In (40), α0 is NIL, and α1 is but. (38d–f) will not be generated. (41) is an abbreviation for the three rules in (42).
(42) a. X → H[CONJ BOTH], H[CONJ AND]
     b. X → H[CONJ EITHER], H[CONJ OR]
     c. X → H[CONJ NIL], H[CONJ BUT]
Where the first conjunct has an explicit conjunction, there will be a tree such as (43), for (38a).
(43)
We now turn to cases where two or more conjuncts can occur, though we shall not bother to give examples of more than three conjuncts. Once again, there are dependencies among co-ordinating conjunctions:
(44) a. gin and rum and vodka
     b. gin, rum and vodka
     c. neither wealthy nor handsome nor pleasant
     d. laughed, cried or kept silent
     e. laughed or cried or kept silent
We assume that these are to be given a flat analysis, with all the conjuncts being daughters of the topmost node, e.g. (45) (next page) for (44b). For these we need a separate rule schema, known as the Iterating Co-ordination Schema:
(46) X → H[CONJ α0], H[CONJ α1]+   where <α0, α1> ∈ {<AND, NIL>, <NIL, AND>, <NEITHER, NOR>, <OR, NIL>, <NIL, OR>}
This works rather like (41), except that the Kleene plus in (46) means that there can be any number of instances of H[CONJ α1] but there must be at least one.
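Schemata like (41) and (46) are finite recipes for generating rules, so their expansion can be written down directly. A rough sketch in the encoding used earlier (my own; the pair lists mirror those given in the schemata, and the flag marking the iterated daughter is an invention for the example):

    PAIRS_41 = [("BOTH", "AND"), ("EITHER", "OR"), ("NIL", "BUT")]

    def expand_schema(pairs, iterating=False):
        """One rule per <a0, a1> pair; '+' marks a Kleene-plus daughter."""
        suffix = "+" if iterating else ""
        return [f"X -> H[CONJ {a0}], H[CONJ {a1}]{suffix}" for a0, a1 in pairs]

    for rule in expand_schema(PAIRS_41):
        print(rule)   # the three rules in (42)

Calling expand_schema with the pairs of (46) and iterating=True would likewise spell out the Iterating Co-ordination Schema rule by rule.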
(45) There is some idiolectal variation about permissible combinations of co-ordinating conjunctions, and for many speakers either with or should be included in (46) not (42) (see Sag et al. 1985: p. 139). The main aspect of co-ordination in English not covered so far is that of ordering. The fact that a coordinating conjunction precedes its sister is simply due to our much-used LP-statement (47).3 (47) [SUBCAT] ~[SUBCAT] But we need extra statements to ensure that (45) follows the order shown, rather than (48) for example. (48) * and vodka, gin, rum The order of conjuncts is captured by an LP-statement schema (49). (49) [CONJ 0] [CONJ 1] where 0 {BOTH, NIL, EITHER, NEITHER} and 1 {AND, BUT, NOR, OR} Any category with a CONJ value in the set 0 precedes any category with a CONJ value in the set 1; (49) abbreviates a whole list of LP-statements, such as (50). (50) a. [CONJ BOTH ] [CONJ AND] b. [CONJ NIL] [CONJ BUT] etc. The variables in (46) and (49) are of course completely independent of each other. Also (46) should not be misunderstood as specifying linear order. For instance, (46) is responsible for both of (51a, b). (51) a. John, Bob and Jack b. John and Bob and Jack 3. Note, however, that (47) has no effect when the conjuncts are themselves lexical categories and so have a SUBCAT feature, as in (i) (Warner 1989: p. 203n): (i) John likes and admires Margaret. Some extra ordering statement is needed here, though this obeys the standard ordering of conjunction and conjunct.
In (51a), α0 in (46) is and while α1 is NIL; in (51b), α0 is NIL and α1 is and. (49) requires [CONJ NIL] to precede [CONJ AND]. We have not yet mentioned examples like (52).
(52) a. John and Mary or Joan
     b. either fish and chips or bread and butter
In fact these are handled already by the rules we have, since they are not extra examples of flat co-ordination; rather they show conjuncts that are themselves co-ordinations. For instance, the most likely reading of (52a) (note that it is ambiguous) would be analysed as (53).
(53)
Let us close this section by looking briefly at some aspects of co-ordination in other languages. The realizations of co-ordinating conjunctions vary greatly from one language to another. For instance, many languages use different conjunctions to conjoin different categories (such as one to conjoin Ss and another to conjoin other categories)—see Payne (1985). A simple way to express these constraints would be by means of FCRs; for example, Fijian kei is used only to conjoin N2s, a constraint expressible as in (54).
(54) FCR: [CONJ KEI] ⊃ N2
A number of languages use repeated conjunctions, e.g. French:
(55) Je veux ou des pommes ou des poires.
     ‘I want either apples or pears’
This might be handled by adapting (46) so that both values of CONJ would be the same. Lastly, we look in a little more detail at one particular language, basing ourselves on Huang’s (1986) analysis of N2 co-ordination in Chinese (other categories in
Chinese require other conjunctions, cf. above). Conjuncts with and without an overt co-ordinating conjunction can occur in almost any order, except that the first conjunct must have no overt marker (hé means ‘and’, the other words are people’s names).
(56) a. Lăo-Wáng, Lăo-Lĭ, Lăo-Zhāng
     b. Lăo-Wáng, Lăo-Lĭ, hé Lăo-Zhāng
     c. Lăo-Wáng, hé Lăo-Lĭ, Lăo-Zhāng
     d. Lăo-Wáng, hé Lăo-Lĭ, hé Lăo-Zhāng
     ‘Lao-Wang, Lao-Li and Lao-Zhang’
Huang proposes the co-ordination schema in (57a), which is very similar to (46) above, and (57b, c).
(57) a. X → H[CONJ α0], H[CONJ α1]+   where α0 ∈ {NIL}, and α1 ∈ {HÉ, …}
     b. N2[CONJ α] → [SUBCAT α], H   where α ∈ {HÉ, …}
     c. N2[CONJ α] → H   where α ∈ {HÉ, …}
(57b, c) may be compared with our rules (39a, b) for English. (57c) allows for the fact (which holds in Chinese but not English) that any conjunct with an overt conjunction can be replaced by a bare conjunct. LP-statements are needed to ensure that initial conjuncts are [CONJ NIL], i.e. [CONJ NIL] precedes any item with another value for CONJ. (56c), for instance, will now be analysed as (58).
(58)

12.4 Co-ordination and unbounded dependency

In Chapter 11, we examined the GPSG analysis of various kinds of unbounded dependency, including wh-questions. We can now see how this phenomenon interacts with co-ordination, and how GPSG provides an elegant account of the restrictions observed. Consider the examples in (59).
(59) a. * Which food do you drink beer and eat?
     b. * What does John sing and play madrigals?
These are clearly bizarre, even though declaratives built on a similar pattern are fine (60), as are wh-questions not involving a co-ordination (61).
(60) a. You drink beer and eat burgers.
     b. John sings anthems and plays madrigals.
(61) a. Which food do you eat?
     b. What does John sing?
Clearly there is something to be said about the way in which co-ordination and wh-questions (and, though we shall not illustrate this, all unbounded dependency constructions) interact. We can gain some idea of what is going on here if we show simplified structures for these examples, with brackets around conjuncts and a G to indicate the location of a gap (62).
(62) a. * which food do you [drink beer] and [eat G]
     b. * what does John [sing G] and [play madrigals]
These examples contain a gap inside a co-ordinate construction; let us suppose that this is precisely what is wrong with them. Such examples have been much studied within the transformational tradition, and have generally been ascribed to the Co-ordinate Structure Constraint (a kind of island constraint—see 11.7), which states that no element may be moved out of a co-ordination (recall that the wh-phrase is claimed to be moved by a transformation to the front of the sentence). No such constraint is statable in GPSG, but suppose we look more closely at the categories of the conjuncts here (taking (62a) as our example). The sequence drink beer is a VP, while eat G is a VP/N2, i.e. a slashed category. So here unlike categories are being co-ordinated, and the ill-formedness of (59) may just be a special case of the Law of Co-ordination of Likes (see 12.2). But, as we have also seen, conjuncts do not need to be absolutely identical in feature specification, so we need to know whether co-ordination of VP and VP/N2 is permitted. A structure for (62a) would need to include the local tree (63).
(63) VP/N2
     VP[CONJ NIL]  VP[CONJ AND]/N2
But (63) is ill-formed, for the [SLASH N2] on the mother is not found on one of the conjuncts, thus violating the HFC and FFP. Since SLASH is both a head feature and a foot feature, all categories in a co-ordinate structure must have identical specifications for SLASH. This does not obtain in (63), hence examples like (59) are ungrammatical; this is indeed a natural consequence of the GPSG analysis of co-ordination, and no special statement needs to be added to capture it.
It is not, however, true to say that no co-ordinate construction can contain a gap, as can be seen in (64a, b). A simplified structure for (64b) would be (64c).

(64) a. Who does Jim love but Bob hate?
b. Which teacher do students listen to and learn from?
c. Which teacher do students [listen to G] and [learn from G]

So the examples in (64) contain a gap in each conjunct. But this is fine, since the conjuncts in (64c) are both VP/N2, and the HFC and FFP are obeyed. This is often called the across the board phenomenon: in movement terms, an item can be moved out of a co-ordination only if it is moved out of each conjunct.

Our final problem here is concerned with examples such as (65).

(65) What did she go and buy?

This looks awkward, since it appears to involve co-ordination of go (a VP) and buy G (a VP/N2). But there are plenty of reasons to think that this is not an ordinary co-ordination. For instance, only and is possible, and also the order is quite fixed (66).

(66) a. * What did she go but buy?
b. * What did she both go and buy?
c. * What did she buy and go?

It is possible only following go (and maybe a few other verbs like come) (67).

(67) * What did she depart and buy?

In fact, we can write a separate lexical ID-rule for this use of go (68).

(68) VP → H[48], H[CONJ AND]

In addition to ordinary examples like (69), this sanctions the local tree (70).

(69) She went and bought a fur coat.

(70)           VP/N2
     V0[48]   VP[CONJ AND]/N2

(70) is fine, since the FCR that excludes SUBCAT and SLASH occurring together prevents [SLASH N2] from being instantiated on the V0, and both HFC and FFP are respected.
12.5 Some remaining problems

There is no doubt that the analysis of co-ordination is one of the strengths of GPSG: it has wide and accurate coverage, and very little needs to be stipulated, as general principles do much of the work. At the same time, a number of problems do remain, and we devote the final section of this chapter to a quick discussion of some of them.

First, there are some circumstances in which N2s and Sbars can be co-ordinated, e.g. (71).

(71) a. You can depend on my assistant and that he will be on time.
b. We talked about John and that he had worked for MI5.

Such co-ordinations are possible only after a preposition, even though prepositions cannot usually be followed by Sbar:

(72) * You can depend on that my assistant will be on time.

Sag et al. (1985: pp. 165ff.) propose an account of examples like (71) that involves a rule along the lines of (73) (simplified here).

(73) N2[NFORM Sbar] → Sbar

This involves taking NFORM to be a category-valued feature (unlike what was assumed in 9.3 above), and is not compatible in detail with the use made of NFORM in Gazdar et al. (1985) or here, though perhaps the two could be reconciled.

Examples such as (71) are also discussed by Goodall (1987), in his GB approach to co-ordination. His proposal involves a concept of “union of phrase-markers”, and nodes in a tree that neither precede nor dominate each other (fortunately it is too complex to explain here). He does offer some account of the ‘respectively’ meaning of (74) (i.e. where Jane saw Bill and Alice saw Tom).

(74) Jane and Alice saw Bill and Tom.

However, it is not clear that the fact that GPSG offers no special syntactic account of this reading is a shortcoming.

Secondly, there is the problem of the number and person of co-ordinated N2s. Concentrating on number alone, the problem is how to ensure that two singular N2s co-ordinated by and form a plural N2 for agreement purposes:

(75) John and Bill drink/*drinks Guinness.

There is a solution to this in Sag et al. (1985: pp. 154–5), but here we shall sketch an answer adapted from Warner (1988). He proposes that a co-ordination with and is by default plural (76).

(76) FSD: [CONJ AND] ⊃ [+PLU]

However, he is forced to make various other changes, e.g. that while [−PLU] is subject to the HFC, [+PLU] is not (this is to allow a [+PLU] co-ordination to contain [−PLU] conjuncts). So the definition of head features has to apply to attribute-value
pairs, not just attributes. This may turn out to be justified, but it is clearly a complication.4

4. The problem mentioned in this paragraph is part of a wider one of feature resolution when items with conflicting features are co-ordinated and agreement rules apply. In French, for instance, an adjective agreeing with a co-ordination of a masculine and a feminine noun is itself masculine. For a survey of comparable problems, see Corbett (1983).

Thirdly, there is the thorny problem of the various kinds of ellipsis that may occur in co-ordinations:

(77) a. John prefers beer, and Mark Perrier.
b. I like cooked fish, but he prefers raw.

Let us concentrate on (77a), the phenomenon known as Gapping, where (intuitively) a verb identical to that in the first conjunct has been omitted from the second. Sag et al. (1985: pp. 161–2) propose the ID-rule (78).

(78) V2[CONJ α] → [SUBCAT α], [BAR 2]+ where α ∈ {AND, BUT, NOR, OR}

This will generate a sequence of [BAR 2] categories after the conjunction (cf. the two N2s after and in (77a)). They further propose:

The interpretation of an elliptical construction is obtained by uniformly substituting its immediate constituents into some immediately preceding structure, and computing the interpretation of the results. (idem.)

So in (77a), Mark is interpreted as subject of prefers (like John), and Perrier as object (like beer). This at least represents a brave attempt to handle a very tricky range of constructions.

Fourthly, we can cite the examples in (79) (some are from Levi 1978: p. 22).

(79) a. a corporate and divorce lawyer
b. solar and gas heating
c. domestic and farm animals
d. electrical and water services
e. academic and research institutions
f. architectural and software considerations

These show co-ordination of an adjective (e.g. solar) with a noun (e.g. gas); this construction seems to be confined to adjectives that are used only attributively (80).

(80) * This heating is solar.

Levi’s solution is to claim that adjectives like solar are derived in the syntax (i.e. not just morphologically) from nouns, a solution unavailable in GPSG. These examples are also problematic from the viewpoint of the morphology-syntax division: it would normally be claimed that gas heating is a compound noun, but that solar heating is a syntactic adjective+noun construct. So this co-ordination appears to cross
the morphology-syntax boundary. Radford (1988: pp. 208–9) proposes rule (81a) (slightly adapted here), which implies the structure in (81b).

(81) a. N1 → [BAR 2, +N], H
b. [tree diagram not reproduced]

This enables these examples to be handled as straightforward instances of co-ordination. However, as we argued in 3.2, it is wrong to regard expressions like gas heating as syntactic and the modifier gas as forming an NP. Instead, we might suggest that they are instances of co-ordination inside a compound noun (82).
(82) [tree diagram not reproduced]

Notice that this implies that some adjective-noun sequences must be compounds; however, we may well be able to confine this possibility to the kinds of denominal adjectives studied by Levi. They are often known as relational adjectives (for one recent account of their behaviour, see Beard 1991).

12.6 Further reading

The GPSG analysis of co-ordination is discussed in Sells (1985: pp. 127–33), Gazdar et al. (1985: Ch. 9) and, in great detail, in Sag et al. (1985). A general discussion of co-ordination can be found in McCawley (1988b: Ch. 9) and Greenbaum & Quirk (1990: Ch. 13).
13 Current issues

The twelve previous chapters have presented the theory of GPSG as developed in Gazdar et al. (1985). We have occasionally pointed out minor problems with that theory, but have not looked at major revisions of it. But, of course, research on GPSG—and on grammatical theory in general—has continued since 1985, and we have referred when appropriate to works such as Hukari & Levine (1991) and Warner (1989). This chapter looks at some further proposals that have been made since the mid-1980s, including developments in other (comparable) formalisms.

13.1 Category Co-occurrence Restrictions

Kilbury (1986, 1987) has proposed a new grammatical mechanism, that of Category Co-occurrence Restrictions (CCRs). These are similar to FCRs, but differ in that they constrain not the occurrence of features on a single category but relations between different categories in a local tree. They enable one to make statements such as, ‘Any local tree with X as its mother must have Y as a daughter’, or ‘No local tree with W as a daughter also has Z as a daughter’. ID-rules are replaced by CCRs and a set of branches, i.e. pairs of permissible mothers and daughters. For instance (1) states that a local tree with VP as mother may have an N2 daughter:

(1) <VP, N2>

In (2) we see an example of a CCR.
(2) S |[N2 VP]|

This states that any local tree with S as mother obligatorily has N2 and VP as daughters. If we also allow for the branch <S, V0>, we generate ordinary N2–VP trees as well as those in inverted clauses (V0–N2–VP). For instance, the set of ID-rules in (3) could be replaced by the set of branches and CCRs in (4) and (5).

(3) a. S → N2, VP
b. S → V0, N2, VP
c. N2 → SpecN, N1
d. N2 → N2, N1
e. VP → V0, N2
f. VP → V0

(4) a. <S, V0>
b. <N2, SpecN>
c. <N2, N2>
d. <VP, N2>

(5) a. S |[N2 VP]|
b. N2 |[N1]|
c. VP |[V0]|

The intention is that (4) and (5) capture generalizations that (3) does not, e.g. the fact that an S always contains N2 and VP as daughters (the similarity of (3a, b) is purely coincidental). It is further suggested that CCRs may help to render metarules superfluous, but we shall leave this until we have looked at other proposals along similar lines.

CCRs are in some respects a new kind of statement, but in an important sense they are still the same kind of linguistic condition as the better-established notions of GPSG. This is because they are conditions on local trees. It is logically possible to impose non-local conditions on trees, that is to say that a tree is well-formed only if two nodes that are not in the same local tree are related in some particular way. It is then possible for a tree to be ill-formed even if every local tree it contains is well-formed. For a discussion of this issue, see Borsley (1991: pp. 41–3).
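To see the division of labour between branches and CCRs at work, here is a small Python sketch using the branches in (4) and the CCRs in (5). The encoding of branches, CCRs and local trees is invented for illustration; only the underlying idea is Kilbury's.

```python
# A toy check of a local tree against branches and CCRs.

BRANCHES = {("S", "V0"), ("N2", "SpecN"), ("N2", "N2"), ("VP", "N2")}
CCRS = {"S": {"N2", "VP"}, "N2": {"N1"}, "VP": {"V0"}}  # obligatory daughters

def admissible(mother, daughters):
    required = CCRS.get(mother, set())
    # each daughter needs a licence, from a branch or a CCR requirement,
    licensed = all((mother, d) in BRANCHES or d in required
                   for d in daughters)
    # and every obligatory daughter must actually be present
    return licensed and required <= set(daughters)

print(admissible("S", ["N2", "VP"]))        # True: ordinary clause
print(admissible("S", ["V0", "N2", "VP"]))  # True: inverted clause
print(admissible("S", ["V0", "N2"]))        # False: obligatory VP missing
```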
13.2 Passives revisited

In 10.2 we discussed the standard GPSG account of passives, which employs a metarule to map lexical rules for active VPs into those for passive VPs. Pulman (1987) and Zwicky (1987a) have independently made rather similar proposals for an improved analysis making no use of a metarule.

We shall start with the analysis of Pulman (1987), translating it into a more GPSG-like version. In addition to the [VFORM PAS] feature, there is a Boolean feature PASSIVE, subject to (6).

(6) FSD: [−PASSIVE]

Passive VPs will be introduced by (7).

(7) VP → H[n], VP[PAS, +PASSIVE]

Alternatively, we could keep our general rule for introducing be and use (8).

(8) FCR: VP[PAS] ⊃ [+PASSIVE]

No special lexical ID-rules for passives are needed. For transitive verbs, we will have (9).

(9) VP[α PASSIVE] → H[2], N2[α PASSIVE]

Unless the mother VP in (9) has been introduced by a rule like (7), it will default to [−PASSIVE], and (9) will build an ordinary transitive VP. But if the VP is [+PASSIVE], so will the N2 introduced by (9) be. A passive N2 may seem an odd notion; it will in fact just be empty:

(10) N2[+PASSIVE] → e

So a simple passive will be analysed as in (11).

(11) [tree diagram not reproduced]
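The interaction of the default in (6) with rules (7), (9) and (10) can be simulated in a few lines of Python. The sketch below is a toy rendering only (GPSG feature instantiation is of course declarative, not procedural, and the representation is invented here):

```python
# A toy simulation: the FSD in (6) supplies [-PASSIVE] unless rule (7)
# has instantiated [+PASSIVE], and rule (9) copies the mother VP's
# value onto its object N2 (the alpha-agreement).

def expand_transitive_vp(vp):
    vp.setdefault("PASSIVE", "-")            # FSD (6): default value
    obj = {"CAT": "N2",
           "PASSIVE": vp["PASSIVE"],         # alpha in rule (9)
           "EMPTY": vp["PASSIVE"] == "+"}    # (10): a [+PASSIVE] N2 is e
    return {"CAT": "V0", "SUBCAT": 2}, obj

print(expand_transitive_vp({"CAT": "VP"}))                  # active: overt N2
print(expand_transitive_vp({"CAT": "VP", "PASSIVE": "+"}))  # passive: empty N2
```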
By-phrases are treated as modifiers, a proposal that has also been made by various other linguists. Pulman points out that, unlike clear examples of complements, passive by-phrases can easily follow other modifiers (12).

(12) John was arrested in the park on Friday by the Special Branch.

However, it should be added that passive by-phrases can also precede complements (13).

(13) John was believed by the Special Branch to be a spy.
In terms of ordering, then, by-phrases are not clearly like either complements or modifiers. Grimshaw (1990: pp. 108–12) specifically assigns them an intermediate status, as argument-adjuncts.

Zwicky (1987a) also uses the Boolean feature PAS, and regards the value of VFORM on passive participles to be PSP (not PAS). He adds the FCR (14) (cf. (8)).

(14) FCR: [VP, +PAS] ⊃ [PSP]

A VP with [+PAS] can be introduced as sister of be by (15).

(15) VP → H[n], VP[+PAS]

He differs slightly from Pulman in his handling of the ‘missing N2’. He employs a new category-valued feature BSLASH (for ‘backslash’), and there is an FCR:

(16) VP[+PAS] ⊃ [BSLASH N2]

A passive VP also has the feature [BSLASH N2]. A BSLASH feature is terminated simply by having an N2 not appear in some construction. Unfortunately, Zwicky gives less detail than one would like, but it is clear that he has in mind a structure such as (17) (cf. (11)).
(17) [tree diagram not reproduced]

Details aside, the analyses of Pulman and Zwicky have in common that they avoid a metarule, and that they distinguish between the passive construction and the occurrence of passive morphology. This latter point is perfectly sensible, since there are languages (such as Chinese) that do not have special verb forms in passives. In 4.1, we mentioned the distinction between morphosyntactic and construction features: part of Zwicky’s objection to the standard GPSG treatment of passives is that it involves using PAS as both a morphosyntactic and a construction feature.
13.3 Lexicalism

In this section we shall outline some of the issues revolving around the complex notion of lexicalism, which might be broadly defined as an approach to describing language that emphasizes the lexicon at the expense of grammatical rules. This initial characterization is misleading, however, in that lexicalism covers a whole range of approaches and theories that capture this lexical emphasis in very different ways. It may be helpful to distinguish two main senses of ‘lexicalism’. The first is the claim that syntactic rules should not manipulate the internal structure of words. The second is the claim that as much information as possible about syntactic well-formedness should be stated in the lexicon. These senses may well be related, but they are distinct.

We can begin by sketching the historical development of ideas concerning the lexicon in generative grammar. The following passage brings home the importance of this history:

The development of Transformational Generative Grammar from its beginning up to the present can be seen, among other ways, as a progressive refinement of the structure of the lexical component. This does not mean that the evolution within the theory was motivated by considerations having to do with the lexicon itself; in fact, the opposite is true. That is, the changes in the organization of the lexicon followed from changes proposed for the organization of the transformational component, the categorial [phrase-structure] component and even the phonological component. The fact remains, however, that the lexicon, in the beginning, was conceived of simply as a list of lexical formatives, while today it is thought of as having a complex internal structure which is capable of handling a wide variety of phenomena. It is for this reason that the organization of the lexicon has become an important part of the theory of grammar. (Scalise 1984: p. 1)

But, as we shall see, it is not just a matter of the organization of the lexicon; the division of labour between the lexicon and other components of the grammar is also crucial.1

1. We may mention in passing that evidence from other areas of linguistic research supports at least the general idea of lexicalism. For instance, sound change affects words individually rather than applying to all relevant words immediately—it is lexically gradual—and children acquire their language word-by-word rather than phoneme-by-phoneme; they do not suddenly acquire total mastery of (say) the phoneme /f/.

In the earliest versions of generative grammar, the lexicon was simply seen as a list of idiosyncratic information about lexical items, specifically of semantic, syntactic and phonological information about each word or lexeme. The syntactic part contained information about a word’s part of speech and its strict subcategorization (see 2.2 above). Gradually, however, it came to be appreciated that it was possible to state generalizations over the lexicon, statements about sets of lexical entries and
the relations between them, which meant that the lexicon was no longer merely a depository of idiosyncratic information.2 For instance, it was argued that the relation between a verb such as refuse and its nominalization refusal should be captured by a lexical rule rather than a transformational one. This enabled the power of transformations to be reduced (e.g. they could be barred from altering category labels), and it also represented a kind of rediscovery of the importance of morphology.

2. Andrews (1988) presents a useful survey of different approaches to the lexicon.

In the early 1970s, the term Lexicalist Hypothesis was used to describe this idea of employing lexical rules to capture phenomena previously analysed by means of transformations. Another way of stating this notion is the Principle of Morphology-Free Syntax (see also Selkirk 1982: p. 1):

Syntactic rules cannot make reference to the internal morphological composition of words or to the particular rules involved in their morphological derivation. (Zwicky 1987a: p. 654)

An important paper in pushing these ideas further was Wasow (1977) (already referred to in 10.2). He argued that lexical rules and transformations had clearly different properties (e.g. lexical rules could relate items of different categories and could have idiosyncratic exceptions, while transformations could alter syntactic structure but had few true exceptions). Specifically, he argued that adjectival passives (see 10.2) should be handled by a lexical rule and verbal passives by a transformational rule. The causative alternation seen in (18) should also be handled lexically.

(18) a. The bulb shattered.
b. John shattered the bulb.

It is a short (though not necessarily justified) step from Wasow’s position to the view that all passives should be handled lexically. We can illustrate this by a quick look at the analysis of passives found in LFG. In 4.5, we discussed the LFG analysis of subcategorization in terms of grammatical functions. A transitive active verb such as cook would have the subcategorization frame (19).

(19) SUBJECT, OBJECT

A passive verb occurs with a subject and a by-object as in (20a), and so has the subcategorization frame (20b):

(20) a. The lasagne was cooked by Pierre.
b. BY-OBJECT, SUBJECT

(19) and (20b) are related by (21), which is the LFG passive rule.3

(21) SUBJECT → BY-OBJECT
     OBJECT → SUBJECT

3. Here, ‘→’ means ‘is replaced by’.
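Stated as a computation over subcategorization frames, (21) is a simple substitution. A minimal Python sketch (the list representation of frames is invented for illustration):

```python
# A toy version of the LFG passive rule (21): map an active
# subcategorization frame onto the corresponding passive frame.

def passivize(frame):
    replace = {"SUBJECT": "BY-OBJECT", "OBJECT": "SUBJECT"}
    return [replace.get(gf, gf) for gf in frame]

print(passivize(["SUBJECT", "OBJECT"]))  # ['BY-OBJECT', 'SUBJECT'], cf. (20b)
```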
So passivization is stated by a lexical rule that relates one set of lexical entries to another set; in LFG no special statement in the syntax is needed to describe passives.

The most extreme version of lexicalism is found within categorial grammar (mentioned even more briefly in 4.5), where lexical entries incorporate information about the combinatorial properties of words and only general schematic operations are needed in the syntax. For instance, if common nouns are categorized as N, adjectives can be categorized as N/N (they combine with an N to form an N), while articles are NP/N (they combine with an N to form an NP).4 Another variant of extreme lexicalism would be lexicase theory, the ‘fundamental principle’ of which is that ‘all grammatical rules are to be viewed as generalizations about the lexicon’ (Starosta 1988: p. 40). Among the terms used in this connection are radical lexicalism (Karttunen 1989) and superlexicalism (Newmeyer 1986: pp. 187–91).

We have implied here that lexicalism is related to a reduction in the power and burden of transformational rules. It can be seen more generally as related to a reduction in the power and burden of syntactic rules of any kind. It was mentioned in 3.3 that GB attempts to do away with PS-rules, regarding constituent order as a parameter holding over the whole grammar. We could also have referred there to GB’s use of X-bar theory and lexical subcategorization to take over the functions of PS-rules in building structure. For instance, given X-bar principles, the information that cook subcategorizes for an N2, the ordering parameter for English and a requirement that lexical information be ‘projected’ into syntactic structures, we can construct (22).
(22) [tree diagram not reproduced]

So GB has a lexicalist side to it too (see Cook 1988: pp. 102–7). In fact, Wasow (1985: p. 204) observes that “contemporary syntactic theories seem to be converging on the idea that sentence structure is generally predictable from word meanings.” There is something in this statement, though it overstates the contribution of semantic, as opposed to syntactic, information about words. Another way of making a similar point is as follows:

Syntactic representations are projected from the lexicon in that they observe the pertinent properties of lexical items (where ‘pertinent’ and ‘observe’ are defined theory-internally). (Szabolcsi 1992: p. 242)

This formulation deliberately leaves the way open for a great deal of variation in the way in which theories relate syntactic and lexical structure.

4. Note that this use of the slash notation has no relation to its use in GPSG SLASH categories.
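Returning briefly to the categorial slash notation mentioned above: its basic mode of combination is function application, which is simple enough to state directly. The fragment below is a toy version only (it ignores directionality and complex argument categories, and the helper name is invented here):

```python
# A toy implementation of function application for simple categorial
# labels of the form A/B: a functor A/B combined with a B yields an A.

def apply_functor(functor, argument):
    result, slash, wanted = functor.partition("/")
    return result if slash and wanted == argument else None

print(apply_functor("N/N", "N"))    # 'N'  : adjective + noun
print(apply_functor("NP/N", "N"))   # 'NP' : article + noun
print(apply_functor("NP/N", "NP"))  # None : an article wants an N
```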
We saw in 1.1 that GPSG excludes transformations (as LFG and categorial grammar do), so one might expect, given the way in which the development of lexicalism has been traced above, that GPSG would be a predominantly lexical theory. But in fact GPSG is very much towards the grammatical end of the ‘grammatical-lexical’ continuum. This should be clear from the number of grammatical rules given so far, and the fact that very little has been said about the GPSG lexicon. In particular, information about subcategories of verb is found in lexical ID-rules, rather than in lexical entries (which will just say which SUBCAT type a verb belongs to). To be specific, consider the fact that a verb like be can occur with empty there as subject. In 9.3 we described this via a lexical ID-rule (23).

(23) VP[AGR {N2[NFORM THERE, α PLU]}] → H[22], N2[α PLU]

In LFG, no special syntactic rule is called for; rather the facts can just be stated in a lexical entry for be (simplified here):

(24) (SUBJECT FORM) = THERE
     (SUBJECT NUMBER) = (OBJECT NUMBER)

Without going into the details of the LFG formalism, we can just note that (24) ensures that the subject must be there, and that this subject takes on the number of the object, thus guaranteeing that agreement operates properly.

However, it is not quite correct to say that GPSG makes no use of lexical rules or rules that relate distinct lexical entries. For instance, to capture the relation between the two uses of verbs like give (as in give X to Y and give Y X), a meaning postulate is used:

[W]e assume the existence of meaning postulates that impose systematic relations between the meanings of homophonous verbs that are multiply listed. (Gazdar et al. 1985: p. 111)

But meaning postulates are precisely semantic statements, and it is in semantic (rather than syntactic) terms that GPSG captures the facts of multiple subcategorizations (where LFG uses lexical rules that relate or manipulate lexical entries). And entities specifically called lexical rules are used as well. In addition to the passive and extraposition metarules (see 10.2 and 10.3), there are also passive and extraposition lexical rules (see Gazdar et al. 1985: pp. 219 and 222), which relate different lexical entries. Simplifying somewhat, the Extraposition Lexical Rule states that the interpretation of It V…Sbar is identical to that of Sbar V…5

5. Cf. also our passing references to meaning postulates and intensional logic representations.

Kilbury (1986: p. 52) claims that there is a redundancy between lexical rules and metarules in GPSG:

Since lexical rules do most of the work, and given that metarules apply only to lexical ID-rules, it is unclear why both should be needed for what is essentially one job.
He proposes instead metalexical rules, which relate not only a verb’s syntactic features (e.g. whether it is passive, agrees with a sentential subject, or whatever) but also the interpretation of sentences containing it. Thus a redundancy is eliminated and the set of theoretical devices is made more uniform (cf. also the discussion of Kilbury’s work in 13.1).

One path from GPSG, then, involves making it more lexicalist. In fact, this is the general tendency of post-1985 developments in grammatical theory within the unification tradition. One example is Unification Categorial Grammar (Zeevat 1988, Karttunen 1989), a development in some ways foreseen in Gazdar et al. (1985: p. 188):

[A] suitably enriched categorial syntax might be seen as a natural progression from our current theory of grammar.

13.4 Head-driven Phrase Structure Grammar

However, the theoretical development that we shall concentrate on here is that of Head-driven Phrase Structure Grammar. In 4.5 we gave a brief introduction to the HPSG account of subcategorization, and also cited some references. Now we shall say a little more about the rules and principles employed by HPSG.

It will have been gathered that GPSG makes use of a large number of ID-rules, each of which contains fairly explicit information about the kinds of categories that they apply to. Use of feature percolation mechanisms and FCRs enables quite a large reduction in the features given in each rule, but the rules still specify a particular category on the left-hand side and particular categories for the non-head items on the right-hand side. Only in the area of co-ordination (see Chapter 12) do GPSG rules lack category information and thereby cover a range of different category types. It would clearly be a good thing, because more general, to take the GPSG approach to removing specific information from rules one stage further. This has been achieved in HPSG, as we shall now see. One general rule written in HPSG is the following (this is an ID-rule, so the right-hand side is unordered):

(25) [SUBCAT <>] → H[−LEX], C

This says that any item with an empty SUBCAT list (cf. 4.5) can consist of a non-lexical head (e.g. a phrase) and a complement. This covers not just an S consisting of an N2 and a VP, but also an N2 consisting of an article or a genitive phrase plus an N1. Another general rule is:

(26) [SUBCAT <[]>] → H[+LEX], C*
An item with a single element on its SUBCAT list (such as VP or N1) consists of a lexical head and any number of complements. This takes the place of the great mass of lexical ID-rules employed in GPSG, which is a considerable advantage.

Of course, just writing rules such as (25) and (26) is not itself sufficient to ensure that the correct structures are assigned, and this is where lexical information and grammatical principles enter the picture. HPSG is a lexicalist theory (cf. the previous section), and lexical entries contain a great deal of information about words, in a feature-based format. This includes subcategorization information, a list of the categories that a word requires. When combined with the general principle given in 4.5 and repeated here as the ‘Subcategorization Principle’, the correct categories will be assigned to the mother and sisters of a subcategorized item:

The Subcategorization Principle
Any category on the SUBCAT list of a head but not on the SUBCAT list of its mother must be matched by a sister of the head.

For instance, a transitive verb subcategorizes for two N2s, one of which occurs as its sister inside the VP. The VP itself then subcategorizes for a single N2, which will be its sister, thus giving a mother S node with an empty SUBCAT list. In addition, HPSG uses a principle that is a reformulation of the HFC to handle feature percolation between an item and its head daughter:

The Head Feature Principle
The head features of a phrase are identical to the head features of its head daughter.

This ensures, for instance, that only a verb can be the head of a VP, and that if the verb is finite, so will the VP be as well.
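As a concrete picture of how the two principles drive structure-building, consider the following Python sketch. It is a toy encoding of ours, not the HPSG feature formalism: SUBCAT is a plain list of category labels, and CAT plus VFORM stand in for the head features.

```python
# A toy encoding of the Subcategorization and Head Feature Principles.

def combine(head, sisters):
    """Build the mother of a local tree: each sister must match a
    category on the head's SUBCAT list (Subcategorization Principle);
    the mother inherits the head's head features (Head Feature
    Principle) plus whatever SUBCAT requirements remain."""
    remaining = list(head["SUBCAT"])
    for s in sisters:
        if s["CAT"] not in remaining:
            raise ValueError(f"{s['CAT']} is not subcategorized for")
        remaining.remove(s["CAT"])
    return {"CAT": head["CAT"], "VFORM": head.get("VFORM"),
            "SUBCAT": remaining}

drinks = {"CAT": "V", "VFORM": "FIN", "SUBCAT": ["N2", "N2"]}
vp = combine(drinks, [{"CAT": "N2"}])   # 'VP': one N2 still required
s = combine(vp, [{"CAT": "N2"}])        # 'S': SUBCAT list now empty
print(vp["SUBCAT"], s["SUBCAT"])        # ['N2'] []
```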
There is much more to HPSG than we have been able to mention here. But one point worth making is that, in terms of linguistic insights and analyses, HPSG is very greatly indebted to GPSG: its main contribution has been on the formal side. Although many of the details are rather different, the spirit of GPSG in terms of linguistic descriptions is clearly visible, in HPSG as in many other recent proposals.

14 Relevance to computational linguistics

In this chapter we consider the relevance of GPSG to computational linguistics. We do this because GPSG has both contributed to, and benefited from, the increased mutual interest between theoretical and computational linguistics over the last decade. It is likely, however, that many readers of this book will have little or no familiarity with computational approaches to language. It is impossible to provide such readers with sufficient background here, but we can just say a little to introduce what follows.

One important problem tackled in computational linguistics is that of parsing. A parser is a computer program that takes a string of words as input and outputs a decision as to whether that string is generated by the grammar and (if it is) a structure for the string, i.e. a tree or some structure equivalent to a tree. It does this by exploiting a lexicon and grammar. A parser might begin by looking up every word in the string in its lexicon, and assigning a category (and possibly various features as well) to each word. Then it builds larger constituents by comparing sequences of categories in the string to the rules in the grammar. For instance—to take an informal and simplified example—it may combine an article and a noun together to form a noun phrase. This combining of items continues until a sentence has been found (e.g. by combining a noun phrase and verb phrase). Writing a simple parser is not itself a difficult computational task; the problem comes with writing parsers
that will work reliably and efficiently (i.e. not too slowly) when faced with longish sentences, enormous dictionaries and broadly-based grammars.1

1. This paragraph merely touches on a number of highly complex issues. Further exploration of approaches to parsing could usefully start with de Roeck (1983) and Allen (1987: Ch. 3).

We can begin by tracing the reasons for the increased mutual interest between theoretical and computational linguistics. To put it bluntly, transformational grammar as it developed in the 1960s and early 1970s proved to be a massive disappointment for computational linguists. It proved impossible to provide a faithful implementation of transformational theories by building a transformational parser. The ‘obvious’ solution (of assigning a surface structure to a string and undoing the transformations so as to arrive at the deep structure) was soon seen to be impracticable. First, there is in an ordinary transformational grammar no surface grammar that can be used to build the surface trees. Secondly, undoing transformations is exceedingly difficult, since it may involve inserting deleted items and finding the right sequence in which to apply the appropriate operations. Partial solutions to some of these difficulties were proposed, but on the whole it must be said that transformational parsers are neither efficient nor elegant. For further discussion, see King (1983b).

Against such a background GPSG seemed very promising. GPSG is basically an elaboration of context-free grammar (CFG); but, unlike transformational grammar, GPSG does not go beyond the generative power of a CFG. The informal description of parsing that we just gave implicitly assumed that the rules of the grammar were in context-free format. Because of their importance in describing the syntax of programming languages, CFGs are well understood, and a number of efficient algorithms for parsing them have been proposed. That is to say, there are known procedures that, given a string and a CFG, are guaranteed to output in finite time a decision as to whether the string is generated by the grammar and (if it is generated) which structure is assigned to it. No such procedures are known in the case of transformational grammars.
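To make this concrete, here is a bare-bones version of one such procedure: a recognizer in the style of the well-known CKY algorithm, for a toy CFG whose rules are binary (the grammar and lexicon are invented purely for illustration, and only recognition, not structure-building, is shown):

```python
# A bare-bones CKY-style recognizer for a toy CFG with binary rules.

LEXICON = {"the": {"SpecN"}, "dog": {"N1"}, "chased": {"V0"}, "cat": {"N1"}}
RULES = [("N2", ("SpecN", "N1")), ("VP", ("V0", "N2")), ("S", ("N2", "VP"))]

def recognize(words, goal="S"):
    n = len(words)
    # chart[i][width] = categories spanning the words from i to i+width
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, w in enumerate(words):
        chart[i][1] = set(LEXICON.get(w, set()))
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            for split in range(1, width):
                for mother, (left, right) in RULES:
                    if left in chart[i][split] and \
                       right in chart[i + split][width - split]:
                        chart[i][width].add(mother)
    return goal in chart[0][n]

print(recognize("the dog chased the cat".split()))   # True
print(recognize("dog the chased the cat".split()))   # False
```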
It therefore looked as if the development of GPSG provided the basis for writing grammars of natural languages that were linguistically motivated and computationally tractable—a possibility that had not previously existed. However, implementing the version of GPSG set out in Gazdar et al. (1985) has turned out to be rather harder than originally expected:

[A]s far as practical implementations of GPSG are concerned, the theory loses its apparent efficient parsability and becomes computationally intractable. (Boguraev 1988: p. 95)

We can now look at what some of the problems are.

14.1 Problems with parsing GPSG

Perhaps the neatest way to encapsulate the difficulty is to say that in GPSG there is a rather indirect relation between ID-rule and local tree. This is very different from the case of CFGs, where the PS-rule (1a) straightforwardly licenses the local tree (1b).

(1) a. S → N2 VP
b. [tree diagram not reproduced]

If one allows for optional elements, alternation or Kleene star, the link between rule and tree becomes somewhat less direct. Nevertheless, the point that in a CFG there is a clear relation between rules and trees remains valid. This situation does not essentially change if one makes use of features, rather than atomic categories, in the grammar. Even using variables in rules does not alter matters significantly. Rule (2a) is still responsible for tree (2b) provided one augments the parsing algorithm with a mechanism for handling feature instantiation and percolation.

(2) a. S[α FINITE] → N2 VP[α FINITE]
b. [tree diagram not reproduced]

But now consider what happens when we make use of the conventions of GPSG. Instead of rule (1a) we have rule (3).

(3) S → [BAR 2], H[−SUBJ]

This may license a local tree such as (4).
(4) [tree diagram not reproduced]

Thus there are far more features in the tree than are specified in the rule. This is of course just a consequence of the use of underspecified rules and the possibility of feature instantiation. The problem is, how precisely does one get from (3) to (4), bearing in mind the need to comply with various GPSG principles (HFC, CAP, etc.) and to avoid generating an ill-formed tree? In fact, there are a number of different issues involved here. Let us take first the fact that (3) is an ID-rule not a PS-rule (unlike (1a), (3) says nothing about
linear order). One solution would be to expand a grammar in ID/LP format into an equivalent CFG. So the mini-grammar in (5) would be expanded into that in (6).2

(5) a. S → N2, VP
b. N2 → SpecN, N1
c. VP → V0, N2
d. V0 < N2 < VP
e. SpecN < N1

(6) a. S → N2 VP
b. N2 → SpecN N1
c. VP → V0 N2

2. We are using atomic categories here just for ease of illustration.

This method of expanding a GPSG into an ordinary CFG (via what is sometimes called a preprocessor or precompiler) has also been used in other contexts than the problem of ID/LP format, so we should understand its shortcomings. First, it means that the parser is not actually working on the grammar in its original form, so that it is not a true implementation of GPSG. Secondly, some expanded grammars will have enormous numbers of CF-rules, whereas the original grammar may have relatively few ID-rules, so parsing on the expanded grammar may be very inefficient. For these and other reasons, it is in principle preferable for a parser to employ a grammar in ID/LP format directly, without it being first expanded into CF-format.

Shieber (1984) has in fact presented an adaptation of the Earley CF parsing algorithm that directly accepts grammars in ID/LP format. Basically, the algorithm works by stipulating that a string of potential daughters for some node is regarded as having been found (i.e. the dot in the dotted rule is advanced) only if the order of elements in that string is compatible with the grammar’s LP-statements. Shieber claims that the time complexity of his algorithm is the same as that of Earley’s (i.e. it is proportional to the square of the number of rules and the cube of the length of the string being checked). Barton (1985) responds that this claim is optimistic in the extreme, arguing that ID/LP parsing is NP-complete; e.g. Shieber’s algorithm can suffer from combinatorial explosion in the case of lexical ambiguity. However, it does appear that Shieber has provided a way of directly parsing ID/LP grammars, however inefficient his algorithm might become in certain circumstances.
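Whatever its worst-case complexity, the expansion step itself is mechanical: generate the orderings of each ID-rule's daughters and keep those compatible with the LP-statements. A Python sketch follows (the rule and LP representations are invented for illustration; real precompilers are considerably more elaborate). Note how the factorial number of candidate orderings already hints at why expanded grammars can become very large.

```python
# A minimal sketch of ID/LP-to-CFG expansion.
from itertools import permutations

ID_RULES = [("S", ["N2", "VP"]),
            ("N2", ["SpecN", "N1"]),
            ("VP", ["V0", "N2"])]
LP = {("V0", "N2"), ("V0", "VP"), ("N2", "VP"), ("SpecN", "N1")}

def lp_ok(seq):
    """An ordering is admissible iff no LP-statement is violated."""
    return not any((later, earlier) in LP
                   for i, earlier in enumerate(seq)
                   for later in seq[i + 1:])

def expand(id_rules):
    """Replace each ID-rule by its admissible ordered expansions."""
    return [(m, order) for m, ds in id_rules
            for order in permutations(ds) if lp_ok(order)]

for mother, rhs in expand(ID_RULES):
    print(mother, "->", " ".join(rhs))
# S -> N2 VP
# N2 -> SpecN N1
# VP -> V0 N2
```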
Another area where the expansion method has commonly been used is that of metarules. Metarules are applied to yield an enlarged set of ID-rules, so that the parser itself never sees the metarules. A possible objection is that this may lead to vast numbers of ID-rules, though it has been claimed that in practice it leads only to an approximate doubling of the number of ID-rules (Fisher 1989: p. 140). And, as we saw in 13.3, some linguists would prefer to do away with metarules in the first place.

Other aspects of GPSG do not seem amenable to such relatively simple solutions, though; in the remainder of this section we concentrate on various problems, before examining some more radical solutions in the next section. The following lengthy passage identifies the problems that arise (Hauenschild & Busemann 1988: p. 224):

What would it really amount to if we tried to implement the axiomatic version of GPSG in a straightforward way? In order to find all admissible trees corresponding to a given sentence, we would have to do the following things for every local tree:
—build every possible extension for every category in an ID rule, which means that every feature that is not specified in the rule may be either absent or instantiated by any of its values;
—filter out the illegal categories with the aid of the FCRs;
—build all the possible projections of ID rules with the remaining legal categories, thereby creating every possible order of the daughters;
—filter out those combinations of categories that are inadmissible according to the FFP, CAP, and HFC;
—filter out some more projections that are unacceptable because of some category contradicting an FSD;
—filter out those projections that contradict any LP statement applicable to the daughters.
After this, the subset of admissible local trees has to be identified which yields the desired complex structures in the following way: two (locally) admissible trees may be combined into a larger tree iff one of the daughters of one of them is identical with the mother of the other one. The whole process can be regarded as divided up into three major steps: the first step consists in constructing all the possible projections (possible according to ID rules and FCRs); the second step consists in filtering out local trees that are not admissible according to the restrictions imposed on them by the FIPs [feature instantiation principles], the FSDs and the LP statements. The last step is the combination of locally admissible trees into complex structures. And this approach is hopelessly inefficient, since the first step yields a combinatorial
explosion of the set of categories: there will be an astronomically high number of possible projections constructed. And the second step is difficult to manage since the HFC and CAP seem to feed each other, so it is not possible to decide that one should be applied before the other.

The problem, then, lies with feature instantiation, the interaction of feature-related principles and how to determine which particular extension of the set of features inherited from a rule can appear on a category. We are thus back at our earlier problem of the relation between an ID-rule and a local tree.

14.2 Various solutions

Here we shall examine four different solutions to the kinds of problem sketched in the previous section. All involve modifications of one kind or another to the GPSG formalism.

Shieber (1986b) restates GPSG (though without FCRs or LP-statements) in the formalism of PATR-II (see Shieber 1986a: Ch. 3). For instance, the GPSG rule (7) becomes the PATR rule (8).

(7) S → [BAR 2], H[−SUBJ]

(8) X0 → X1, X2
<X0 head n> = −
<X0 head v> = +
<X0 head bar> = 2
<X0 head subj> = +
<X1 head bar> = 2
<X2 head subj> = −
<X2 head agr> = X1
<X0 head slash> = <X2 head slash>
<X0 head n> = <X2 head n>
<X0 head v> = <X2 head v>
<X0 head bar> = <X2 head bar>
<X0 head agr> = <X2 head agr>
<X0 head inv> = <X2 head inv>
<X0 conj> = ~
<X1 conj> = ~
<X2 conj> = ~

Without explaining this in detail, what (8) achieves is to ensure matching of features between elements in the rule. E.g. the equation <X0 head bar> = <X2 head bar> guarantees that X0 and X2 (the S and VP) will share the same value for the bar feature. The equation <X0 conj> = ~ reflects the FSD (9).

(9) FSD: ~[CONJ]

Thus the effects of FSDs and feature instantiation principles are built directly into rules; these principles, etc. are implemented in the sense that (8) is to be generated automatically from (7) rather than written by hand. However, it should be clear that we are now a long way from straightforwardly implementing GPSG, and that a rule like (8) shows all the complexity and redundancy that makes PATR a sometimes-useful formalism, but not a linguistic theory. This approach is an extreme version of the expansion or precompilation method discussed in the previous section, with feature principles being applied to grammatical rules to give a vastly inflated grammar on which the parser works.
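Path equations of the kind seen in (8) are solved by unification of feature structures. The following fragment sketches the core operation. It is a simplified, non-reentrant unification over nested dicts; real PATR-II implementations let distinct paths share a single value, which this toy version does not model.

```python
# A toy unification: merge compatible information, fail on a clash.

def unify(f, g):
    """Return the unification of two feature structures, or None."""
    if not isinstance(f, dict) or not isinstance(g, dict):
        return f if f == g else None       # atomic values must match
    out = dict(f)
    for attr, val in g.items():
        if attr in out:
            sub = unify(out[attr], val)
            if sub is None:
                return None                # incompatible information
            out[attr] = sub
        else:
            out[attr] = val
    return out

s_cat = {"head": {"n": "-", "v": "+", "bar": "2", "subj": "+"}}
print(unify(s_cat, {"head": {"v": "+", "agr": "3sg"}}))  # merged structure
print(unify(s_cat, {"head": {"subj": "-"}}))             # None: subj clashes
```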
Fisher (1989) presents a parsing algorithm for GPSG that is claimed to accept grammars closer to the framework presented in Gazdar et al. (1985) than most other algorithms do. Yet even he excludes the ID/LP format, FSDs and metarules. He also modifies the HFC, FFP and CAP by specifying features as either percolating (from daughter to mother) or trickling (from mother to daughter); propagation conventions are specified individually for each feature. From the original grammar G a skeleton grammar G′ is derived by slightly modifying the set of FCRs and omitting the set of percolating features; the language of G′ will be a proper superset of that of G. The skeleton grammar G′ is now parsed using (a form of) Earley’s algorithm; trickling features are handled by ensuring that if they belong to a mother category they are instantiated on each daughter. This first parsing phase results in the derivation of a set of parse trees. Each tree is then examined in turn, and features are added to each node, respecting the original set of FCRs in G and both trickling and percolating features. After this parsing phase, the tree reflects a parse according to the original grammar G.3 This approach makes the problem of the interaction of constraints more manageable by distinguishing trickling and percolating, and treating these at different stages of the parsing process. Algorithms of the type (i) derive a skeleton grammar, (ii) parse according to the skeleton grammar, (iii) check additional constraints to ensure conformity with the original grammar have been used quite widely in computer science.

3. Fisher also suggests that devices closer to the classic definitions of HFC, FFP and CAP can be incorporated in his parser.

Researchers in Berlin have developed an approach that they term a constructive version of GPSG, on the grounds that it allows for the construction of a syntactic structure rather than its selection from a large set of candidate structures.4 The general idea is that constraints, etc. apply in a specified order, adding information to the categories in a local tree as they proceed. In between each application of a principle, FCRs are consulted to see if they can add anything. FCRs have to be consulted more than once because an FCR may come to apply only once some particular feature has been added by some instantiation principle.

4. The main reference is Hauenschild & Busemann (1988), mentioned in the previous section; but see also Busemann & Hauenschild (1988), Weisweber (1988).
The CAP is redefined as a purely syntactic principle, the Agreement Principle (AP). This gives the following sequence of application (see Hauenschild & Busemann 1988: p. 230).5

(10) ID → FCR → FFP → FCR → AP → FCR → HFC → FCR → LP

5. Note that FSDs are not used, and metarules are handled by a preprocessor.

After an ID-rule has provided the skeleton for a local tree, FCRs may add features to it. The FFP then applies to percolate foot features, and FCRs are consulted again. Only then can the AP apply. The HFC applies near the end of the chain, since it is dependent on the AP to introduce agreement features. Finally, the local tree is checked against LP-statements. Ironically perhaps, this procedural interpretation of GPSG appears to be more computationally tractable than the essentially declarative formulation given in Gazdar et al. (1985).

Lastly, we consider the approach taken within the Alvey Tools Project (for which the best published source is Boguraev 1988). ID-rules, LP-statements and metarules are retained, but the work of feature instantiation principles is taken over by innovative rule types, propagation rules and default rules. An example of a propagation rule is (11).

(11) [−N, +V] → [+H], U    F(0) = F(1), F ∈ VERBALHEAD

Despite the apparent similarity to ID-rules, (11) does not itself state the members of a local tree. Rather, it is to be interpreted as follows: for any rule with a verbal projection on the left, in a corresponding local tree the mother and head daughter will share the same VERBALHEAD features. Note that there is a Boolean feature H (for ‘head’), though this does not in fact turn up in trees. The U in (11) is a variable over the possible sisters of the head. The set of VERBALHEAD features is specified in a feature set declaration (12).

(12) VERBALHEAD = {PRD, FIN, AUX, VFORM, PAST, INV}

So the mother and head daughter must have the same values for these attributes. (11) does part of the work of the HFC, but it applies only to certain rules and certain features, thus minimizing the problems of interaction with other parts of the grammar. An example of a default rule would be (13).

(13) N1 → [+H], U    F(1) = −, F ∈ {PRO, PN}

In any local tree with N1 as mother, the default is for the head to be [−PRO, −PN] (i.e. neither a pronoun nor a proper name). (13) is similar in effect to an FSD, but applies only to a specific kind of local tree. The whole idea of this approach is that feature instantiation is severely constrained, and the link between ID-rules and local trees is far more under the control of the rule-writer.
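The sharing enforced by a propagation rule like (11), given the declaration (12), can be rendered in a few lines. The sketch below is a toy version (the dict representation and function name are invented; the Alvey formalism itself is declarative):

```python
# A toy rendering of the sharing step of propagation rule (11):
# mother and head daughter agree on every feature in VERBALHEAD.

VERBALHEAD = {"PRD", "FIN", "AUX", "VFORM", "PAST", "INV"}

def propagate(mother, head_daughter, features=VERBALHEAD):
    """Share each named feature between mother and head daughter
    (F(0) = F(1) for every F in the declared set); a conflict in
    already-present values means the local tree is not licensed."""
    for f in features:
        if f in mother and f in head_daughter:
            if mother[f] != head_daughter[f]:
                raise ValueError(f"feature clash on {f}")
        elif f in mother:
            head_daughter[f] = mother[f]
        elif f in head_daughter:
            mother[f] = head_daughter[f]

vp, v = {"CAT": "VP", "FIN": "+"}, {"CAT": "V", "VFORM": "FIN"}
propagate(vp, v)
print(vp, v)   # both are now specified for FIN and VFORM
```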
A large-scale grammar for English has been written within this formalism, and a number of the analyses given in earlier chapters have been influenced by this grammar, thus emphasizing its closeness to GPSG (despite its use of non-GPSG devices).

These four approaches all have in common that they abandon free feature instantiation, and attempt to limit the very complex interactions of feature instantiation principles with each other and with FSDs. It may well be that a ‘pure’ implementation of GPSG is unworkable, but it is clear that a number of paths can be taken to produce a more manageable formalism.

However, it would be unfair to close this chapter without reiterating the contributions made by GPSG to computational linguistics. GPSG did not invent the idea of features, which are now common to most computational linguistic formalisms, but it did introduce new and fruitful ways of using them, such as slash features to indicate a missing constituent, and the flexible use of features to capture agreement. The influence of GPSG on computational approaches to language goes far beyond issues of the best way to construct a GPSG parser.
15 Conclusion

We have come a long way since our original examination of some simple PS-rules, both in terms of coverage of the structure of English and in terms of the development of a sophisticated theory of grammar. In this final chapter, we want to make a brief assessment of GPSG, including a look at other linguists’ evaluation of it. It may be useful to begin with the criteria for assessing grammatical formalisms set out by Shieber (1986a: pp. 5–6):

Linguistic felicity: The degree to which descriptions of linguistic phenomena can be stated directly (or indirectly) as linguists would wish to state them.
Expressiveness: Which class of analyses can be stated at all.
Computational effectiveness: Whether there exist computational devices for interpreting the grammars expressed in the formalism and, if they do exist, what computational limitations inhere in them.

Let’s begin with the last of these. As we saw in Chapter 14, GPSG ought in principle to score very highly in terms of computational effectiveness, but in fact it is far harder to implement than appears at first sight. And most implementations do not capture the entirety of what is formulated in Gazdar et al. (1985). Some of the unimplemented devices are either too complex or too unclear to be incorporated in a practical system. For instance, if you found the details of the operation of FSDs rather difficult, you may find the following opinion (expressed by a computer scientist who has implemented an efficient algorithm for a relatively ‘pure’ version of GPSG) reassuring:

[T]he precise effect of FSDs is most unclear. The original definition attempts to explain the effects of FSDs mainly by giving examples, although there are
Page 215 a few mathematical definitions that, however, appear to confuse the definition rather than clarify it. A clear, formal definition of the effect of FSDs is urgently required. (Fisher 1989: p. 139) As we also saw, there are claims that GPSG-parsing is NP-complete. A separate class of criticisms is that made by Berwick & Weinberg (1984, especially Ch. 3), that efficient parsability is not a property unique to CF languages, since many context-sensitive languages are efficiently parsable. The general trend of their case is that GPSG is inadequate because it permits the generation of many non-natural CF languages, whereas GB imposes the right kinds of limitations (neither too restrictive nor too liberal) on the set of natural languages. But their argument is too programmatic to serve as a convincing defence of GB or critique of GPSG. In terms of felicity and expressiveness (which we can lump together), matters become rather more subjective. GPSG certainly provides a number of ways for grammarians to express information and to capture generalizations. But sometimes relying on general principles (FFP, etc.) leads to such complexity that it is very difficult to keep track of what is going on and whether it will be permitted for a particular category to bear a particular feature. And it sometimes is surprisingly tricky to state the required properties. Moreover, GPSG is pretty clearly deficient in not making available any neat way of stating lexical generalizations, especially with regard to subcategorization. In fact, in forcing the linguist to express a number of what are intuitively lexical facts in the grammar, GPSG does violence to the goal of felicity. In addition to its role as a grammar formalism, we should also consider GPSG as a linguistic theory. Wasow (1985: p. 197) suggests that GPSG constitutes ‘‘a return to a serious concern for observational adequacy”, i.e. with the goal of formulating explicit grammars that would generate all and only the grammatical strings of a language. This is in contrast to GB, which is far more concerned with linguistic universals and the explanation of language acquisition (cf. some of our remarks in 1.1 and 1.2). But observational adequacy is in no way an irrelevant or trivial goal, nor is it correct to imply that GPSG is uninterested in explanation or in universals (and Wasow acknowledges that GPSG does address the question of universals). In 13.3, we referred to Wasow’s view (1985: p. 204) that contemporary theories of syntax had in common the notion that sentence structure is largely predictable from word meanings (or at least from lexical subcategorization properties). He adds that linguists have identified ways in which languages deviate from simply reflecting subcategorization frames (e.g. unbounded dependencies, raising). Horrocks (1987: Ch. 5) extends this point by pointing out that accounting for such deviations has become central to syntactic theory. It is, he claims, one of the strengths of GB that it attempts to provide explanations for such discrepancies. For instance, consider the paradigm in (1).
(1) a. It seems Jim has left.
b. * It seems Jim to have left.
c. * Jim seems has left.
d. Jim seems to have left.

Why is it that Jim can sometimes (1d) occur in a clause away from the verb of which it is a semantic argument, leave? The GB answer is that Jim has to move into the matrix clause in (1d) in order to be assigned Case by the finite verb (see (1b), which is ungrammatical because Jim is subject of a non-finite verb). If the embedded clause is finite, Jim can stay in its underlying position (1a), and movement is then excluded (1c) because the trace left behind by this moved N2 fails to meet the conditions on empty categories. It is noteworthy, too, that there is a parallel in the distribution of reflexives:

(2) a. * Jim believes himself is last.
b. Jim believes himself to be last.

(2b) (with an embedded infinitive like (1d)) is fine, but (2a) is ill-formed, because the reflexive there is subject to the same conditions as the trace in (1c). The details of this are not important for present purposes; what matters is that GB has an explanation for (1) and (2) in terms of general linguistic properties, whereas in GPSG this set of data would simply be stipulated (see Horrocks 1987: p. 298) by writing explicit rules to capture the facts (see the earlier remarks about observational adequacy). How persuasive this is no doubt depends on one’s attitude to the unformalized GB framework: if one believes that GB is not described with sufficient rigour for anyone to tell whether the grammar generates a particular string of words or not, then one may not feel that GB has explained (1) at all. But the aim of this chapter is not to be sectarian. So let us conclude that, although it doubtless has shortcomings, GPSG has made a signal contribution to grammatical theory by imposing rigorous standards of formalization, by providing novel means of capturing linguistic generalizations, and by giving rise to fruitful interactions with computational linguistics.
References

Allen, J. 1987. Natural language understanding. Menlo Park, Calif.: Benjamin/Cummings.
Andrews, A. 1982. A note on the constituent structure of adverbials and auxiliaries. Linguistic Inquiry 13, 313–17.
Andrews, A. 1983. A note on the constituent structure of modifiers. Linguistic Inquiry 14, 695–97.
Andrews, A. 1988. Lexical structure. In Newmeyer (1988), 60–88.
Bach, E. 1989. Informal lectures on formal semantics. Albany, NY: State University of New York Press.
Baker, C. 1989. English syntax. Cambridge, Mass.: MIT Press.
Baltin, M. & A. Kroch (eds) 1989. Alternative conceptions of phrase structure. Chicago: University of Chicago Press.
Barlow, M. & C. Ferguson (eds) 1988. Agreement in natural language. Stanford, Calif.: Center for the Study of Language and Information.
Barton, G. 1985. On the complexity of ID/LP parsing. Computational Linguistics 11, 205–18.
Bauer, L. 1990. Be-heading the word. Journal of Linguistics 26, 1–31.
Beard, R. 1991. Decompositional composition: the semantics of scope ambiguities and ‘bracketing paradoxes’. Natural Language and Linguistic Theory 9, 195–229.
Berwick, R. & A. Weinberg 1984. The grammatical basis of linguistic performance. Cambridge, Mass.: MIT Press.
Boguraev, B. 1988. A natural language toolkit: reconciling theory with practice. In Reyle & Rohrer (1988), 95–130.
Bolinger, D. 1967. Adjectives in English: attribution and predication. Lingua 18, 1–34.
Borsley, R. 1983. A Welsh agreement process and the status of VP and S. In Gazdar et al. (1983), 57–74.
Borsley, R. 1984. On the nonexistence of VPs. In Sentential complementation, W. de Geest & Y. Putseys (eds), 55–65. Dordrecht: Foris.
Borsley, R. 1991. Syntactic theory. London: Arnold.
Busemann, S. & C. Hauenschild 1988. A constructive view of GPSG, or how to make it work. In Vargha (1988), 77–82.
Cann, R. 1993. Formal semantics. Cambridge: Cambridge University Press.
Chomsky, N. 1957. Syntactic structures. The Hague: Mouton.
Chomsky, N. 1990. On formalization and formal linguistics. Natural Language and Linguistic Theory 8, 143–47.
Cook, V. 1988. Chomsky’s universal grammar. Oxford: Blackwell.
Corbett, G. 1983. Resolution rules: agreement in person, number and gender. In Gazdar et al. (1983), 175–206.
De Roeck, A. 1983. An underview of parsing. In King (1983a), 3–17.
Di Sciullo, A. & E. Williams 1987. On the definition of word. Cambridge, Mass.: MIT Press.
Dryer, M. 1992. The Greenbergian word order correlations. Language 68, 81–138.
Emonds, J. 1985. A unified theory of syntactic categories. Dordrecht: Foris.
Everett, D. 1989. Clitic doubling, reflexives, and word order in Yagua. Language 65, 339–72.
Falk, Y. 1983. Constituency, word order, and phrase structure rules. Linguistic Analysis 11, 331–60.
Fisher, A. 1989. Practical parsing of generalized phrase structure grammars. Computational Linguistics 15, 139–48.
Fodor, J. & S. Crain 1990. Phrase structure parameters. Linguistics and Philosophy 13, 619–59.
Gazdar, G. 1981. Unbounded dependencies and coordinate structure. Linguistic Inquiry 12, 155–84.
Gazdar, G. 1982. Phrase structure grammar. In Jacobson & Pullum (1982), 131–86.
Gazdar, G. 1987. Linguistic applications of default inheritance mechanisms. In Linguistic theory and computer applications, P. Whitelock, M. Wood, H. Somers, R. Johnson & P. Bennett (eds), 37–55. London: Academic Press.
Gazdar, G. & G. Pullum 1981. Subcategorization, constituent order, and the notion ‘head’. In The scope of lexical rules, M. Moortgat, H. van der Hulst & T. Hoekstra (eds), 107–23. Dordrecht: Foris.
Gazdar, G., G. Pullum & I. Sag 1982. Auxiliaries and related phenomena in a restrictive theory of grammar. Language 58, 591–638.
Gazdar, G., E. Klein & G. Pullum (eds) 1983. Order, concord and constituency. Dordrecht: Foris.
Gazdar, G., E. Klein, G. Pullum & I. Sag 1985. Generalized phrase structure grammar. Oxford: Blackwell.
Gil, D. 1983. Stacked adjectives and configurationality. Linguistic Analysis 12, 141–58.
Gil, D. 1987. Definiteness, noun phrase configurationality, and the count-mass distinction. In The representation of (in)definiteness, E. Reuland & A. ter Meulen (eds), 254–69. Cambridge, Mass.: MIT Press.
Goodall, G. 1987. Parallel structures in syntax. Cambridge: Cambridge University Press.
Greenbaum, S. & R. Quirk 1990. A student’s grammar of the English language. London: Longman.
Greenberg, J. 1966. Some universals of grammar with particular reference to the order of meaningful elements. In Universals of language, J. Greenberg (ed.), 73–113. Cambridge, Mass.: MIT Press.
Grimshaw, J. 1990. Argument structure. Cambridge, Mass.: MIT Press.
Gunji, T. 1987. Japanese phrase structure grammar. Dordrecht: Reidel.
Hauenschild, C. & S. Busemann 1988. A constructive version of GPSG for machine translation. In From syntax to semantics, E. Steiner, P. Schmidt & C. Zelinsky-Wibbelt (eds), 216–38. London: Pinter.
Hawkins, J. 1982. Cross-category harmony, X-bar and the predictions of markedness. Journal of Linguistics 18, 1–35.
Holmback, H. 1984. An interpretive solution to the definiteness effect. Linguistic Analysis 13, 195–215.
Horrocks, G. 1983. The order of constituents in Modern Greek. In Gazdar et al. (1983), 95–111.
Horrocks, G. 1987. Generative grammar. London: Longman.
Huang, C. 1986. Coordination schema and Chinese NP coordination in GPSG. Cahiers de Linguistique—Asie Orientale 15, 107–27.
Hudson, R. 1987. Zwicky on heads. Journal of Linguistics 23, 109–32.
Hukari, T. 1989. The domain of reflexivization in English. Linguistics 27, 207–44.
Hukari, T. & R. Levine 1991. On the disunity of unbounded dependency constructions. Natural Language and Linguistic Theory 9, 97–144.
Jackendoff, R. 1977. X-bar syntax: a study of phrase structure. Cambridge, Mass.: MIT Press.
Jacobson, P. 1982. Evidence for gaps. In Jacobson & Pullum (1982), 187–228.
Jacobson, P. & G. Pullum (eds) 1982. The nature of syntactic representation. Dordrecht: Reidel.
Johnson, M. 1988. Attribute-value logic and the theory of grammar. Stanford, Calif.: Center for the Study of Language and Information.
Jones, M. 1983. Getting ‘tough’ with wh-movement. Journal of Linguistics 19, 129–59.
Kaplan, R. & J. Bresnan 1982. Lexical-Functional Grammar: a formal system for grammatical representation. In The mental representation of grammatical relations, J. Bresnan (ed.), 173–281. Cambridge, Mass.: MIT Press.
Karttunen, L. 1989. Radical lexicalism. In Baltin & Kroch (1989), 43–65.
Kilbury, J. 1986. Category cooccurrence restrictions and the elimination of metarules. 11th International Conference on Computational Linguistics: Proceedings of Coling ’86, Bonn, 50–5.
Kilbury, J. 1987. A proposal for modifications in the formalism of GPSG. Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, Copenhagen, 156–59.
Kilby, D. 1984. Descriptive syntax and the English verb. London: Croom Helm.
King, M. (ed.) 1983a. Parsing natural language. London: Academic Press.
King, M. 1983b. Transformational parsing. In King (1983a), 19–34.
Kornai, A. & G. Pullum 1990. The X-bar theory of phrase structure. Language 66, 24–50.
Koster, J. & R. May 1982. On the constituency of infinitives. Language 58, 116–43.
Lapointe, S. 1988. Toward a unified theory of agreement. In Barlow & Ferguson (1988), 67–87.
Larson, R. 1985. Bare-NP adverbs. Linguistic Inquiry 16, 595–621.
Lehmann, C. 1988. On the function of agreement. In Barlow & Ferguson (1988), 55–65.
Levi, J. 1978. The syntax and semantics of complex nominals. New York: Academic Press.
Liberman, M. & R. Sproat 1992. The stress and structure of modified noun phrases in English. In Sag & Szabolcsi (1992), 131–81.
Lightfoot, D. 1979. Principles of diachronic syntax. Cambridge: Cambridge University Press.
Ludlow, P. 1992. Formal rigor and linguistic theory. Natural Language and Linguistic Theory 10, 335–44.
Lyons, C. 1986. The syntax of English genitive constructions. Journal of Linguistics 22, 123–43.
Lyons, J. 1968. Introduction to theoretical linguistics. Cambridge: Cambridge University Press.
McCawley, J. 1988a. Adverbial NPs: bare or clad in see-through garb? Language 64, 583–90.
McCawley, J. 1988b. The syntactic phenomena of English. Chicago: University of Chicago Press.
McCloskey, J. 1988. Syntactic theory. In Newmeyer (1988), 18–59.
McCloskey, J. & K. Hale 1984. On the syntax of person-number inflection in Modern Irish. Natural Language and Linguistic Theory 1, 487–533.
McConnell-Ginet, S. 1982. Adverbs and logical form. Language 58, 144–84.
Martin, R. 1987. The meaning of language. Cambridge, Mass.: MIT Press.
Matthews, P. 1981. Syntax. Cambridge: Cambridge University Press.
Napoli, D. 1989. Predication theory. Cambridge: Cambridge University Press.
Newmeyer, F. 1986. Linguistic theory in America, 2nd edn. New York: Academic Press.
Newmeyer, F. (ed.) 1988. Linguistics: The Cambridge survey, vol. 1. Cambridge: Cambridge University Press.
Palmer, F. 1974. The English verb. London: Longman.
Partee, B., A. ter Meulen & R. Wall 1990. Mathematical methods in linguistics. Dordrecht: Kluwer.
Payne, J. 1985. Complex phrases and complex sentences. In Language typology and syntactic description, vol. 2, T. Shopen (ed.), 3–41. Cambridge: Cambridge University Press.
Pereira, F. & S. Shieber 1987. Prolog and natural-language analysis. Stanford, Calif.: Center for the Study of Language and Information.
Pereira, F. & D. Warren 1980. Definite clause grammars for language analysis. Artificial Intelligence 13, 231–78.
Pollard, C. & I. Sag 1987. Information-based syntax and semantics, vol. 1. Stanford, Calif.: Center for the Study of Language and Information.
Pollock, J.-Y. 1989. Verb movement, universal grammar, and the structure of IP. Linguistic Inquiry 20, 365–424.
Pullum, G. 1982. Syncategorematicity and English infinitival to. Glossa 16, 181–215.
Pulman, S. 1987. Passives. Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, Copenhagen, 306–13.
Radford, A. 1988. Transformational grammar. Cambridge: Cambridge University Press.
Randall, J. 1988. Inheritance. In Syntax and semantics, volume 21: Thematic relations, W. Wilkins (ed.), 129–46. New York: Academic Press.
Reyle, U. & C. Rohrer (eds) 1988. Natural language parsing and linguistic theories. Dordrecht: Reidel.
Sag, I., G. Gazdar, T. Wasow & S. Weisler 1985. Coordination and how to distinguish categories. Natural Language and Linguistic Theory 3, 117–71.
Sag, I. & C. Pollard 1989. Subcategorization and head-driven phrase structure. In Baltin & Kroch (1989), 139–81.
Sag, I. & A. Szabolcsi (eds) 1992. Lexical matters. Stanford, Calif.: Center for the Study of Language and Information.
Scalise, S. 1984. Generative morphology. Dordrecht: Foris.
Selkirk, E. 1982. The syntax of words. Cambridge, Mass.: MIT Press.
Sells, P. 1985. Lectures on contemporary syntactic theories. Stanford, Calif.: Center for the Study of Language and Information.
Shieber, S. 1984. Direct parsing of ID/LP grammars. Linguistics and Philosophy 7, 135–54.
Shieber, S. 1986a. An introduction to unification-based approaches to grammar. Stanford, Calif.: Center for the Study of Language and Information.
Shieber, S. 1986b. A simple reconstruction of GPSG. 11th International Conference on Computational Linguistics: Proceedings of Coling ’86, Bonn, 211–15.
Smith, D. 1978. Mirror images in Japanese and English. Language 54, 78–122.
Smith, N. 1989. The twitter machine. Oxford: Blackwell.
Sobin, N. 1987. The variable status of comp-trace phenomena. Natural Language and Linguistic Theory 5, 33–60.
Somers, H. 1984. On the validity of the complement-adjunct distinction in valency grammar. Linguistics 22, 507–30.
Starosta, S. 1988. The case for lexicase. London: Pinter.
Steurs, F. 1991. Generalized phrase structure grammar. In Linguistic theory and grammatical description, F. Droste & J. Joseph (eds), 219–45. Amsterdam: John Benjamins.
Szabolcsi, A. 1992. Combinatory grammar and projection from the lexicon. In Sag & Szabolcsi (1992), 241–68.
Trask, R. 1993. A dictionary of grammatical terms in linguistics. London: Routledge.
Uszkoreit, H. 1987. Word order and constituent structure in German. Stanford, Calif.: Center for the Study of Language and Information.
Van der Auwera, J. 1985. Relative that—a centennial dispute. Journal of Linguistics 21, 149–79.
Vargha, D. (ed.) 1988. COLING Budapest: Proceedings of the 12th International Conference on Computational Linguistics. Budapest: John von Neumann Society for Computing Sciences.
Warner, A. 1987. Simplifying lexical defaults in generalized phrase structure grammar. Linguistics 25, 333–39.
Warner, A. 1988. Feature percolation, unary features, and the coordination of English NPs. Natural Language and Linguistic Theory 6, 39–54.
Warner, A. 1989. Multiple heads and minor categories in generalized phrase structure grammar. Linguistics 27, 179–205.
Wasow, T. 1977. Transformations and the lexicon. In Formal syntax, P. Culicover, T. Wasow & A. Akmajian (eds), 327–60. New York: Academic Press.
Wasow, T. 1985. Postscript. In Lectures on contemporary syntactic theories, P. Sells, 193–205. Stanford, Calif.: Center for the Study of Language and Information.
Weisweber, W. 1988. Using constraints in a constructive version of GPSG. In Vargha (1988), 738–43.
Williams, E. 1981. On the notions ‘lexically related’ and ‘head of a word’. Linguistic Inquiry 12, 245–74.
Zeevat, H. 1988. Combining categorial grammar and unification. In Reyle & Rohrer (1988), 202–29.
Zwicky, A. 1985. Heads. Journal of Linguistics 21, 1–29.
Zwicky, A. 1986. German adjective agreement in GPSG. Linguistics 24, 957–90.
Zwicky, A. 1987a. Slashes in the passive. Linguistics 25, 639–69.
Zwicky, A. 1987b. Suppressing the Zs. Journal of Linguistics 23, 133–48.
next page >
Coordinate Structure Constraint 190 coordinating conjunction 176 coordination 175–94 co-vary 78, 115, 166–7
Corbett, G. 193
Crain, S. 33, 58
Danish 115
daughter 7
deep structure 3
Definite Clause Grammar 55
degree word 75
dependency grammar 15
de Roeck, A. 206
Di Sciullo, A. 28, 62
domination 7
domain 127
Dryer, M. 32, 48
Dutch 159
Earley algorithm 208
ellipsis 175
Emonds, J. 19, 41, 80, 143
empty 39
  string 151
  category 169–72, 197
endocentric 16
equi 124–6
Everett, D. 33
exclamative 167
Exhaustive Constant Partial Ordering 35
expletive 116–20
extension 65, 67–8, 181, 209
extraposition 141–5
Falk, Y. 32
FCR, see Feature Co-occurrence Restriction
feature 16–17, 37–45, 53–6
  Boolean 38
  construction 40, 198
  control 114, 154
  Co-occurrence Restriction (FCR) 68–70
  foot 103–8, 150
    Principle (FFP) 99, 103–8
  head 53, 138, 150
    Convention (HFC) 55–60, 64–5, 69–70, 176–9, 181–2
  morphosyntactic 40, 198
  Specification Default (FSD) 71–2, 77–8
FFP, see Foot Feature Principle
Fijian 188
Fisher, A. 208, 211, 215
Fodor, J. 33, 58
foot feature, see under feature
formalization 4
formula 21
free feature specification 68, 165
free word order 28
French 43, 76, 92, 115, 134, 173, 188, 193
fronted element 147
FSD, see Feature Specification Default
function 21, 96, 105
  application 21
functor 60, 114
gap 149–156, 170–3, 190–1
gapping 193
Gazdar, G. 3, 5, 30, 35, 36, 40, 47, 52, 63, 71–2, 73, 77–9, 84, 87, 98, 108, 111, 115, 117, 124, 128, 129, 133–4, 136, 144, 146, 167–8, 173, 192, 194, 195, 203, 206, 211, 212, 214
generative capacity 34
generative grammar 2
genitive, see possessive
German 33, 47, 62, 76, 92, 128, 134, 167, 172
Gil, D. 14
Goodall, G. 192
government 32, 128
Government-Binding 33, 41, 44, 78, 80, 124–5, 192, 201, 215–6
Greek 29
Greenbaum, S. 5, 13, 92, 98, 111, 194
Greenberg, J. 31
Grimshaw, J. 198
next page >
super- 201 Lexicalist Hypothesis 200 lexical ID-rule 69, 156 lexical rule 200–2 lexicon 8, 73, 199–203 Liberman, M. 29, 86 Lightfoot, D. 31–2 linear precedence statement 25–30, 40, 48, 58, 76, 187 lists 46 LOC 44 local 10, 47, 196 tree 35 Ludlow, P. 4 Lyons, C. 107 Lyons, J. 61 McCawley, J. 5, 90, 94, 96–8, 111, 129, 175, 194
possessive 101 psychology 3 Pullum, G. 23, 36, 52, 59–60, 63, 79 Pulman, S. 139, 196–8 Q 95 Quirk, R. 5, 13, 92, 98, 111, 194 R 97 Radford, A. 5, 23, 28–9, 36, 51–2, 98, 193 raising 124–6, 140–1 Randall, J. 84 RE 115 recursive rule 10, 12, 58 reflexive pronoun 126–7 relational adjective 176 relative clause 99–102, 105–6, 147–64 root node 7
< previous page
page_226
next page >
< previous page
page_227
S 40–3
Sag, I. 48, 181, 184, 187, 192–3, 194
Scalise, S. 199
selectional restrictions 114
Selkirk, E. 28, 61, 200
Sells, P. 5, 35, 52, 63, 73, 111, 129, 146, 173, 194
semantics 20
sentential subject 141–3
Shieber, S. 35, 48, 174, 208, 210–11, 214
sister 7
skeleton grammar 211
SLASH 150–67
slash category 150, 170
Slash Termination Metarule 1 152
Slash Termination Metarule 2 158
Smith, D. 31
Smith, N. 5
Sobin, N. 159
Somers, H. 9
Spanish 169
SPEC 39
Sproat, R. 29, 86
Starosta, S. 201
Steurs, F. 5
SUBCAT 45–50, 68–9, 79–80, 89–90, 145, 152, 176, 187, 203–4
subcategorization 8, 43–50, 60, 66, 75–6, 94, 200
Subcategorization Principle 204
SUBJ 42, 183
subject 7
supportive do 123
surface structure 3
Szabolcsi, A. 201
target 113–4
term 21
there-constructions 116, 117–8, 202
topicalized 167
Tough movement 168–9
trace 154
transformational grammar 2, 172–3, 206
transformational rule 2, 145–6, 200
Trask, R. 5, 92
tree 7
unbounded dependency 147–73, 189–91
underspecification 136, 207
unification 3
universals 30
Uszkoreit, H. 33, 167, 172
value 38
van der Auwera, J. 146
variable 54–5, 79–80, 114, 176
VFORM 39, 89, 131–6, 197–8
VP 56
  -Preposing 88
Warner, A. 80, 167, 178, 187, 192, 195
Warren, D. 55
Wasow, T. 5, 139, 200, 201, 215
Weinberg, A. 215
Weisweber, W. 211
Welsh 115
WH 148–9
WHMOR 158
wh-question 92, 147–8, 164–5, 189–91
Williams, E. 28, 61–2
word order 31, 48
X-bar theory 15–20, 31–2, 38, 58–60, 201
yes-no questions 150
Zeevat, H. 203
Zwicky, A. 40, 61–2, 63, 108–9, 128, 196–8, 200