A Glossary compiled by Kleanthes K. Grohmann, intended as a companion for: N. Hornstein, J. Nunes, and K. K. Grohmann, Understanding Minimalism. Cambridge: Cambridge University Press, to appear [expected: July 2005]
May 2005
Appendix: Glossary
The Glossary consists of two parts, Concepts and Definitions (the latter given in two versions). Concepts puts the main theoretical notions of the Minimalist Program into perspective via a short paragraph on the relevant term. All terms are listed alphabetically and each entry is followed by chapter and/or section numbers referring to the most prominent parts of the book where the term is introduced or otherwise discussed at length. Definitions (complete) lists all formal definitions presented throughout the book, both from the Minimalist Program and from earlier stages of the Principles-and-Parameters Theory. The order of these definitions corresponds to the points in the book where they are discussed, which are indicated for each entry. Definitions (minimalist) restricts the attention to definitions relevant to minimalism; this list is ordered alphabetically. Boldfaced items in Concepts refer to other terms described; boldfaced and italicized terms followed by a parenthesized number-letter combination link to an item found in Definitions (complete).
A. Concepts

adjunct An adjunct is a linguistic expression that is adjoined to a particular structure — by assumption, to a maximal projection (XP), merged after all specifiers. (In earlier frameworks, adjunction to an intermediate projection level, X’, was also sometimes assumed.) Adjuncts are not theta-marked and include time and manner adverbs or adverbial clauses, for example. See also bare phrase structure, specifier, X’-Theory. [sects. 4.4, 5.4, 7.3, 8.3, 10.4 — chap. 6]

adjunct-head This is a relation between a head and an adjunct, adjoined to a projection of the head. For more local relations, see head-complement, head-head, specifier-head. [sect. 2.3.2]

Agree Morphosyntactic (or formal) features must be licensed or checked by the computational system of human language (CHL) in the course of the derivation. In earlier instantiations of the Principles-and-Parameters Theory (P&P), such as Government-and-Binding Theory (GB), these features were not as prominent as in the Minimalist Program; there were licensing relations and configurations (such as government (4A, 4B, 4D)), but no full-fledged theory of features in CHL (viz. checking
theory). In early minimalism (Chomsky 1993, 1995), specifier-head configurations were argued to be the only admissible checking relations (at least in the overt syntax; see Move F), and movement was motivated by the need to check off (uninterpretable) features in such a structural configuration, where the head bore the same features as the element moving into its specifier (checking by identity in a local relationship). In more recent minimalist explorations (Chomsky 2000, 2001, 2004), the operation that checks uninterpretable features is called Agree. This is feature valuation at a distance, through a c-command relation holding between a higher head whose uninterpretable features must be checked (the probe) and the element against whose interpretable features they are checked (the goal). Agree then supersedes the original motivation for movement (feature checking), replacing it with an internal system of formal licensing (Agree), coupled with the ability of CHL to express displacement through an extended understanding of the Extended Projection Principle (EPP). [sect. 9.4.3 — chap. 10]

agreement The minimalist introduction of a (formal feature) checking theory generalizes agreement phenomena as found in many languages, holding, for example, between the subject and the finite verbal element. See Agree for a particularly clear illustration. In this sense, agreement is the term for a formal relation holding of two linguistic expressions in a particular structural configuration (such as specifier-head or c-command). [sect. 2.3 — chaps. 4-5]

architecture Grammar as understood in the Minimalist Program is a property of human beings and, since we speak of Universal Grammar (UG), should have a place in our minds/brains. Since, to simplify somewhat, language is the pairing of sound and meaning, our minds/brains must have the means to express this pairing. Sound (or its gestural counterpart as in sign language) is processed by the articulatory-perceptual (A-P) or sensorimotor system. This system must then connect somehow with the phonetic interface level, Phonetic Form (PF). Meaning arises as part of the conceptual-intentional (C-I) system, which connects to the grammar via Logical Form (LF). On the other end of the spectrum we have the lexicon, a collection of lexical items and functional categories. The syntax is the machinery that takes out these lexical items and assembles them in such a way as to create interpretable expressions at the PF- and LF-interfaces, respectively. This much is clear for the rough architecture of the grammar. Some technical issues involve the operation Spell-Out (e.g., When and how often does it apply?), the expression of displacement (Is movement only overt or can it also apply covertly?), and so on. In GB, this architecture was taken to be highly modular, with a variety of modules making up the grammar. We argued in the first half of this book that such a modular conception need not be the ideal way to represent the grammar. However, the full details of the architecture of the grammar are still up for grabs. See also The GB T-Model of the Grammar (2A), A Minimalist T-Model of the Grammar (2D). [sects. 2.2, 2.4, 9.4, 10.4 — chap. 1]

argument structure Every predicate takes arguments, and the structure a particular predicate and its argument(s) make up is called “argument structure” — from a syntactician’s point of view, this refers to the phrase structure involved and to the position each participant fills.
As far as arguments of verbal predicates go, we typically distinguish external arguments (more often than not, the subject) from internal arguments (usually, direct object and indirect object). Arguments are theta-marked by
their predicate. The different verbal predicates are ditransitive (taking one external and two internal arguments, such as give), transitive (taking one external and one internal argument, like like), and intransitive (taking one argument). The latter class is further refined by the split unergative (where the sole argument is external: run) vs. unaccusative (where it is internal: appear). Internal arguments are complements of the lexical verb V, external ones specifiers of the light verb v (depending on one’s view of VP-shells; see also VP-structure). Of course, predicates are not restricted to verbal elements; adjectives and nouns, for example, can function as predicates too, in which case we usually speak of small clauses (whose phrase-structure status is anything but agreed upon). See also Predicate-Internal Subject Hypothesis (3A), VP-Structures (3B, 3C). [sect. 10.3.3 — chap. 3]

array see numeration, subarray

articulatory-perceptual (A-P) system This is also known as the sensorimotor system and, in a very simplified view, refers to the part in our minds/brains that processes sounds (for spoken language) and gestures (for sign language) for both production and comprehension of language. By assumption, it’s directly interfaced with the language component (the language faculty) and as such fed by PF. This is an idealized picture for theoretical purposes of some hypothesized components in our brains; currently, there is no conclusive neurological evidence beyond mere speculation. See also architecture and conceptual-intentional (C-I) system. [sects. 1.3, 1.5, 2.3.1.6 — chap. 7]

asymmetric c-command The asymmetric relation in c-command, introduced by Kayne (1994), holds if one element c-commands another, but that other doesn’t c-command the first, i.e. if of two syntactic objects only one c-commands the other. It has been argued that the LCA (Linear Correspondence Axiom (7B, 7C)), defined through asymmetric c-command, is needed for linearization. Recent work also involves a reformulation of where the LCA actually has to apply — whether at each step of the derivation, or only at the interfaces, in particular PF (see e.g. Moro 2000 and Nunes 2004 for discussion). [sect. 7.3]

bare phrase structure First introduced in Chomsky (1994), bare phrase structure takes the place of (the GB-module of) X’-Theory, which in turn had replaced PS-rules (explicit rules specifying particular instances or possibilities of phrase structure) and which has undergone considerable reduction in size and power, just as all other modules. Rather than having a separate module dedicated to representing structure, bare phrase structure is an elegant (because simple) part of CHL that builds structure. Linguistic expressions are not formed by inserting lexical items and functional material on a preformed structural skeleton. Under a bare phrase structure approach, CHL manipulates lexical and functional items taken from the lexicon (via the numeration or lexical subarray) through successive applications of the two independently needed operations Copy and Merge. Linguistic expressions are thus built bottom-up, and the labels that were a crucial ingredient of X’-Theory (see also bar-level) no longer play the role they used to. In particular, there is no a priori need for intermediate projection levels (X’). Projection levels are either minimal (Xmin, i.e. a head X0 in the usual case) or maximal (Xmax, i.e. a full phrase XP).
Former intermediate projection levels are neither minimal nor maximal, while lexical items that don’t project any further are both minimal and maximal at the same time (Xmin/max; for example, proper names). [sect. 6.3]
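The relational notion of projection just described lends itself to a small worked example. The following is a minimal, illustrative Python sketch (all class and function names are ours, not part of the book's formalism): structure is built bottom-up by Merge, and "minimal"/"maximal" status is read off the configuration rather than stipulated by bar-levels.

```python
# Toy model of bare phrase structure: projection status is relational,
# not encoded as X0 / X' / XP labels.  Illustrative sketch only.

class SyntacticObject:
    def __init__(self, label, left=None, right=None):
        self.label = label          # the head that projects
        self.left = left            # daughters (None for lexical items)
        self.right = right
        self.mother = None
        if left is not None:
            left.mother = self
            right.mother = self

    def is_minimal(self):
        # A lexical item (nothing merged inside it) is a minimal projection.
        return self.left is None

    def is_maximal(self):
        # An object is maximal if it does not project any further,
        # i.e. its mother (if any) has a different label.
        return self.mother is None or self.mother.label != self.label


def merge(alpha, beta, projector):
    """Merge two syntactic objects; the projector supplies the label."""
    assert projector in (alpha, beta)
    return SyntacticObject(projector.label, alpha, beta)


# "the boy": D projects over N.
the, boy = SyntacticObject("the"), SyntacticObject("boy")
dp = merge(the, boy, projector=the)

print(boy.is_minimal(), boy.is_maximal())   # True True  -> both minimal and maximal
print(the.is_minimal(), the.is_maximal())   # True False -> a head that projects
print(dp.is_minimal(), dp.is_maximal())     # False True -> a full phrase
```

On this toy encoding, a non-projecting lexical item such as boy comes out as both minimal and maximal, mirroring the bare phrase structure entry above.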
bare output conditions Some conditions exist because they are (virtually) conceptually necessary — others exist because they have an effect on the output. The output is the interface between narrow syntax and interpretive components, i.e. the levels of representation, LF and PF, or simply interfaces. (This is why bare output conditions from Chomsky (1995) were renamed interface conditions in Chomsky (2000).) [sects. 1.3, 1.5, 8.2.2]

bar-level In X’-Theory the bar-level indicates the projection status within a phrase: zero-bar indicates a head or minimal projection (X0/Xmin), one-bar the intermediate projection (X’), and two-bar (though not in the original proposal) the full, maximal projection (XP/Xmax). In bare phrase structure, bar-levels are determined functionally, as opposed to a pre-fitted three-bar-level system. [sects. 6.2-6.3]

binding For certain dependencies, simple c-command or feature checking is not enough. Reflexives must be locally bound by their antecedents, for example, while pronouns must be free in this local domain and referential expressions must be free everywhere (not bound at all). GB had a separate module devoted to the specifics of binding, Binding Theory, which also subsumed parts of Theta Theory (in that different types of traces had to be bound in a specific way). Minimalism aims to replace such a component of the grammar with more conventional means (namely c-command and/or feature checking). Among other things, reconstruction is one phenomenon of utmost importance for a minimalist formulation of binding. See also Binding Theory Principles A and B (4H), Binding Theory in GB (8A), Domain (8B), Binding (8C), Binding Theory in Minimalism (8E). [sects. 2.3.1.2, 3.2.3.3 — chap. 8]

Binding Theory This is the module from GB-times that dealt with the licensing of binding relations. [chap. 8]

Case Theory This module of GB was responsible for licensing Case-marking. (As a convention, concrete morphological case, such as nominative or accusative, is spelled with a lower-case ‘c’, while abstract Case, referring to Case-features, is spelled with an upper-case ‘C’.) With the abolishment of government as a critical structural configuration, filters of all sorts (such as the Case Filter), etc., there was no need to postulate a separate module in the language faculty for the licensing of Case. The relevant structural relations (Case environments) boil down to specifier-head configurations for all structural Case: nominative case for the subject (where the Case-feature is checked in [Spec,IP] or [Spec,TP]) and accusative case for the direct object (where the Case-feature is checked either in [Spec,AgrOP] or in the outer [Spec,vP]), for example. This approach can also be extended to so-called “exceptional” Case-marking environments, where the subject of an embedded non-finite clause checks its Case-feature in the matrix clause (Spec-head). Case-marking PRO with null Case (which only non-finite Infl bears) was an innovation first proposed by Chomsky and Lasnik (1993) and offers one possible way to eliminate the PRO Theorem (4G). [sects. 2.3.1.1, 5.4 — chaps. 4, 9-10]

c-command Defined under C-Command (7A), this important structural relation — itself not a minimalist innovation at all (see Reinhart 1976, and even Ross 1967 already for early formulations)! — ensures that one expression can actually “see” another in order to bind or otherwise license it.
One general licensing condition is the deletion of traces or copies for PF-reasons, which can be handled by the Linear Correspondence Axiom (7B, 7C), making use of the notion of asymmetric c-command. See also Agree, movement, Move, Move F. [chap. 7]
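Since c-command and asymmetric c-command are defined purely configurationally, they are easy to make concrete. Here is a minimal Python sketch over a toy binary tree (the class and helper names are ours, purely for illustration), using the familiar formulation that a node c-commands its sister and everything its sister dominates:

```python
# Toy binary tree with a c-command check.  Illustrative sketch only.

class Node:
    def __init__(self, name, children=()):
        self.name, self.children = name, list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

    def dominates(self, other):
        return other is self or any(c.dominates(other) for c in self.children)

    def sister(self):
        if self.parent is None:
            return None
        return next(c for c in self.parent.children if c is not self)


def c_commands(a, b):
    s = a.sister()
    return s is not None and s.dominates(b)

def asymmetrically_c_commands(a, b):
    return c_commands(a, b) and not c_commands(b, a)


# [TP [DP the boy] [T' T [VP left]]]
the, boy = Node("the"), Node("boy")
dp = Node("DP", [the, boy])
t, vp = Node("T"), Node("VP", [Node("left")])
tbar = Node("T'", [t, vp])
tp = Node("TP", [dp, tbar])

print(c_commands(dp, vp))                  # True: DP c-commands into T'
print(asymmetrically_c_commands(dp, vp))   # True: VP does not c-command DP back
print(c_commands(boy, t))                  # False: 'boy' is buried inside DP
```

Asymmetric c-command is then simply c-command without the reverse relation, which is what the LCA exploits for linearization.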
chain A dependency between two positions created by movement must somehow be marked to signal the identity of the two copies involved. In GB, the notion of a chain was introduced to establish a formal relationship between a moved element and its trace (and extended to CHAIN for non-movement expletive-associate relations). At least in some strands of minimalism, the notion of chain was taken over to the copy theory of movement. A chain holding of two copies must be reduced by way of deleting one of the two copies for purposes of linearization at PF (by default the lower one, as it bears fewer checked features than the higher one; see Nunes 2004 for a particularly clear exposition). (Note that, since movement is recast in terms of features in minimalism, the expletive-associate relation can be expressed by feature movement; see also Move F.) [sects. 2.3.1.1, 4.1, 4.3.4, 5.4, 6.4, 7.5, 8.3.1]

checking This is the name for licensing formal or morphosyntactic features. See also checking theory. [sects. 2.3, 5.4-5.5 — chaps. 4, 9]

checking theory One of the original goals of minimalism was to reduce the GB machinery. One crucial aspect was the successful abolishment of government and its replacement with a mechanism that captures the empirical facts handled by government and more. This mechanism was checking theory, which deals with licensing morphosyntactic features. The specifier-head configuration became a structural relation that was generalized to all feature checking environments. [chaps. 4, 9]

complement Every head must combine (Merge) with another syntactic object to form a projection. The element a head merges with is the complement. In GB, complements had a slightly stronger meaning in that they were a formal ingredient of X’-structure. Under a bare phrase structure approach, there is no need for this term, but we use it nevertheless. One reason we still use it is that it carries with it certain notions of selection, the fact that heads don’t just take anything for a complement, but one that fits or matches. See also adjunct, specifier. [sects. 6.2-6.3]

computational system of human language (CHL) By hypothesis (viz. Universal Grammar), the language faculty is a part of the human mind/brain dedicated to language processing. Within this faculty, CHL is the component that is fed by the lexicon with lexical and functional items to manipulate, and that in turn accesses the conceptual-intentional (C-I) system and the articulatory-perceptual (A-P) system with well-formed linguistic expressions by a speaker (interpretable in sound and meaning) — and the other way around, presumably, for the hearer (a schematic sketch of this pipeline is given below). See also architecture, derivation, The GB T-Model of the Grammar (2A), A Minimalist T-Model of the Grammar (2D), The Computational System under the Move-F Approach (9C), The Computational System under the Agree Approach (9D). [sects. 1.3, 2.3.1.6, 2.3.2.4, 2.3.2.6, 2.4, 9.3.2 — chap. 10]

conceptual-intentional (C-I) system Simplifying somewhat, the C-I system refers to the part in our minds/brains that deals with meaning (concepts, intentions, and thought) for both production and comprehension of language. By assumption, it’s directly interfaced with the language component (the language faculty) and as such fed by LF. This is an idealized picture for theoretical purposes of some hypothesized components in our brains; currently, there is no conclusive neurological evidence beyond mere speculation.
See also architecture and articulatory-perceptual (A-P) system. [sects. 1.3, 1.5, 2.3.1.6, 8.3.3]

conceptual necessity see (virtual) conceptual necessity
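As announced in the CHL entry above, here is a deliberately crude Python sketch of the overall pipeline described under architecture and CHL: items are selected from the lexicon into a numeration, the numeration is used up in the narrow syntax, and Spell-Out splits the result between the PF- and LF-branches. All names are ours, and the single application of Spell-Out is only one of the architectural options discussed in the book, so this is an illustration, not a claim.

```python
# Caricature of the minimalist T-model: lexicon -> numeration -> narrow
# syntax -> Spell-Out -> (PF, LF).  Purely illustrative.

LEXICON = {"the", "boy", "left", "T", "v"}

def select(words):
    """Build a numeration by copying items from the lexicon."""
    assert set(words) <= LEXICON
    return list(words)                      # the lexicon itself is not depleted

def narrow_syntax(numeration):
    """Stand-in for successive applications of Copy/Merge."""
    derivation = []
    while numeration:
        derivation.append(numeration.pop()) # the numeration is used up
    return derivation

def spell_out(derivation):
    """Split the object: phonological information to PF, the rest on to LF."""
    pf_branch = list(derivation)            # shipped towards the A-P system
    lf_branch = list(derivation)            # shipped towards the C-I system
    return pf_branch, lf_branch

numeration = select(["the", "boy", "left", "T", "v"])
pf, lf = spell_out(narrow_syntax(numeration))
print(pf, lf, sep="\n")
```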
converge A derivation is said to converge (i.e. yield a well-formed interpretation at both interface levels) if it satisfies Full Interpretation; otherwise it crashes. [sect. 1.5]

Copy In current minimalist approaches, movement is not a single operation (Move), but a composite relation made up of (at least) Copy and Merge. Copy is then the operation that duplicates a linguistic expression. One way to rationalize the independent need for Copy is to consider the transition of a lexical item from the lexicon to CHL, i.e. into the derivation: when the numeration is filled with a lexical or functional item, that item doesn’t get deleted from the lexicon; rather, it is copied from the lexicon into the numeration, which then gets depleted in the course of the derivation. [sects. 6.4, 7.5, 8.3, 9.4]

copy A copy is any occurrence of a syntactic object in the structure. In the course of a derivation this object may move, but it may be interpreted in its original base position (or an intermediate position). To ensure interpretation of a linguistic expression in a position other than its surface position, it leaves behind a copy. See also Copy, copy theory of movement. [sects. 2.2.2.2, 5.5, 6.4, 7.3, 7.5, 8.3, 9.4.2, 10.4]

copy theory of movement In generative approaches, the displacement property of human language is best expressed in terms of movement. As is well known from reconstruction effects and other licensing properties — both in GB (through such modules as Theta Theory, Case Theory, and so on) and in the Minimalist Program (e.g., checking theory, features) — movement leaves behind something. In GB this “something” was a trace. Minimalist approaches assume that movement involves Copying a linguistic expression and then (re-)Merging it later. The two copies are identical to one another in most respects, except that the re-Merged copy has a fuller set of checked features. In this sense, movement leaves behind a copy that has to be deleted at a later point for PF-reasons (see the illustrative sketch below). The copy theory of movement as proposed by Chomsky (1993, 1995) was picked up and extended by Nunes (1995, 2004) and then further developed by Hornstein (2001), Grohmann (2003), and numerous references cited in these works. [sects. 2.3, 6.4, 8.3]

covert In the original P&P Theory (such as GB), LF was the covert component of CHL or the derivation. Covert means not overt, and thus denotes operations that take place after S-structure (so that the surface position, the one we see/hear, does not change). In minimalism, covert refers to all operations that take place after Spell-Out. The details regarding the covert component depend heavily on the exact architecture of the grammar. [sects. 2.3, 8.3.1, 9.4]

crash A derivation that does not converge does not yield a well-formed interpretation at both (or either) interfaces, PF and LF. [sect. 1.5]

cycle The cycle refers to another old observation from the early days of (generative) grammar. The theory-independent idea behind a cycle is that some operations may not take place after certain others have applied; how to express this, which “certain others” are relevant, and whether the cycle is the only relevant notion, are different and certainly theory-dependent issues which we don’t take a stand on. (See e.g. Freidin (1999), Svenonius (2001) for some discussion.) Minimalist takes on the cycle abound (see e.g. Chomsky 1995: 233 on the cyclicity of strong features).
To mention just two recent approaches, in the phase-model of Chomsky (2000, 2001, 2004), each phase constitutes a separate cycle, while in the model proposed by Grohmann (2003), dynamic sub-parts of the derivation called Prolific Domains do the job. [sects. 2.3.2.2-2.3.2.4, 2.4, 8.2, 9.4, 10.4]
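The chain and copy theory of movement entries above describe movement as Copy plus re-Merge, with the lower copy deleted for PF. Here is a minimal Python sketch of that idea; the function names and the flat list representation are our simplifications for illustration only.

```python
# Copy theory of movement, caricatured: 'movement' = Copy + re-Merge;
# at PF only the highest copy of a chain is pronounced.  Illustrative only.

def copy_and_remerge(structure, item):
    """Re-merge a copy of `item` at the root (left edge) of the structure."""
    assert item in structure
    return [item] + structure               # both occurrences form a chain

def linearize_for_pf(structure):
    """Pronounce each chain only once: keep the first (highest) copy."""
    pronounced, seen = [], set()
    for occurrence in structure:
        if occurrence not in seen:
            pronounced.append(occurrence)
            seen.add(occurrence)
    return pronounced

# 'what did John buy <what>' -- object wh-movement, schematically
base = ["did", "John", "buy", "what"]
moved = copy_and_remerge(base, "what")
print(moved)                    # ['what', 'did', 'John', 'buy', 'what']
print(linearize_for_pf(moved))  # ['what', 'did', 'John', 'buy']
```

The point of the toy linearize_for_pf step is simply that each chain is pronounced once, in its highest position, as in the default chain reduction mentioned under chain.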
Delete The residue of movement, taken to be copies in the copy theory of movement of the Minimalist Program (as opposed to traces from earlier frameworks, such as GB), must disappear when the derivation is shipped off to the PF-interface (the level of representation that feeds the A-P system) — after all, we pronounce a given linguistic expression only once, not at each position of its derivational history. In earlier versions of minimalism, a separate operation Delete was assumed, possibly even as part of a more general complex that includes Form Chain, where the resulting chain then has to be reduced. It’s not clear, however, that Delete must be assumed to be a distinct operation. It is quite conceivable that deletion of unpronounced material is a default strategy of PF to create an interpretable output. It might even be motivated by (and thus be a part of) linearization. The other use of deletion refers to features: uninterpretable features, [–interpretable] in our notation, must be checked and removed from the derivation. [sects. 6.4, 7.5, 8.3.1 — chap. 9]

derivation To create well-formed linguistic expressions interpreted at all interfaces (the pairing of sound and meaning), lexical expressions have to be put together. These (as well as certain functional material) are housed in the lexicon, from where they are copied into a numeration (or lexical subarray). The way from such a lexical array (to use a neutral term) to PF on the one hand (feeding the A-P system) — presumably after Spell-Out applies — and LF on the other (feeding the C-I system) is the derivation. The entire machinery generating linguistic expressions is also referred to as the computational system of human language (CHL). Much debate has been devoted to whether the system is indeed derivational or representational; in this book, we adopt the former viewpoint, but the issue is anything but solved. See also The GB T-Model of the Grammar (2A), A Minimalist T-Model of the Grammar (2D), The Computational System under the Move-F Approach (9C), The Computational System under the Agree Approach (9D). [sects. 1.3, 1.5 — chaps. 2, 5-7, 9-10]

derivational economy see economy

displacement Human language has the property of (sometimes) interpreting linguistic expressions in a different position from the one in which they appear or are pronounced. We classify this displacement property as one of several “big facts” of human language (or rather Universal Grammar, which we are trying to adequately describe and explain). [sect. 1.3]

Domain This is one of the terms in linguistic theory that has been used as often as it has meanings or contexts in which it applies. A domain is any sub-part of a structural representation (or derivation) that is relevant for a particular phenomenon to be captured. In GB, Binding Theory made use of a specific domain for binding to apply and be licensed. In early minimalism (Chomsky 1993, 1995), the notion of domain was introduced to define checking theory. For this type of domain, see also Minimal Domain of α (5B), Extended Minimal Domain (5C). [chaps. 3-5]

D-structure The CHL in GB contained four levels of representation: the interface to the A-P system (PF), the interface to the C-I system (LF), the level at which surface phenomena could be observed and linearized for pronunciation (S-structure), and the level that was fed directly by the lexicon.
D-structure was the name for the latter, historically derived from “deep” — although there is nothing “deep” about D-structure other than being the so-called underlying representation (Chomsky 1965). With the advent of the Minimalist Program (Chomsky 1993), the levels of representation were reshuffled
and out came the winners, LF and PF, since they are the only levels that are (virtually) conceptually necessary (for the pairing of sound and meaning). In addition, conditions applying at these levels or components of the grammar are bare output conditions. D- and S-structure don’t have either property: they are theory-internal constructs (hence not conceptually necessary in any way) and they don’t feed the interfaces or have any effect on the output (hence cannot elevate conditions applying at them to the status of bare output conditions). [sects. 2.2.2.1, 2.3.2, 8.2]

economy The approach to minimalism sketched in this book takes the notion of economy very seriously — so seriously in fact that we introduced two types of economy. Call one methodological economy and the other substantive economy. Economy measures of the first type comprise the familiar methodological “Occam’s razor” sort of considerations that relate to theoretical parsimony and simplicity: all things being equal, two primitive relations are worse than one, three theoretical entities are better than four, and so on. In short: more is worse, fewer is better. Substantive economy refers to instances where a premium is placed on least-effort notions as natural sources for grammatical principles. The idea is that locality conditions and well-formedness filters reflect the fact that grammars are organized frugally to maximize resources. Short steps preclude long strides (i.e. Shortest Move), derivations where fewer rules apply are preferred to those where more do, movement only applies when it must (i.e. operations are greedy), and no expressions occur idly in grammatical representations (i.e. Full Interpretation holds) — exponents of representational economy, as we dubbed it in the beginning of the book. These substantive economy notions generalize themes that have consistently arisen in grammatical research, and it is this type of thinking that leads to the comparison of derivations or to considering applications of Merge to be preferred over Move, to mention just two examples we have discussed as exponents of derivational economy towards the end of the book. [chaps. 1-2, 9-10]

endocentricity Within X’-Theory, every head projects a phrase and all phrases have heads. The same principle of endocentricity is derived rather naturally in bare phrase structure, where a linguistic expression is built bottom-up. The opposite state of affairs, where a projection is exocentric (without a head that projects), was assumed to occur in grammar as well prior to the arrival of X’-Theory. In (Extended) Standard Theory, all the way to early GB (Chomsky 1981), sentences were headed by S (later Infl) — an exocentric node immediately containing subject and predicate; likewise, embedded clauses or clauses that employ a peripheral position beyond the subject were S’ (then Comp), which was not an intermediate projection of S. Chomsky (1986) first presented a phrase structure in which all projections are endocentric, replacing S/Infl by IP and S’/Comp by CP. (The only exocentric node that survives in some current versions of minimalism is the small clause SC.) [sect. 6.2.1]
Enlightened Self-Interest see Greed

equidistance see Equidistance (5D, 5E, 5F)

exocentricity see endocentricity

Extended Projection Principle (EPP) In GB, the observation that sentences need subjects was captured by the EPP (Chomsky 1981), an extension of the Projection Principle — “Representations at each syntactic level […] are projected from the lexicon, in that they observe the subcategorization properties of lexical items” (Chomsky 1981: 29). In minimalist approaches to CHL the function of the EPP was considerably extended, and
in the recent phase-based framework it even marks the property of a head to have its specifier filled. [sects. 2.2.4, 2.3.1.3, 3.2.2, 3.3.1, 4.3.1 — chaps. 9-10]

Extension Condition see Extension Condition (2C, 4F, 8F, 8G, 9A, 9E)

feature There is no doubt that mundane grammatical properties, such as Case or phi-features, have to be encoded abstractly for UG. The way this is done is similar to other types of features that have been assumed in linguistic theory for a long time: phonetic or phonological features that give instructions for an expression’s pronunciation, semantic features that classify inherent interpretive properties, and so on. The formal, or morphosyntactic, features that encode functional/grammatical properties are the core of the syntactic backbone (the derivation in CHL). These features come in one of two types of interpretability: [+interpretable] or [–interpretable]. Interpretable features are interpreted by (one or both of) the interfaces, while uninterpretable ones are not. The latter must be checked in the course of the derivation and removed. For concreteness, we assume throughout that this removal is deletion of the feature. In earlier minimalist versions (Chomsky 1993, 1995), features were separated in terms of strength: strong features are those that would prohibit PF from linearizing and interpreting the linguistic expression formed in the syntax — these had to be checked overtly (and deleted from further computation). Weak features are harmless to PF (by assumption) and had to be checked covertly. [sects. 2.3, 4.3, 5.5 — chap. 9]

feature interpretability see interpretability

Form Chain With the advent of minimalism, the operations representing the displacement property of human language (movement) were inspected from a different perspective. Since the GB-implementation of movement leaving behind traces was not without problems (such as Full Interpretation), copies were proposed to replace them. This gave rise to the copy theory of movement — in the simplest case, the combination of two independently motivated operations, Copy and Merge. In order to mark all positions a linguistic expression holds within its syntactic life (for the interfaces, at least for LF to “find” other instances of a moved element, or for PF to linearize), the old notion of a chain was resurrected. Form Chain is then the operation that follows Copy and (re-)Merge — and which is followed by some form of “chain reduction” (such as Delete, which usually targets the lower copy). Apart from classic references like Chomsky (1995) and Nunes (1995), see Hornstein (2001), Grohmann (2003), and Nunes (2004) for more recent discussion. [sect. 8.3.1]

Full Interpretation This is the requirement that all the features of the pair (π, λ) — where π represents the object formed at (or shipped to) PF and λ the one at LF — be legible at the relevant interfaces. If π and λ are legitimate objects (i.e. they satisfy Full Interpretation), the derivation is said to converge at PF and at LF, respectively. If either π or λ doesn’t satisfy Full Interpretation, the derivation is said to crash at the relevant level of representation. [sects. 1.3, 1.5]

generative (grammar) A grammar is called generative if it generates all the acceptable linguistic expressions (sounds, words, phrases, sentences) in a given language (GL).
In terms of (capitalized) Generative Grammar, we refer to the approach to natural language starting with Chomsky (1955, 1957), which became known as the “Standard Theory” (up to Chomsky 1965) and which then developed into Extended Standard Theory (EST; Chomsky 1970, 1973, 1977), before taking the form of the Principles-and-Parameters Theory (P&P; Chomsky 1981, Chomsky and Lasnik 1993), of which
Government-and-Binding Theory (GB; Chomsky 1981, 1982, 1986a, 1986b) was the most developed formulation. Chomsky (1991) stands out as the paper that paved the way from a GB- to a minimalist view of the grammar, further developed in Chomsky (1993), the “early” period of minimalism. “Classical” minimalism as we use it refers to the work done in the light of Chomsky (1994, 1995), and the most recent modifications to minimalist theorizing derive from Chomsky (2000, 2001, 2004). [chap. 1]

goal A probe is a head with uninterpretable (or [–interpretable]) features and a goal is an element with matching interpretable ([+interpretable]) features, which have to be deleted through checking. In order to have its [–interpretable] features deleted for LF purposes and specified for morphological purposes, a given probe peruses its c-command domain in search of a goal. A goal is accessible to a given probe only if there is no intervening element with the relevant set of features; that is, relativized minimality holds. Furthermore, in order for a goal to be active for purposes of Agree, it must have some [–interpretable] feature unchecked. Once a given element has all its [–interpretable] features checked, it becomes inactive; it’s still in the structure and may induce minimality effects, but it can’t participate in any other agreement relation. The notion of goal was first introduced in Chomsky (2000, 2001). [sect. 9.4.3 — chap. 10]

government This is one of the name-givers for GB. It refers to a specific structural configuration which was argued to license local relations, such as theta-marking, Case-marking, and so on (see Government (4A, 4D, 4E)). Government has been replaced in the Minimalist Program by a variety of independently motivated means: checking theory, coupled with a more local structural relationship (the specifier-head configuration), and a different approach to morphosyntactic features, their role and interpretability in CHL, and agreement, to mention just a few. [sects. 2.2.7, 2.3, 4.2, 4.3.4, 5.2, 8.2 — chap. 3]

governor This is the head that licenses some grammatical property on another linguistic expression through entering the structural relation government. [sects. 2.2.7, 4.3.4, 5.2, 5.5, 8.2.1-8.2.2]

Government-and-Binding Theory (GB) To date, GB, as developed most clearly in Chomsky (1981, 1982, 1986a, 1986b) and a lot of subsequent work, is the most successful theory expressing the Principles-and-Parameters Theory (P&P), the hypothesis that children come equipped with a set of universal linguistic principles and set parameters to finally converge on the adult grammar of the language they acquire. The Minimalist Program, presented in this book, aims to continue this legacy, but sets about it quite differently. For one, it casts doubt on the highly modular structure of GB, where a large set of modules is an integral part of the language faculty (X’-Theory, Theta Theory, Case Theory, Trace Theory, Binding Theory, PRO Theorem, and so on). Another integral part of the GB-system was the four distinct levels of representation that map the relation of lexical items from the lexicon to the eventual pairing of sound and meaning: D-structure, S-structure, Logical Form (LF), Phonetic Form (PF). In this book, we sketched some of the most salient aspects of GB in the beginning of most chapters before presenting a minimalist perspective. See also grammar of a particular language (GL), Universal Grammar (UG). [chaps. 1-4]
grammar A (generative) grammar is a system of rules that generates all acceptable linguistic expressions in a language. This system is the object of our study in formal linguistics (in particular, syntax). In other words, when we study a language, we study
its grammar — and more generally (cf. UG): when we study grammar, we study language. Ergo, grammar is a synonym of language, just with a slight touch of setting it apart from the everyday use of the word ‘language’. [chap. 1]

grammar of a particular language (GL) GL is a convenient shorthand notation to describe the outcome of parameter-setting. Children, born with a universal set of linguistic principles and the capacity to soak up language — a simplification of Universal Grammar (UG) — set out to transform the linguistic input of their specific environment into a grammar that will eventually converge with the adult grammar of the language they are acquiring. See also Principles-and-Parameters Theory (P&P). [sect. 1.2]

Greed The condition Last Resort has been technically implemented either in terms of Greed (Chomsky 1993), according to which movement is licit only if some feature of the moved element is checked/deleted, or in terms of Enlightened Self-Interest (Lasnik 1999), according to which movement is licensed as long as some feature gets checked (and thus deleted), regardless of whether it’s a feature of the moved element or of the target of movement. [sect. 9.3.1]

head see bare phrase structure, X’-Theory, Minimal Projection (6A)

head-adjunct see adjunct-head

head-complement This is a very local relation between a head and its complement. If one doesn’t take θ-roles to be formal features, the head-complement relation seems necessary to express theta-marking (and more generally, i.e. for non-predicative heads, selection). Because these issues are tricky and still largely unresolved, the head-complement relation, like head-head, never received as much attention as the specifier-head relation. [sects. 3.4, 6.2-6.3]

head-head This is a very local relation between two heads, where one head is adjoined to the other. It seemed to be necessary in order to allow checking of two matching features on two adjacent heads that stand in a local relation to one another as the result of head movement (qua adjunction) or incorporation. With the replacement of Move (or rather, Move F) by Agree, the head-head relation loses some importance it never had in the first place. [sect. 6.3.2]

head-specifier see specifier-head

head movement see movement

impenetrability see Phase Impenetrability Condition (10A)

Inclusiveness Condition see Inclusiveness Condition (2E)

interface see level of representation

interface conditions see bare output conditions

intermediate projection see bare phrase structure, X’-Theory, Intermediate Projection (6C)

interpretability Formal or morphosyntactic features are [±interpretable]. Those that are [+interpretable] can be interpreted at the interfaces — either at LF or at PF — and lead to convergence. In other words, the interfaces can do something with this information and process it further (i.e. send the linguistic expression on to the A-P and C-I systems). However, if a feature is [–interpretable], the interfaces can’t do much with it and it has to be deleted (through or after checking) from the derivation (or CHL) before it reaches these levels of representation. If a [–interpretable] feature were to make it to either LF or PF, Full Interpretation would be violated and the
derivation would crash. More generally (beyond mere features), every linguistic expression sent to the interfaces must be interpretable at each interface. [chap. 9]

interpretation Every linguistic expression must be interpreted. This takes place at the interfaces, PF and LF, driven by the principle of Full Interpretation. See also feature, interpretability. [sects. 1.3, 1.5 — chap. 2]

island In his extremely important (and still relevant) study, Ross (1967) characterized a variety of island effects. These arise when a dependency (such as a chain or binding relation) between two positions or linguistic expressions (words, phrases, etc.) cannot be created because one of the two players is inside a structure that cannot be accessed from the outside in a particular way (through a movement operation, for example). These structures are islands, which have been heavily studied ever since Ross’ observation (in all frameworks, from EST to GB). A unified, or separated, characterization of islands in minimalism still awaits the syntactic world, but Starke (2001) and Boeckx (2003) constitute two important recent works. [sects. 2.3.1.4, 2.3.2.5, 5.4.2.1, 5.5, 8.3.5.2, 10.1]

language faculty In generative grammar (since Chomsky 1957, 1964, 1965), the standard way of talking about the capacity of human beings to acquire, use, and produce language is to assume a Universal Grammar (UG). UG is a biological part of humans and as such could have a physical reflex that is genetically endowed. Whether this reflex is indeed physical (in our brains) or only by proxy (in our minds), the going hypothesis is that the language faculty is a part of our minds/brains, a domain specialized for cognitive processes, alongside other faculties (each specialized for things like colors, numbers, vision, and so on). [chap. 1]

Last Resort see Greed, Last Resort (9B)

lexical array see numeration, subarray

lexicon The words of a language must come from somewhere. The guiding hypothesis (in the absence of hard, physical evidence from the brain) is that the language faculty makes available a lexicon which houses all lexical items and functional material in a given language. (Presumably, there is one lexicon for each language a person may know, but even this is not clear.) This mental lexicon interacts with the CHL by copying some desired content into a lexical array (known as the numeration or subarray), which directly enters into the derivation through Select. [sects. 1.3, 1.5, 2.3.2.1, 2.3.2.6, 6.4, 9.3.2, 9.4.3, 10.2, 10.4]

level of representation Characterizing language as the pairing of sound and meaning requires there to be a component that deals with sound and one that deals with meaning. Since the outcome of a derivation is the representation of a linguistic expression, the components dealing with sound and meaning, respectively, must be levels that can interpret these representations. The two levels of representation needed a priori are thus PF (for sound) and LF (for meaning). GB also assumed two additional levels of representation: D-structure to represent the stage of the derivation at the point of starting it (interfacing with the lexicon) and S-structure to represent the stage of the derivation at the point of pronouncing it (cf. Spell-Out in minimalism). Both of these, however, were formal levels in the sense that each was subject to a number of filtering conditions and operations that have to apply at or before this particular level.
The minimalist hypothesis denies the need for such formal levels (and hence for conditions such as X’-Theory or Theta Theory holding at D-structure). [chaps. 1-2]
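Several entries above (interpretability, Full Interpretation, converge, crash) jointly describe one simple well-formedness test: no [–interpretable] feature may survive to the interface levels. Here is a minimal Python sketch of that test; the dictionary representation and the feature names are our illustrative assumptions, not the book's notation.

```python
# Convergence check at the interfaces: a derivation converges only if no
# [-interpretable] feature is left unchecked when it reaches LF/PF.
# Illustrative sketch only.

def converges(lexical_items):
    """Each item carries features tagged '+interp' or '-interp';
    checked features are assumed to have been deleted already."""
    offending = [(item, f) for item, feats in lexical_items.items()
                 for f, interp in feats.items() if interp == "-interp"]
    if offending:
        print("crash: Full Interpretation violated by", offending)
        return False
    print("converge: all surviving features are interpretable")
    return True

# T's [-interp] phi-set was checked (deleted) against the subject; in the
# second derivation the subject's Case feature was left unchecked.
derivation_ok  = {"John": {"phi": "+interp"}, "T": {"tense": "+interp"}}
derivation_bad = {"John": {"phi": "+interp", "Case": "-interp"},
                  "T": {"tense": "+interp"}}

converges(derivation_ok)    # converge
converges(derivation_bad)   # crash
```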
Linear Correspondence Axiom (LCA) see Linear Correspondence Axiom (7B, 7C)

linearization The CHL consists of a derivation which takes the relevant material out of the lexicon (Select) to compute it in the syntax (Copy and Merge). The end product is a linguistic expression that must be interpreted at the interfaces (PF and LF). In order to interpret a string of words at PF (giving us the sound of the expression), all words have to be lined up in a particular way so that they can be pronounced. This does not seem to be relevant at LF, which takes a representation and interprets it compositionally. Since expressions may be interpreted in one place, but pronounced in another (displacement), there is no need at LF for a particular, unique ordering among the expressions. Exactly this, however, is crucial for PF: our sound-producing machinery simply has no way to pronounce several words simultaneously; each item has to be pronounced separately and in succession. This is what linearization does: it linearizes all linguistic sub-expressions (phrases within a sentence and then words within phrases) in an order with which PF can do something (like send information to the A-P system to eventually pronounce it). One popular way of syntactically linearizing strings of words is by requiring each element to asymmetrically c-command the words it precedes. See also Linear Correspondence Axiom (7B, 7C). [chap. 7]

locality It is an old observation that some operations must apply in closer structural relations than others. Intuitively, movement operations are subject to an upper bound on the distance between the two positions they can apply to. But locality is relevant for more than just movement: checking of features must take place locally (in a specifier-head relation or under Agree) and movement itself can come in various guises, each of which is subject to (potentially slightly differing) locality conditions. Traditionally, the distinction was made between head movement (movement of a head), A-movement (movement into an argument position), and A’-movement (movement into a non-argument position). More recently (Chomsky 1995), the operation Move itself was argued to be split into category-movement vs. feature-movement (Move F). The “distance factor” of locality can itself be blurred by intervening elements, such as barriers (in GB), islands, or simply an intervener of sorts (minimality). Recently, the opposite effect has also been examined, namely that movement operations are also subject to a lower bound on the distance between the two positions they can apply to (Grohmann 2003). [sects. 1.3, 3.2.1, 6.2.6, 8.3.2 — chaps. 2, 5, 9-10]

Logical Form (LF) If language is to be understood as the pairing of sound and meaning, the language faculty must have the means to formally express sound and meaning. With respect to language-external expression of language, the A-P system provides the instructions to produce and perceive interpretable sound, and the C-I system provides the instructions to create an interpretable meaning. By hypothesis, the C-I system interfaces with the language faculty through LF. LF itself is the final “goal” of a derivation, in which the lexicon feeds the syntactic computation and creates a legible object (the representation that is interpreted for meaning). At (the level of representation) LF, all interpretable (or [+interpretable]) features are interpreted (i.e.
Full Interpretation holds), all linguistic expressions receive some meaning (compositionally), and various covert operations (such as Move F) take place. It is also at LF that interpretive relations like reconstruction apply. See also architecture, Phonetic Form (PF). [sect. 2.2.2.3 — throughout chaps. 1-10]

Match Whether Match is an operation proper (independently needed and motivated), like
Merge or Copy, is another question we cannot answer in this book; an alternative would hold that Match is simply the result of matching of features in a particular structural configuration (specifier-head, head-head, head-complement, and so on). [sects. 2.3.1.1, 4.3.1, 9.2, 9.4.3.1]

maximal projection see bare phrase structure, X’-Theory, Maximal Projection (6B)

Merge In GB, linguistic expressions received a structural representation that conformed to X’-Theory and was best pictured as a preformatted skeleton of phrase structure into which lexical and functional items are inserted (and subsequently modified through movement). In minimalism, the skeleton-idea was replaced by the more dynamic structure-building view of the computation, where a derivation is built up from bottom to top. The operation that builds structure is Merge, defined over two syntactic objects that are joined together in a binary fashion. Successive applications of Merge (which can only target the root node of existing structures) derive the final linguistic expression. See also Copy, Extension Condition (2C, 4F, 8F, 8G, 9A, 9E). [sects. 2.3.2.1, 6.3.2, 10.3.2]

methodological economy see economy

Minimalist Program (minimalism) The most prominent theory within the P&P-approach to grammar was GB. When it became apparent that despite its success, GB constituted a humungous apparatus of modules, filters, rules, and operations, a minimalist approach to linguistic theory was developed. The primary goal was to keep theoretical assumptions to a minimum, which was enforced by inspecting each assumption against the criterion of (virtual) conceptual necessity and, if it failed, by checking whether it might constitute a bare output condition. (Unless stated otherwise, minimalism refers to the original program (Chomsky 1993) published as Chomsky (1995).) [chaps. 1-10]

minimal projection see bare phrase structure, X’-Theory, Minimal Projection (6A)

minimality see relativized minimality

module The conception of the GB-grammar was an extension of the conception of the language faculty arrived at by Fodor (1975): it consists of a variety of modules that interact with one another. The main modules assumed formed the centerpiece of most of our chapters in this book, such as Binding Theory, Case Theory, PRO Theorem, Theta Theory, Trace Theory, or X’-Theory. Rather than assume such a modular structure, minimalism only assumes theoretical ingredients (assumptions, conditions, operations, tools, and other machinery) that are either (virtually) conceptually necessary or follow from bare output conditions. [sects. 1.3-1.4, 2.2.2.2, 2.2.6]

Move In GB, movement was expressed through a rule Move α: “Move anything anywhere at anytime.” A minimalist approach to displacement requires a trigger for each operation, including instances of movement. The operation Move replaced Move α and became more restricted: in Chomsky (1995), Move was defined as the operation that moves a syntactic object from one position to another where it can check a feature (strong if overt, weak if covert). Around that time, the view crystallized that Move is not a single operation, but the composite of two operations that are (virtually) conceptually necessary, Copy and Merge (Nunes 1995, 2004). See also Agree, Move F, pied-piping. [sects. 1.3, 4.3.2, 6.4, 8.3.5 — chap. 2]

Move F With the rise of checking theory, it became necessary for formal features to be licensed.
In many instances, this results from movement (in a specifier-head relation between two items that bear matching features). However, after careful
consideration of covert licensing possibilities, the view emerged that features can also be licensed in the covert component (at LF). Since Move carries with it the least amount of material it can, the null hypothesis was that formal features can move just by themselves at LF, without any PF-content or -relevance. The variety of Move responsible for bare feature-movement was dubbed Move F (Chomsky 1995), and it replaced covert phrasal movement (and, in some variations on this theme, even overt phrasal movement). See also pied-piping. [sect. 9.4.2]

movement This is the technical term for displacement. One particularly common minimalist definition of movement is the composite of the two primitive operations Copy and Merge (as in the copy theory of movement), as opposed to a single operation Move. Earlier versions of minimalism did assume Move, however, and distinguished phrasal movement from feature movement, Move F. For presentation purposes, we often refer to movement simply as Move, which is then to be understood to be Copy+Merge (possibly accompanied by Form Chain and a chain reduction mechanism such as Delete). One particularly interesting case is head movement, which seemed to be very well understood for the longest time (in GB): adjunction of one head to another, subject to a very strict locality condition (the Head Movement Constraint of Travis 1984). With the emergence of minimalism, and the Extension Condition (2C, 4F, 8F, 8G, 9A, 9E) in particular, head movement lost some of its charm. One way to get around the problem is to assume that head movement doesn’t exist and is just a PF-effect (epiphenomenal, as the result of certain linearization or reordering conditions, for example); see Chomsky (2000, 2001, 2004) for this assumption and some minimal references. Another option is to clarify the technical aspect of head movement, in terms of sideward movement (Nunes 1995, 2004; see also Bobaljik and Brown 1997). [sects. 2.3, 5.4, 6.4 — all chapters, technical issues also in chaps. 7, 9-10]

multiple specifiers While X’-Theory made available a simple and universal schema for phrase structure consisting of one head, one complement, and one specifier (leaving adjuncts aside), bare phrase structure allows for a more dynamic and individual build-up of phrase structure. Recent discussion of multiple specifiers in the C-system (applied to several wh-phrases moving to several [Spec,CP]), including plenty of references, can be found in Richards (2001). Chomsky (1995: chap. 4, 2000) employs an outer [Spec,vP] to replace the unwanted (unwarranted?) AgrOP. In our presentation, we also mentioned the “three-argument problem”, which can be solved under the assumption that heads may make available more than one specifier. [sect. 5.4.2]

narrow syntax The part of CHL that deals with the purely syntactic aspect of the derivation is called narrow syntax: the route from Select to LF. [chap. 10]

necessity see (virtual) conceptual necessity

null Case see Case Theory

numeration The numeration, introduced in “early minimalism” (Chomsky 1993) and kept throughout the “classical period” (Chomsky 1995), is the starting point of every derivation. It is the collection of lexical and functional items selected from the lexicon that is to be used up by sending all items into the derivation. In more recent models (Chomsky 2000, 2001, 2004), the single numeration was replaced by a number of lexical subarrays, each one feeding one phase only. [sects.
2.3.2.6, 2.4, 6.4, 10.4.2]

overt In the original P&P Theory (such as GB), LF was the covert component of CHL or the derivation. What happens before the LF-interface is reached makes up the overt
component. Overt operations are those that take place after D-, but before S-structure. In minimalism, overt refers to all operations that take place prior to Spell-Out. See also architecture, Move, narrow syntax, pied-piping. [sects. 1.3, 2.2.3]

phase With the abolishment of S-structure as a level of representation, a new cut-off point had to be found which divides the overt from the covert part of the derivation. In the phase-based model this is tied to a cyclic derivation with multiple applications of Spell-Out: vP and CP constitute phases, whose complements become impenetrable to further operations, with only the phase edge remaining accessible. See also Phase Impenetrability Condition (10A). [chap. 10]

Phonetic Form (PF) If language is to be understood as the pairing of sound and meaning, the language faculty must have the means to formally express sound and meaning. With respect to language-external expression of language, the C-I system provides the instructions to create an interpretable meaning, and the A-P system provides the instructions to produce and perceive interpretable sound. By hypothesis, the A-P system interfaces with the language faculty through PF. PF is that part of the computation which the derivation sends its information to when it spells out. By (the level of representation) PF, all uninterpretable (or [–interpretable]) features must have been checked (and erased or deleted), otherwise the derivation crashes. See also architecture, Logical Form (LF). [sect. 2.2.2.3 — throughout chaps. 1-10]

pied-piping When categories move, something takes place that ideally shouldn’t: if displacement is expressed by movement, and if movement is triggered by the need of a syntactic object to check a (strong or weak) feature, why would the entire category move, when it contains at most one element that bears the relevant feature? The null hypothesis (in a minimalist fashion, the most economical one) would hold that if a feature cannot be checked where it is located at a given time, it had better move to a position where it can be checked (and subsequently deleted). Pied-piping is the term used for the apparent non-optimal design that human language seems to possess, namely that it is not only features that move. At least, and this is the general consensus, not in the overt component prior to LF. Once the derivation has reached the stage where the operation Spell-Out can apply (that is, once all strong features are checked and the entire representation is sent off to PF), it proceeds to LF. Here, all remaining unchecked features have no effect on PF and thus pied-piping is not necessary any longer. In this sense, pied-piping is a condition on the computation enforced by PF. [sects. 2.3.1.2, 6.3.2, 9.4.2.1]

precedence This obvious relation is not as easy to come to grips with as one might assume. In particular, how does the generative system (which we assume is dumb by definition) know that a given expression α precedes another, β? We can hear it once pronounced, but this is a linearized object we’re talking about. Linearization partly comes from precedence, but somehow precedence has to be formally defined. The suggestion from Kayne (1994) is that asymmetric c-command does exactly this by looking for an asymmetric relation already existing within syntactic objects that could be mapped onto precedence (utilizing c-command coupled with the combination of sisterhood and dominance): if a lexical item α precedes a lexical item β, it must be the case that β doesn’t precede α. [sects.
Preference Principle see Preference Principle (8D)

Principles-and-Parameters Theory (P&P) The most successful, though not yet totally working (or worked out), model of UG holds that universally, children are born with the same set of universal principles of grammar and a particular setting of linguistic parameters. In the short course of acquiring their native language, children set these parameters to fit the input they receive (the primary linguistic data). This theory of grammar then gave rise to a particular formalization, Government-and-Binding Theory (GB). While superseded by minimalism in many ways, the handling of the P&P ideas was still at its best in GB. Since we believe that the minimalist direction is the one to pursue, we suggest that the P&P model be revised: given much evidence (see e.g. Clark and Roberts 1993), it suffers from some fundamental flaws in practicality (computational complexity), quite apart from the fact that, beyond two or three parameters, there is no agreement in sight on which parameters are the relevant ones (or indeed on any parameters at all!). This said, we hold on to the idea of principles and parameters, at least for the reason that the general direction it suggests must be on the right track, given what we know about biology and grammar. See also generative (grammar). [chap. 1]
PRO In GB, the empty category PRO existed alongside its companions pro (a Case-marked null expression) and trace (Case-marked if A'-bound, Caseless otherwise). It designated subjects in control structures (clauses embedded under predicates like want, as in John wants [ PRO to play cards ]), which could not be Case-marked because their governor was non-finite Infl (which does not assign Case). With the elimination of government, filters, and so on in minimalism, the nature of PRO has been subject to quite a bit of speculation and modification. The two most prominent approaches are null Case, assigned to PRO by non-finite T (Chomsky and Lasnik 1993), and replacement of the empty category PRO by a copy, i.e. a movement approach (Hornstein 1999) — neither without its problems. See also PRO Theorem (4G), PROperties (4J). [sects. 2.3.2.2, 4.3.4]

PRO Theorem This is the principle from GB-times that dealt with the licensing of PRO. See also PRO Theorem (4G), PROperties (4J). [sect. 4.3.4]

probe A probe is a head with [–interpretable] features; a goal is an element with matching [+interpretable] features. In order to have its [–interpretable] features deleted for LF purposes and specified for morphological purposes, a given probe peruses its c-command domain in search of a goal. A goal is accessible to a given probe only if there is no intervening element with the relevant set of features; that is, relativized minimality holds. The probe-goal relation was first proposed in the phase-based framework of Chomsky (2000, 2001) to augment the newly introduced operation Agree. [sect. 9.4.3 — chap. 10]
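As a rough illustration of the probe-goal search just described (a sketch under assumed simplifications, not the book's formalism; the feature labels and list encoding are invented for this example), consider the following Python fragment, in which a probe simply takes the closest matching element in its c-command domain, so that an intervener blocks Agree with anything lower:

    # A toy sketch: the probe takes the closest goal in its c-command domain whose
    # interpretable features match the probe's uninterpretable ones.

    def find_goal(uninterpretable, domain):
        # domain: elements in the probe's c-command domain, ordered from closest to
        # most deeply embedded; each element is a (name, interpretable-feature-set) pair
        for name, interpretable in domain:
            if uninterpretable & interpretable:
                return name          # closest matching element wins
        return None                  # no goal found: the derivation crashes at LF

    # Hypothetical configuration: T probes for phi-features; an experiencer DP
    # intervenes between T and a lower subject DP.
    t_probe = {"phi"}
    domain = [("DP-experiencer", {"phi"}),     # closer to T
              ("DP-lower-subject", {"phi"})]   # farther away

    print(find_goal(t_probe, domain))          # 'DP-experiencer': the intervener is
                                               # the goal, so T cannot agree with the lower DP

Taking the first match in a closest-first list is just a stand-in for relativized minimality; see the relativized minimality entry below.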
Procrastinate If we assume, in line with the minimalist approach, that movement is driven by the need to check formal (morphosyntactic) features, we need to say something about the timing: after all, there are operations that arguably take place overtly in one language, but covertly in another (if anything argued to follow from UG, and possibly formulated in a P&P approach, is on the right track). We can then assume further that features come in two flavors, weak and strong. Strong features are phonologically indigestible and so must be checked before the grammar splits (Spell-Out, where the phonologically relevant material is shipped to PF); weak features, on the other hand, are phonologically acceptable and need only be checked by LF. We finally assume that grammars are "lazy" in that one doesn't check features unless one must. This condition is called Procrastinate. Thus, since weak features need not be checked overtly, Procrastinate will require that they be checked covertly. By contrast, if strong features aren't checked before the grammar splits, the derivation will phonologically gag. So strong features must be checked by overt movement. We can now say that the differences noted among languages are simply a question of feature strength. [sects. 2.3.1.3-2.3.1.6, 2.4, 9.2]
reconstruction The displacement property of human language has one clear consequence, which we presented as a "big fact" about grammar from a generative perspective (see Some "big facts" (1A)): expressions that appear in one position can be interpreted in another. To overcome this problem of dissimilarity between logical forms (what is interpreted at LF) and the syntactic representations that underlie them (S-structure in GB or the derivation at the point of Spell-Out in minimalism), the GB trace-based account must be supplemented with rules that reconfigure structure at LF. The standard assumption is that in the covert component, one can "reconstruct" complex wh-phrases, for example, to their trace positions and then raise just the simplex wh-operators. The copy theory of movement replaces this process of reconstruction with deletion operations, with the result that LF interprets one copy and PF pronounces another. [sects. 6.4, 8.3]

relativized minimality The intuition behind Rizzi's (1990) version of minimality (see Relativized Minimality (5A)) — where the notion "α-government" covers both head- and antecedent-government — is that movements must be as short as possible, in the sense that one can't move over a given position that one could have occupied if the element filling it weren't there. In other words, the demand of a higher projection (i.e. the need to check some feature, to put it in minimalist terms) must be met by the closest expression that could in principle meet that requirement. This sort of restriction places a shortness requirement on movement operations, which makes sense in least effort terms in that it reduces (operative) computational complexity by placing a natural bound on feature checking operations. In this sense, minimality is a natural sort of condition to place on grammatical operations like movement (especially when these are seen as motivated by feature checking requirements). Starke (2001) and Boeckx (2003) offer minimalist formulations of relativized minimality effects (and a theory thereof). [sect. 9.3.2 — chap. 5]

representational economy see economy

Select Every derivation starts with the assembly of a lexical array (a numeration under one approach, a subarray under another). The array contains a choice of items from the lexicon that have to be used in the course of the derivation by being entered into it. Select then refers to the method of placing an item from the array into the derivation (for immediate Merge). Whether Select is an operation proper (independently needed and motivated), like Merge or Copy, is another question we cannot answer in this book. [sects. 2.3.2.6, 2.4, 9.4.2.2]

selection The relation between a head and the complement it takes is one of selection: a given head selects for a specific complement. This type of selection may involve categorial selection (whether it be N or V, for example) or semantic selection (such as proposition, entity, or property, or requirements on an item's animacy, for example). And just as a head selects a specific (type of) complement, so it also selects the type of its specifier. The latter relation in particular (the specifier-head relation) is often seen as the canonical structural configuration for the checking of formal features. See Adger (2003: chap. 3) for a particularly readable recent exposition. [sects. 2.3.2.2-2.3.2.5]
sensorimotor system see articulatory-perceptual (A-P) system
Shortest Move see Move

sideward movement As understood in virtually all generative approaches to displacement, movement targets a particular expression within a derivation (i.e. a position in the tree) and moves it to another — within the same derivation or structure (i.e. within the same tree). Since Merge — which, in tandem with Copy, underlies movement in minimalist approaches to the grammar — is subject to the Extension Condition (2C, 4F, 8F, 8G, 9A, 9E), movement within a given tree can only target the root node of the structure at that point (and thereby extend it). Sideward movement, originally proposed by Nunes (1995, 2004) and expanded in Hornstein (2001), refers to copying an element from one tree and merging it to another, unconnected tree. Bobaljik and Brown (1997) have argued for an implementation of this idea of "interarboreal movement" to subsume head movement under minimalist desiderata. [sects. 8.3.5.2, 9.4.2.3]

specifier The specifier in X'-Theory was defined as the element that is sister to X' (the necessary intermediate projection) and daughter of XP (the maximal projection). In bare phrase structure, the X'-projection is not required by definition, so the concept of a specifier has to be revised. Chomsky (1995: 245) does so: "The head-complement relation is the 'most local' relation of an XP to a terminal head Y, all other relations within being head-specifier" — apart from adjunction, which is addressed elsewhere. With structure building up incrementally through successive applications of the operation Merge, we can now distinguish "First Merge" from "Second Merge" (as suggested by Adger 2003: chap. 4). Given that a head has various selection requirements, the element it merges with first is the complement and the element it merges with second is the specifier. Another way of putting it is to say that a specifier merges with a projection of the head. Either approach also makes clear why specifier and head stand in such close relation to one another, and why this specifier-head relation might have implications for the derivation (such as the checking of features). The latter approach, however, blurs the traditional distinction between specifier and adjunct (as advocated by Kayne 1994), while the former needs to say something additional to allow multiple specifiers (as proposed in Chomsky 1995). [sects. 1.3, 2.3.1.1 — chap. 6]

specifier-head This is a local relation holding of a head and its specifier ("Spec-head"). The relationship is not a minimalist innovation — it featured prominently in GB already (of which government was an extension). Spec-head was argued in early minimalism to be the structural configuration necessary to license certain grammatical properties, namely by checking formal features (Chomsky 1993), which laid the grounds for checking theory (Chomsky 1995). This mechanism was later reformulated in terms of Agree (Chomsky 2000). In any case, at least on the surface we can observe a lot of agreement relations holding between a head and its specifier (or the other way around). [sects. 1.3, 2.3.1.1 — chaps. 3-4, 6]

Spell-Out With the abolishment of S-structure, the minimalist approach to grammar replacing GB had to find a way to distinguish overt from covert operations taking place in the derivation. There is too much evidence for a covert component (the mapping to LF) to ignore this issue (such as reconstruction phenomena).
Moreover, the interfaces have to be fed at some point with the information relevant for the pairing of sound and meaning. To designate a point in the derivation at which the relevant information is shipped to the PF-interface, the operation Spell-Out was introduced. Unlike S-structure,
Spell-Out is not a level of representation subject to a number of conditions, requirements, or filters, but an operation like any other, such as Copy and Move — required on the basis of (virtual) conceptual necessity. Currently, much research is devoted to the nature of Spell-Out, addressing such questions as what exactly it does (Does Spell-Out feed PF and LF or just PF?), how often it applies (If Spell-Out is an operation like any other, shouldn't the restriction to a single application be suspicious?), and so on. Uriagereka's (1999) idea of multiple Spell-Out has been considered and developed or modified in such works as Epstein et al. (1998), Chomsky (2000), or Grohmann (2003). See also The GB T-Model of the Grammar (2A), A Minimalist T-Model of the Grammar (2D), The Computational System under the Move-F Approach (9C), The Computational System under the Agree Approach (9D). [sects. 2.3.1.6, 2.4, 9.2, 9.3.1, 9.4.2.2, 10.4]

split Comp With the development of split Infl, efforts increased to articulate the left periphery of the clause more finely as well. The Comp-layer (CP) hosts elements that serve quite different (semantic and/or pragmatic) functions and that also behave slightly differently with respect to syntactic properties, such as topics, foci, wh-expressions, relativizing elements, and so on. Rizzi (1997) in particular expressed the structure of Comp in terms of function-specific functional projections, and much subsequent research has aimed to support this articulation further (or argue against it). [sect. 6.2.5]

split Infl After the exocentric root node of a sentence (S) gave way to the endocentric IP in the X'-Theory of GB, much research has targeted grammatical phenomena taking place in Infl, i.e. below Comp and above VP. Building on earlier insights (Emonds 1976), Pollock (1989) proposed to split IP into separate projections for tense (TP) and agreement (AgrP). The effort to articulate the structure of Infl/IP (now usually considered to be just TP) in more fine-grained ways continued for a long time, at least all the way to Cinque (1999), and the issue has not been settled yet for the majority of scholars. While minimalist work often simply considers CP >> TP >> vP >> VP as the clausal backbone, it must be noted that many of the phenomena and observations adduced in favor of further splits of TP (and elsewhere, such as split Comp or verb phrase structure) still have to be taken into consideration. [sects. 4.3.1, 6.2.5]

split VP see verb phrase structure

S-structure The CHL in GB contained four levels of representation: the interface to the A-P system (PF), the interface to the C-I system (LF), the level which was fed directly by the lexicon (D-structure), and the level at which surface phenomena could be observed and linearized for pronunciation. S-structure was the name for the latter, historically suggesting something "surfacey" — although there is nothing "surfacey" about S-structure other than a derived order of the lexical expressions of the underlying representation as we pronounce them (Chomsky 1965). With the advent of the Minimalist Program (Chomsky 1993), the levels of representation were reshuffled and out came the winners, LF and PF, since they are the only levels that are (virtually) conceptually necessary (for the pairing of sound and meaning). In addition, conditions applying at these levels or components of the grammar are bare output conditions.
D- and S-structure don't have either property: they are theory-internal constructs (hence not conceptually necessary in any way) and they don't feed the interfaces or have any effect on the output (hence they cannot elevate conditions applying at them to the status of bare output conditions). [sects. 1.3, 2.2.2.2, 2.3.1, 9.5]
strong feature see feature

subarray The lexical subarray is a slightly modified version of the numeration, which was introduced in "early minimalism" (Chomsky 1993) and kept throughout the "classical period" (Chomsky 1995). Within the recent phase-based approach (Chomsky 2000, 2001, 2004), the single lexical array, i.e. the numeration, was replaced by subarrays, which cyclically feed every phase. In other words, each phase has its own "numeration" — the lexical subarray. [sect. 10.4.2]

substantive economy see economy

successive-cyclic movement In minimalism, displacement of syntactic objects or linguistic expressions applies to elements that have unchecked features. These features just act as the trigger for movement: Move (or Copy followed by re-Merge) a category to check a specific feature. (The same can apply covertly through Move F or Agree.) However, as was already well attested in GB, sometimes a movement operation takes place without being followed by feature checking — namely, when it lands in an intermediate site whose only function is to act as an escape hatch for further movement. In long-distance wh-movement, for example (the classic case), it is not that the moving wh-element checks a wh-feature at each [Spec,CP]. Rather, each [Spec,CP] must be moved through, with an intermediate touch-down, in order to relate the final landing site to the base position, which can be arbitrarily far away, in an unbounded fashion (as in What do you think [CP t' (that) Mary bought t ]?, where the wh-phrase stops off in the embedded [Spec,CP]). This type of successive-cyclic movement (which applies to successive cycles) rules out instances of Superiority violations or other illicit structures in which the intermediate position is filled by another element, preventing the moving element in question (such as a wh-phrase) from moving through. (Recent ideas about multiple specifiers throw up some issues to address, of course; see Richards 1997.) Note that under an Agree approach to CHL, this question doesn't arise: features are not a trigger for movement, but for Agree, with movement being triggered by an EPP-feature or property on a higher head. [sects. 2.3.2.2-2.3.2.4, 9.3.2, 10.4.4]

θ-role see theta-marking

theta-marking Predicates take arguments and together they form the argument structure of a proposition. But apart from taking or selecting arguments, predicates also assign a special role to them, a role they play in the expression of the predicate's argument structure. These roles are thematic in nature, denoting the 'theme' of the role they play. Thematic or theta-marking is one property predicates hold over their arguments. The resulting thematic or theta-roles, which we refer to with the more common label θ-roles, divide into AGENT (more often than not the subject), THEME or PATIENT (or direct object), GOAL or BENEFICIARY (indirect object), among others perhaps, with EXPERIENCER being one θ-role carried by subjects of certain predicates (such as sleep or fear). We also briefly presented more recent ideas that θ-roles be understood as θ-features in order to finally eliminate remnants of a D-structure component of the grammar (see e.g. Bošković 1994, Hornstein 2001): theta-marking as understood in GB (part of the module known as Theta Theory) simply assigns these θ-roles rather than deriving them, which is what essentially underlies a checking theory. But the issue is anything but resolved at this point. [chap. 3]

Theta Theory This is the module from GB-times that dealt with the licensing of θ-roles and theta-marking. [sects. 1.3, 2.2.6, 2.3.2 — chap. 3]
trace In GB, the element that was left behind as the result of movement was identified as a trace with specific properties (namely, [+anaphoric], [–pronominal] in the case of NP-traces). The GB-module that dealt with the identification and licensing of traces was Trace Theory. The Minimalist Program assumes a simpler theory of movement, namely the copy theory of movement, according to which an element that moves leaves behind a copy. Throughout the text we sometimes used the terms copy and trace interchangeably, especially when the discussion involved deletion vs. pronunciation of copies/traces — each copy or trace is either pronounced or it is not, in which case it must be deleted. [sects. 1.3, 7.5 — chap. 2]

Trace Theory This is the module from GB-times that dealt with the licensing of traces. [sects. 6.4, 8.3.1]

Uniformity Condition see Uniformity Condition (2F)

uninterpretable see interpretability

Universal Grammar (UG) UG is the null hypothesis: children come biologically equipped with a set of principles for constructing grammars. As we put it in the text metaphorically, these general principles can be thought of as a recipe for "baking" the grammar of a particular language (GL) by combining, sifting, sorting and stirring the primary linguistic data (PLD) in specifiable ways. Or, to make the same point less gastronomically, UG can be thought of as a function that takes PLD as input and delivers a particular grammar (of English, Brazilian Portuguese, German, etc.), a GL, as output. More concretely, the principles of UG can be viewed as general conditions on grammars with open parameters whose values are set on the basis of linguistic experience. These open parameters can be thought of as "on/off" switches, with each collection of settings constituting a particular GL. On this view, acquiring a natural language amounts to assigning values to these open parameters, i.e. "setting" these parameters, something that children do on the basis of the PLD that they have access to in their linguistic environments. See also Principles-and-Parameters Theory (P&P), Some "big facts" (1A). [chap. 1]

valuation A recent minimalist alternative to checking theory holds that (uninterpretable) features are not checked in a specifier-head configuration, but by the licensing operation Agree. What Agree does is establish a relation between a higher probe and a lower goal (through c-command) in which the uninterpretable features are identified by interpretable ones and given a value. This view holds that features are not fully specified, but come in higher-class categories (such as "number"), which are given their specific values in the valuation process (e.g., "singular"). [sect. 9.4.3 — chap. 10]

verb phrase structure In the transition from GB to minimalism, a closer inspection of structural properties and relations led to a finer articulation of phrase structure. Just as IP was split into separate functional projections (split Infl), an approach later extended to CP (split Comp), VP underwent a growth process. The so-called VP-shells are needed to make room for the VP-internal subject, which we generalized in this book to the Predicate-Internal Subject Hypothesis (3A). The two main views either consider a recursive verb phrase (VP, Larson 1988) with an empty shell or adopt a light verb projection (vP, Hale and Keyser 1993). See also argument structure. [chaps. 3-5]
(virtual) conceptual necessity The big step from GB to minimalism was to scrutinize all theoretical assumptions and revisit them in the light of (virtual) conceptual necessity: Is the modular structure of GB conceptually necessary, i.e. could we not possibly conceive of grammar as being non-modular? Are four levels of representation conceptually necessary, or is even one? Are all operations and conditions assumed in GB conceptually necessary to derive a linguistic expression? And so on. Whatever is not conceptually necessary should not be part of a generative grammar, unless there are other reasons why it would be needed. The only type of "other reason" acceptable is if it has an effect on the output, i.e. if it is a bare output condition. [sect. 1.5]
VP-shell see verb phrase structure

weak feature see feature

X'-Theory Once the inadequacy of PS-rules to adequately describe and explain syntactic structures was generally agreed upon, X'-Theory was put into place to express phrase structure (Jackendoff 1977); it was even elevated to the status of a separate module of the language faculty in GB (Chomsky 1981, 1986a). The X'-schema invariably takes full phrasal projections (XP) to consist of a head (X0) which, in combination with its complement, projects to an intermediate level (X'). Specifiers are then added to X' to form the fully projected XP. Adjuncts are adjoined elements, either at the X'- or at the XP-level (depending on one's view of clause structure assembly). In minimalism, classic X'-Theory was replaced by bare phrase structure, not only as a module, but also as a phrase-structure building (instead of simply representing) apparatus. See also Minimal Projection (6A), Maximal Projection (6B), Intermediate Projection (6C). [sects. 1.3-1.4, 2.3.2 — chaps. 6-7]
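To illustrate how bare phrase structure dispenses with stipulated bar levels, here is a small sketch (not from the book; the Python encoding and the labeling convention are assumptions made for this example) in which Merge builds syntactic objects and minimal/maximal status is read off relationally, as in definitions (6A)-(6C) below:

    # A toy sketch, not the book's formalism: relational projection status.

    class SO:
        """A syntactic object: a label plus the (possibly empty) parts it was built from."""
        def __init__(self, label, parts=()):
            self.label, self.parts = label, tuple(parts)

    def merge(head, other):
        # Merge two syntactic objects; the head projects, i.e. supplies the label.
        return SO(head.label, (head, other))

    def walk(so):
        yield so
        for part in so.parts:
            yield from walk(part)

    def is_minimal(so):
        # cf. (6A): a lexical item, i.e. an object with no internal parts
        return not so.parts

    def is_maximal(so, root):
        # cf. (6B): a syntactic object that does not project any further
        parents = [p for p in walk(root) if so in p.parts]
        return all(p.label != so.label for p in parents)

    the, man, see = SO("the"), SO("man"), SO("see")
    dp = merge(the, man)      # labelled "the"
    vp = merge(see, dp)       # labelled "see"

    print(is_minimal(see), is_maximal(see, vp))   # True False: 'see' projects within vp
    print(is_maximal(dp, vp))                     # True: nothing above dp is a projection of 'the'
    print(is_maximal(vp, vp))                     # True: the root does not project further

The same object can thus count as maximal in one position and as a projecting head elsewhere, which is exactly what the relational definitions are meant to allow.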
B. Definitions (complete)

(1A) Some "big facts" [chap. 1]
F1: Sentences are basic linguistic units.
F2: Sentences are pairings of form (sound/signs) and meaning.
F3: Sentences are composed of smaller expressions (words and morphemes).
F4: These smaller units are composed into units with hierarchical structure, i.e. phrases, larger than words and smaller than sentences.
F5: Sentences show displacement properties in the sense that expressions that appear in one position can be interpreted in another.
F6: Language is recursive, that is, there's no upper bound on the length of sentences in any given natural language.

(2A) The GB T-Model of the Grammar [= chap. 2, (6)]

        DS
         |  Move
        SS
       /  \
     PF    |  Move
           LF
(2B) Theta-Role Assignment Principle (TRAP) [= chap. 2, (68)/(106)]
θ-roles can only be assigned under a Merge operation.

(2C) Extension Condition (preliminary version) [= chap. 2, (90)]
Overt applications of Merge and Move can only target root syntactic objects.

(2D) A Minimalist T-Model of the Grammar [= chap. 2, (115)]

        N = {Ai, Bj, Ck ... }
         |  Select & Merge & Move
        Spell-Out ---> PF
         |  Select & Merge & Move
        LF

(2E) Inclusiveness Condition [= chap. 2, (116)]
The LF object λ must be built only from the features of the lexical items of N.

(2F) Uniformity Condition [= chap. 2, (117)]
The operations available in the covert component must be the same ones available in overt syntax.

(3A) Predicate-Internal Subject Hypothesis (PISH) [chap. 3]
The thematic subject is base-generated inside the predicate.

(3B) VP-Structure I: VP-Shells vs. Light Verbs [chap. 3]
a. VP-shell: additional VP, projection of an empty verbal head V
b. Light verb: semantically bleached verb that participates in the formation of some complex predicates

(3C) VP-Structure II: Verb Types [chap. 3]
a. Transitive structures: [vP DPsubject v [VP V DPobject ]]
b. Ditransitive structures: [vP DPsubject v [VP DPdirect-object V PP/DPindirect-object ]]
c. Unergative structures: [vP DPsubject v [VP V DPobject ]]
d. Unaccusative structures: [VP V DP ]

(4A) Government [= chap. 4, (3)]
α governs β iff (i) α c-commands β and (ii) β c-commands α.

(4B) Government [= chap. 4, (6)/(42)]
α governs β iff (i) α m-commands β, and (ii) β m-commands α.
(4C) M-Command [= chap. 4, (7)]
α m-commands β iff (i) α does not dominate β; (ii) β does not dominate α; (iii) every maximal projection dominating α also dominates β; and (iv) α does not equal β.

(4D) Government [= chap. 4, (10)]
α governs β iff (i) α m-commands β; and (ii) there is no barrier γ that dominates β but does not dominate α.

(4E) Barrier [= chap. 4, (11)]
α is a barrier iff (i) α is a maximal projection; and (ii) α is not a complement.

(4F) Extension Condition (preliminary version) [= chap. 4, (29)]
Overt applications of the operations Merge and Move can only target root syntactic objects.

(4G) PRO Theorem [= chap. 4, (38)]
PRO must not be governed.

(4H) Binding Theory Principles A and B [= chap. 4, (39)]
a. Principle A: An anaphor must be A-bound in its governing category.
b. Principle B: A pronoun must not be A-bound in its governing category.

(4I) Governing Category [= chap. 4, (40)]
α is a governing category for β iff (i) α is the minimal XP that dominates β and (ii) α is a governor for β.

(4J) PROperties [= chap. 4, (41)]
PRO: [+an, +pro]

(5A) Relativized Minimality [= chap. 5, (4)/(69)]
X α-governs Y only if there is no Z such that: (i) Z is a typical potential α-governor for Y; and (ii) Z c-commands Y and does not c-command X.
KLEANTHES K. GROHMANN (5B) Minimal Domain of α (MinD(α)) [= chap. 5, (20)] The set of categories immediately contained or immediately dominated by projections of the head α, excluding projections of α. (5C) Extended Minimal Domain [= chap. 5, (21/(42)] The MinD of a chain formed by adjoining the head Y0 to the head X0 is the union of MinD(Y0) and MinD(X0), excluding projections of Y0. (5D) Equidistance (first version) [= chap. 5, (22)/(51)] Say that α is the target of movement for γ. Then for any β that is in the same MinD as α, α and β are equidistant from γ. (5E) Equidistance (interim version) [= (i) from chap. 5, exercise 5.7] If α and β are in the same MinD, then α and β are equidistant from a target γ. (5F) Equidistance (final version) [= chap. 5, (57)/(73)] If two positions α and β are in the same MinD, they are equidistant from any other position. (6A) Minimal Projection: X0 [= chap. 6, (59)] A minimal projection is a lexical item selected from the numeration. (6B) Maximal Projection: XP [= chap. 6, (60)] A maximal projection is a syntactic object that doesn’t project. (6C) Intermediate Projection: X’ [= chap. 6, (61)] An intermediate projection is a syntactic object that is neither an X0 nor an XP. (7A) C-Command [= chap. 7, (13)] α c-commands β iff (i) α is a sister of β; or (ii) α is a sister of γ and γ dominates β. (7B) Linear Correspondence Axiom (LCA; preliminary version) [= chap. 7, (14)] A lexical item α precedes a lexical item β iff α asymmetrically c-commands β. (7C) Linear Correspondence Axiom (LCA; final version) [= chap. 7, (17)] A lexical item α precedes a lexical item β iff (i) α asymmetrically c-commands β; or (ii) an XP dominating α asymmetrically c-commands β. (8A) Binding Theory in GB [= chap. 8, (1)] (i) Principle A: An anaphor (a reflexive or reciprocal) must be bound in its domain. (ii) Principle B: A pronoun must be free (not bound) in its domain. (iii) Principle C: An R-expression (e.g. a name, a variable) must be free (everywhere).
(8B) Domain [= chap. 8, (2)]
α is the domain for β iff α is the smallest IP (TP) containing β and the governor of β.

(8C) Binding [= chap. 8, (3)]
α binds β iff α c-commands and is coindexed with β.

(8D) Preference Principle [= chap. 8, (47)]
Try to minimize the restriction in the operator position.

(8E) Binding Theory in Minimalism [= chap. 8, (61)]
(i) Principle A: If α is an anaphor, interpret it as coreferential with a c-commanding phrase in its domain.
(ii) Principle B: If α is a pronoun, interpret it as disjoint from every c-commanding phrase in its domain.
(iii) Principle C: If α is an R-expression, interpret it as disjoint from every c-commanding phrase.

(8F) Extension Condition (preliminary version) [= chap. 8, (73)]
Overt applications of Merge and Move can only target root syntactic objects.

(8G) Extension Condition (revised preliminary version) [= chap. 8, (74)]
Overt applications of Merge can only target root syntactic objects.

(9A) Extension Condition (revised preliminary version) [= chap. 9, (7)/(25)/(54)]
Overt applications of Merge can only target root syntactic objects.

(9B) Last Resort [= chap. 9, (11)]
A movement operation is licensed only if it allows the elimination of [–interpretable] formal features.

(9C) The Computational System under the Move-F Approach [= chap. 9, (44)]

        N = {Ai, Bj, Ck …}
         |  Select & Merge & Copy
         |---Spell-Out---> PF
         |
        LF

(9D) The Computational System under the Agree Approach [= chap. 9, (86)]

        N = {Ai, Bj, Ck …}
         |  Select & Merge & Copy & Agree
         |---Spell-Out---> PF
         |
        LF
(9E) Extension Condition (final version) [= chap. 9, (55)]
Applications of Merge can only target root syntactic objects.

(10A) Phase Impenetrability Condition (PIC) [= chap. 10, (53)/(101)]
In a phase α with head H, the domain of H is not accessible to operations outside α, only H and its edge are accessible to such operations.
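As a schematic illustration of what (10A) rules in and out (a toy sketch, not from the book; the dictionary encoding of a phase is an assumption made for this example), the following Python fragment returns just the material that remains visible to operations outside a completed phase:

    # A toy sketch: under the PIC (10A), once a phase is completed only its head
    # and its edge remain visible from outside; the complement domain is frozen.

    def accessible_from_outside(phase):
        # phase: a dict with the phase 'head', its 'edge' (specifiers/adjuncts of
        # the phase head), and its 'domain' (the complement of the phase head)
        return [phase["head"]] + list(phase["edge"])

    vP = {"head": "v",
          "edge": ["DP-wh-moved-to-Spec-vP"],   # an escape-hatch position
          "domain": ["V", "DP-object-in-situ"]}

    print(accessible_from_outside(vP))
    # ['v', 'DP-wh-moved-to-Spec-vP']: a higher probe (say C or T) can no longer
    # reach 'DP-object-in-situ'; material must move to the phase edge to stay accessible.

This is why movement out of a phase must proceed through the phase edge: only the head and its edge survive as possible targets for a higher probe, as discussed in the phase and successive-cyclic movement entries above.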
C. Definitions (minimalist)

(1) Binding Theory [see chap. 8 (61)]
(i) Principle A: If α is an anaphor, interpret it as coreferential with a c-commanding phrase in its domain.
(ii) Principle B: If α is a pronoun, interpret it as disjoint from every c-commanding phrase in its domain.
(iii) Principle C: If α is an R-expression, interpret it as disjoint from every c-commanding phrase.
(2) C-Command [see chap. 7 (13)]
α c-commands β iff (i) α is a sister of β; or (ii) α is a sister of γ and γ dominates β.
(3) Equidistance [see chap. 5 (57)/(73); tentative formulations: chap. 5 (22)/(51), ex. 5.7]
If two positions α and β are in the same MinD, they are equidistant from any other position.
(4) Extension Condition [see chap. 9 (55); for tentative formulations, see chap. 2 (90), chap. 4 (29), chap. 8 (73), (74), and chap. 9 (7)/(25)/(54)]
Applications of Merge can only target root syntactic objects.
(5) Extended Minimal Domain [see chap. 5 (21)/(42)]
The MinD of a chain formed by adjoining the head Y0 to the head X0 is the union of MinD(Y0) and MinD(X0), excluding projections of Y0.
(6) Inclusiveness Condition [see chap. 2 (116)]
The LF object λ must be built only from the features of the lexical items of N.
(7) Intermediate Projection: X' [see chap. 6 (61)]
An intermediate projection is a syntactic object that is neither an X0 nor an XP.
(8) Last Resort [see chap. 9 (11)]
A movement operation is licensed only if it allows the elimination of [–interpretable] formal features.
(9) Linear Correspondence Axiom (LCA) [see chap. 7 (17); for a tentative formulation, see chap. 7 (14)]
A lexical item α precedes a lexical item β iff (i) α asymmetrically c-commands β; or (ii) an XP dominating α asymmetrically c-commands β.
(10) Maximal Projection: XP [see chap. 6 (60)]
A maximal projection is a syntactic object that doesn't project.

(11) Minimal Domain of α (MinD(α)) [see chap. 5 (20)]
The set of categories immediately contained or immediately dominated by projections of the head α, excluding projections of α.

(12) Minimal Projection: X0 [see chap. 6 (59)]
A minimal projection is a lexical item selected from the numeration.

(13) Phase Impenetrability Condition (PIC) [see chap. 10 (53)/(101)]
In a phase α with head H, the domain of H is not accessible to operations outside α, only H and its edge are accessible to such operations.

(14) Predicate-Internal Subject Hypothesis (PISH) [see chap. 3, section 3.2.2]
The thematic subject is base-generated inside the predicate.

(15) Preference Principle [see chap. 8 (47)]
Try to minimize the restriction in the operator position.

(16) Theta-Role Assignment Principle (TRAP) [see chap. 2 (68)/(106)]
θ-roles can only be assigned under a Merge operation.

(17) Uniformity Condition [see chap. 2 (117)]
The operations available in the covert component must be the same ones available in overt syntax.