Types and Tokens: On Abstract Objects
Linda Wetzel
The MIT Press
Cambridge, Massachusetts
London, England
© 2009 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please e-mail [email protected] or write to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge, MA 02142.

This book was set in Stone Sans and Stone Serif by SNP Best-set Typesetter Ltd., Hong Kong and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Wetzel, Linda, 1951–.
Types and tokens : on abstract objects / Linda Wetzel.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-01301-7 (hardcover : alk. paper)
1. Type and token (Linguistics). 2. Language and languages—Philosophy. I. Title.
P128.T94W47 2009
111′.2—dc22
2008043056

10 9 8 7 6 5 4 3 2 1
For Sylvain Bromberger and Richard Cartwright
Contents

Acknowledgments
Introduction

1 The Data
  Type–Token Use in Philosophy
  The Data
  Conclusion

2 Types Exist
  1 Types Exist
  2 Objections
  3 The Alleged Trouble with Abstract Objects
  Conclusion

3 Paraphrasing, Part One: Words
  1 Goodman’s and Sellars’s Suggestion
  2 Words
  Conclusion

4 Paraphrasing, Part Two
  1 Other Paraphrases
  2 Characterizing Statements
  3 Old Glory
  4 Quantification
  5 Class Nominalism
  Conclusion

5 The Trouble with Nominalism
  1 What Goodman and Quine Say
  2 The Trouble with Nominalism
  Conclusion

6 Remarks on a Theory of Word Types
  1 Kinds
  2 Questions (1) through (5)
  Conclusion

7 A Serious Problem for Realism?
  1 What Are Occurrences of Expressions?
  2 What Are Occurrences Generally?
  3 David Lewis’s “Against Structural Universals”
  Concluding Remarks

Notes
References
Index
Acknowledgments
Although the bulk of this book was written during the period 1996–1999, some material appeared in print earlier than 1996, and revisions were made up until 2007. Thus there has been ample time for teachers, colleagues, students, friends, and anonymous reviewers to make suggestions and criticisms that have had a constructive impact on this book (and ample time for me to forget some of their contributions). As it is impossible to acknowledge all these contributions, below are the names of those who get the lion’s share of the credit. My long-time mentor, Sylvain Bromberger, contributed to my appreciation of the importance of types, especially in linguistics. Although our views diverge in significant ways, his early platonic views about types informed my own view. He raised the question of the relationship of types to their tokens, and his later nominalist views about types served as a foil to my own platonist views. He has commented generously on drafts for encyclopedia entries I’ve published on types and tokens, and most importantly, he has offered his invaluable comments on this book at an American Philosophical Association’s “Author Meets Critics” session. Over the years I’ve been much influenced by my thesis supervisor, Dick Cartwright, both for his general approach to metaphysics, with its careful scrutiny of ordinary and scientific language, as well as for his platonism. He kindly lent me his unpublished lecture notes on abstract objects, which opened my eyes to the diversity to be found among abstract objects and enlightened me on the difference between tokens and occurrences. Along with Dick Cartwright, the late George Boolos effectively turned me into a platonist with his comments on one of my early efforts defending, of all things, nominalism. His guidance, concern, and support over the years were invaluable. The late Jerry Katz lent a sympathetic, patient, platonist ear over the course of many discussions and arguments. His unabashed platonism served as a foil to my more abashed platonism.
I owe a debt of gratitude to my mentor and sometime colleague, Michael Jubien, for support and evaluation of my work over the years, and for chairing the “Author Meets Critics” session at the APA. I am also indebted to Takashi Yagisawa, who organized and participated in the session, and to Bruce Aune and the late Greg Fitch for their copious and helpful comments at the session. Thanks also to my colleagues Wayne Davis and Steve Kuhn, who obligingly critiqued earlier drafts of this work and provided a wealth of useful advice and support over the years. Lon Berk read the book in its entirety, providing striking insights and unwavering encouragement. At a meeting of the Society for Exact Philosophy, Mike Dunn provided helpful comments on my work on occurrences of expressions and encouraged me to extend the work to other sorts of types, which resulted in chapter 7. The late David Lewis also provided invaluable feedback on this chapter. I am indebted to Ned Markosian for his comments on “On Words,” when it was first published in Journal of Philosophical Research; to Palle Yourgrau and Ned Block for helpful comments; and to audience members at Georgetown University, MIT, the City University of New York, and the APA “Author Meets Critics” session. I am most grateful to Georgetown University for the sabbatical and other paid leaves that allowed me to finish and revise the book. Thanks, Dean Jane McAuliffe and Wayne Davis, for your support! My thanks also go to numerous Georgetown University students over the years, especially Matt Burstein, Heath White, and Matt Rellihan for their constructive comments and criticisms, and to Joseph Rahill for editorial assistance. Much credit goes to the four anonymous reviewers for the MIT Press, who must have read the manuscript very carefully indeed to give me such thoughtful and extensive comments. Credit goes to the Philosophy Documentation Center for permission to reprint some material in chapters 3 and 6 that originally appeared in Journal of Philosophical Research, and to Kluwer Academic Publishers for permission to reprint material in chapter 5 that originally appeared in Philosophical Studies and material in chapter 7 that originally appeared in Journal of Philosophical Logic. And last but not least I am grateful to my husband, Lon Berk, our children Cassidy and Norah Berk, and my father Bill Wetzel for their support and understanding during the most difficult periods of this undertaking.
Introduction
Peirce (1931–58, vol. 4, p. 423) illustrated the type–token distinction by means of the definite article: there is only one word type ‘the’, but there are likely to be about twenty tokens of it on this page.1 Not all tokens are inscriptions; some are sounds, whispered or shouted, and some are smoke signals. And some, as David Kaplan (1990) pointed out, are empty space (e.g., in a piece of cardboard after letters have been cut out of it). The type ‘the’ is neither written ink nor spoken sound. In fact, it is no physical object at all; it is an abstract object. Or consider the grizzly (or brown) bear, Ursus arctos horribilis. At one time its U.S. range was most of the area west of the Missouri River, and it numbered 10,000 in California alone. Today its U.S. range is Montana, Wyoming, and Idaho, and it numbers fewer than 1,000. Of course no particular bear numbers 1,000, and no particular bear ever had a range comprising most of the area west of the Missouri. It is a type of bear, a species of bear, that has both properties. For one more example, consider Mozart’s Coronation Concerto (K.537), the penultimate of his twenty-seven piano concerti. There are scores of the work, performances of it, recordings of performances of it, compact discs that contain a recording of a performance of it, and playings of the compact discs; but none of these is identical to the concerto (type) itself. This book is, first, an attempt to make the case that types exist, and second, to explore and answer some of the questions that naturally arise if they do. It is therefore an essay in ontology, urging that there are abstract objects. Traditionally, an object is said to be abstract if it lacks a spatial and a temporal location.2 So numbers, sets and propositions are said to be abstract objects. What makes types of special interest is that, unlike abstract objects of other sorts—sets and numbers, for example—types have tokens; they are in some sense repeatable.3 Are they universals, then? Many authors just assume that they are (see, for example, the quotes from Goodman in
chapter 5), although some do not (see the quote from Richard Wollheim below). There is no need to take a stand on whether types are universals here, since nothing hinges on it in what follows. But I will anyway. Having instances is to my mind the hallmark of a universal, and since types are the sort of thing that have instances, they are universals. That is, the tokening relationship is a sort of instantiation relationship. Defending this claim, however, would result in a different, much longer essay, and since nothing hinges on it in what follows, I will bracket the question here. Readers who do not agree that types are universals should just view this essay as an essay on abstract objects, for that is what it is. Whether or not types are universals, there certainly are important differences between types and such classic examples of universals as the property of being white or the relation of being between. Wollheim (1968) helpfully mentions three differences. First, he says, the relationship between a type and its tokens is “more intimate” than that between (a classic example of) a property and its instances. By this he means that “not merely is the type present in all its tokens like the [property] in all its instances, but for much of the time we think and talk of the type as though it were itself a kind of token, though a peculiarly important or pre-eminent one” (p. 76). The way I would put this last point is that types are objects. (The point will be defended in chapter 2.) Second, Wollheim notes that although types and the classic examples of properties often satisfy the same predicates, there are many more predicates shared between a type and its tokens than between a classic example of a property and its instances (p. 77). Third, he argues that predicates true of tokens in virtue of being tokens of the type are therefore true of the type (Old Glory is rectangular), but this is never the case with classic properties (being white is not white) (p. 77). The relation between types and their tokens will be taken up in more detail in chapter 6. For now it suffices to note the apparent differences between types and other abstract objects, on the one hand, and between types and other universals, on the other. Again, this is not to say that types are not universals; I think they are. Nor is it to insist that there is no concept of property under which types could turn out to be properties in the final analysis. But I am not here identifying types as properties, in view of the important differences mentioned above and below between types and the classic examples of properties such as being white. One of the chief differences between types and such classic properties as being white has to do with the sorts of arguments advanced in favor of their existence. In the debate over classic properties and relations, it is tradi-
tional to concentrate on predicates. However, ever since Quine 1961a was published in 1948, there seems to have emerged a philosophical consensus that Quine neutralized the arguments for universals and abstract objects based on the meaningfulness of predicates—on the grounds that one need not infer from “Some dogs are white” that whiteness or being white exists. That is, many philosophers today share Quine’s view that predicates do not need a “reference”—that “The letter ‘A’ was Phoenician” can be true even if there is no property of being Phoenician. (David Armstrong [1978a, p. 16] calls Quine’s form of nominalism “ostrich nominalism,” I suppose because it puts its head in the sand on the question of what makes the sentence true.) Whatever the merits of the Quinean view, I shall not argue against it in this essay. There is a better argument (for abstract objects, anyway). Few think that “The letter ‘A’ was Phoenician” can be true if there is no unique letter ‘A’, or that “there are exactly twenty-six letters of the English alphabet” can be true if there are no letters, or more than twenty-six of them. Unlike such properties as whiteness and being Phoenician, types are quintessentially objects—objects in the Fregean and Quinean senses. For Gottlob Frege, (roughly) an object is anything that can be referred to with a singular term (e.g., ‘A’, ‘the letter “A” ’, ‘the first letter of the English alphabet’). For W.V. Quine, (roughly) an object is anything that can be the value of a bound variable of quantification (e.g., “There are exactly twenty-six letters of the English alphabet”). We use singular terms to refer to types and we quantify over them, not only in our everyday language but also in our best scientific theories of reality. Just how very often we do so will be made clear in chapter 1. The linguistic data presented there suggest that, far from being the exception, apparent reference to and quantification over types is the norm or nearly so in our speaking and writing habits. Types have to be reckoned with, and cannot simply be swept under the rug. In chapter 2, therefore, it is urged that if we take either Quine’s or Frege’s criterion of ontological commitment for objects seriously, we must countenance types.

Another hallmark of objecthood that types have associated with them is criteria of identity—rules that specify when x and y are numerically identical and when they are different. It is commonly held that physical objects of the same sort are different if and only if they do not occupy the same spatiotemporal location. But abstract objects have criteria of identity too. Sets are different if and only if they have different members. Symphonies are different if and only if they are composed of quite different notes. Chemical elements are different if and only if they have different
atomic numbers. Sometimes the criteria of identity are not so easily formulated and are a matter of theoretical debate, as with species and words. But the problem with such cases, as we will see in chapter 6, is not a lack of criteria of identity, but the existence of several (not unlike the standard problem afflicting the concept of a person). Nominalists, of course, want to sweep abstract objects under the rug, or, rather, “analyze them away.” The epistemological motivation for this is explored in chapter 2. Paul Benacerraf (1983) has posed a challenge to any realist philosopher of abstract objects to explain how spatiotemporal creatures like ourselves, caught up in the causal nexus, could have knowledge of them. In recent years, the motivation for nominalism has largely come down to a felt need for some sort of causal requirement on knowledge. Although the nominalist seems at first blush to have an advantage here, I argue that recent philosophical efforts have shown that no reasonable causal requirement on knowledge is likely to rule out knowledge involving abstract objects. And in chapter 5 I show that whatever epistemological advantage the nominalist might be thought to have in virtue of a less abstract ontology is offset by the epistemological disadvantages that accompany nominalism. For the nominalist, type talk is just a harmless façon de parler for talk about tokens. Essential to the nominalist program, then, is “analyzing away” all apparent references to, or quantification over, types. The usual nominalist attitude seems to be that there are a few bothersome sentences that need paraphrasing—for example, “the color red resembles the color orange more than the color green,” or “ ‘Paris’ consists of five letters”— but that they are easily paraphrased in terms of sentences referring to and quantifying over tokens (i.e., spatiotemporal particulars). In chapter 3 I consider the popular suggestion that an adequate paraphrase for ‘The T is P’ is ‘Every (token) t is P’. I focus on the case of words to see if there is anything all tokens of a word have in common and show that there isn’t. Chapter 4 further explores the prospects of providing adequate paraphrasing for ‘T is P’, rejecting such promising possibilities as ‘Every normal t is P’, ‘Most ts are P’, ‘Average ts are P’, and ‘Either every (token) t is P, every normal t is P, most ts are P, or average ts are P.’ Borrowing from research in linguistics, I suggest that the best paraphrase would be ‘ts are P’ where this is a generic, or “characterizing” sentence, but show that even this does not work. I argue that in view of the fact shown in chapter 1 that we are up to our necks in apparent references to, and quantifications over, types, only a systematic “reduction” could assure us of ade-
quate paraphrasing. The thrust of chapters 3 and 4 is that prospects for such a systematic reduction are slim. Chapter 5 explores the consequences of taking Quine and Goodman’s form of nominalism as applied to linguistics seriously to show how counterintuitive and epistemologically problematic it is.

Assuming, then, that realism about types can be shown to be more attractive than nominalism as a philosophy, and that therefore we ought to take types seriously, certain questions naturally arise. What are types? What makes a token a token of one type rather than another? (This is not to assume that a token can’t be a token of more than one type; grizzly bears are still bears.) How do we know it is a token of that type? Do some types fail to have tokens? What, if anything, do all and only tokens of a particular type have in common other than being tokens of that type? This last question is answered in chapter 3 (“Nothing beyond being a token of the type”); sketches of answers to the other questions are in chapter 6.

The final chapter concerns a puzzle that arises if we do take types seriously. Consider the line ‘Macavity, Macavity, there’s no one like Macavity’. The word ‘Macavity’ occurs three times in the line. The line itself occurs three times in T. S. Eliot’s (1952, p. 163) poem “Macavity: The Mystery Cat,” so the line, we may assume, is a type. It consists of seven words. Seven word types, or seven word tokens? Not seven word tokens, since tokens are concrete and the line is abstract. So it must consist of seven word types. But this too is impossible because there are only five word types of which it might consist! I offer a solution to this problem for words and expressions. Then I extend the puzzle to other so-called structural types (e.g., flags, molecules, and sonatas)—types that structurally involve other types (stars and stripes, atoms, and notes, respectively)—and I offer a solution to it also. David Lewis (1986a) surveys several plausible accounts of what a structural universal is, and argues that none of them work. I argue that his objections do not work against the account of structural universals that depends upon my account of occurrences.

The style of doing metaphysics in this book is not revisionist—I am not proposing that we embark on a bold new metaphysics. Although I am a great fan of Quine (and Goodman), there is a noticeable tension in Quine. He preaches ontological relativity, but also speaks of “acquiescing in our mother tongue” and “not rocking the boat.” Of course it is fun to “rock the boat” ontologically, as do the delightful and bizarre theories, for
example, which hold that everything is a number, or that only the spacetime field and its wrinkles exist, or that there is just one very busy subatomic particle zipping back and forth in time, comprising all the matter there is. But although desert landscapes are lovely, so are other landscapes. Even the overpopulated urban one in which I find myself has a good deal of charm. The task I set myself is to figure out what inhabits the local landscape—that is, what we are committed to when we acquiesce in our mother tongue. When we do so we are committed to types.4
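As a purely illustrative aside (not part of the argument above), the word-type/word-token counting at issue in the Macavity puzzle can be made concrete in a few lines of code. The minimal sketch below, assuming Python and a simple whitespace-and-punctuation notion of a written word, tallies the word tokens and the distinct word types in the Eliot line; these simplifications are conveniences of the illustration, not claims from the text.

    # Illustrative sketch only: word tokens vs. word types in the Eliot line.
    # Splitting on whitespace and stripping commas is a simplification.
    line = "Macavity, Macavity, there's no one like Macavity"

    tokens = [w.strip(",") for w in line.split()]  # each occurrence counted separately
    types_ = set(tokens)                           # repetitions collapsed into types

    print(len(tokens), "word tokens")  # prints: 7 word tokens
    print(len(types_), "word types")   # prints: 5 word types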
1 The Data
The distinction between types and tokens has widespread application. The present chapter will show just how widespread it is. Reference to types occurs not only in philosophy, logic, zoology and linguistics, but in most other disciplines. First we will look at the important role the type–token distinction plays in philosophy; then I will present the data that show that talk of types is thoroughly ensconced in ordinary and scientific language and theory. I will not argue in this chapter that, as a result of the truth of this type-talk, types exist; that is the thesis of chapter 2. Nor will I argue here or anywhere else that there are no type-free statements logically equivalent to any of these type statements; I assume there often are. Chapters 3 and 4 will examine whether all of them can be so paraphrased. The purpose of chapter 1 is simply to present the multitudinous data that would require this nominalistic paraphrasing.

Type–Token Use in Philosophy

In philosophy of language, linguistics, and logic the type–token distinction is clearly important because of the central role played in all three by expressions, which come in types and tokens. Especially noteworthy is the debate concerning the relation between the meaning of a sentence type and the meaning of a sentence token (a relation that figures prominently in Grice 1969). So, for example, the sentence type ‘John loves opera’ means that John loves opera, but a speaker might say it sarcastically meaning by her token that John loathes opera.

In philosophy of mind, the distinction yields two versions of the identity theory of mind (each of which is separately criticized in Kripke 1972, for example). The type version of the identity theory (defended by Smart [1959] and Place [1956], among others) identifies types of mental events/
states/processes with types of physical events/states/processes. So, for example, it says that just as lightning turned out to be electrical discharge, so pain might turn out to be C-fiber stimulation, and consciousness might turn out to be brain waves of 40 cycles per second. On this type view, thinking and feeling are certain types of neurological processes, so absent those processes, there is no thinking or feeling. The token identity theory (defended by Kim [1966] and Davidson [1980], among others) maintains that every token mental event is some token physical event or other, but it denies that a type matchup must be expected. So, for example, even if pain in humans turns out to be realized by C-fiber stimulation, there may be other life-forms that lack C-fibers but have pains too. And if consciousness in humans turns out to be brain waves that occur 40 times per second, perhaps androids have consciousness even if they lack such brain waves. In aesthetics, it is generally necessary to distinguish works of art themselves (types) from their physical incarnations (tokens). (See, e.g., Wollheim 1968; Wolterstorff 1980; Davies 2001.) This is not the case with respect to oil paintings like da Vinci’s Mona Lisa where there is and perhaps can be only one token, but it seems to be the case for many other works of art. There can be more than one token of a sculpture made from a mold, more than one elegant building made from a blueprint, more than one copy of a film, and more than one performance of a musical work. Beethoven wrote nine symphonies, but although he conducted the first performance of Symphony No. 9, he never heard the Ninth, whereas the rest of us have all heard it; we have all heard tokens of it. In ethics, actions are said to be right or wrong—but is it action types or only action tokens? There is a dispute about this. Most ethicists from Mill (1979) to Ross (1988) hold that the hallmark of ethical conduct is universalizability, so that a particular action is right/wrong only if it is right/wrong for anyone else in similar circumstances—in other words, only if it is the right/wrong type of action. If certain types of actions are right and others wrong, then there may be general indefeasible ethical principles (however complicated they may be to state, and whether they can be stated at all). But some ethicists hold that there are no general ethical principles that hold come what may—that there is always some circumstance in which such principles would prescribe the wrong action— and such ethicists go on to add that only particular (token) actions are right or wrong, not types of actions. See, for example, Murdoch 1970 and Dancy 2004.
The Data

The type–token distinction is also widely applicable outside of philosophy. The main point of the current chapter—apt to be denied, ignored or understated by most philosophers (even myself, before I did the research)—is that: Talk of types is thoroughly ensconced in ordinary and scientific language and theory. That is, we seem to refer to and quantify over types (and other abstract objects) with an astounding level of frequency in ordinary language, science, and art—whenever generality is sought. In such contexts, apparent references to types may well be the rule, rather than the exception. Even when the reference is ambiguous as to a type or a token—for example, in the title of the Pratt (1998) article “From the Andes to Epcot, the Adventures of an 8,000-Year-Old Bean”—it usually turns out it is the type being referred to. Witness how in the bean case the ambiguity was not cleared up until the fourth sentence of this paragraph:

What may be the world’s oldest bean has driven Daniel Debouck, a Belgian plant geneticist, in and out of Andean mountains and valleys from Bolivia to Ecuador for 20 years. Recently, the bean has embarked on an international circuit, from Peru’s Sacred Valley of the Incas to Epcot Center at Disney World, with stops in the Midwest and Japan. This is the story of the nuna, an 8,000-year-old bean. For most of its history, the nuna has been a bit player on the agricultural stage.
Since the ubiquity of type talk is somewhat contrary to what one’s philosophical training would lead one to expect, this entire chapter has been devoted to presenting it. The ubiquity of type talk will be shown by exhibiting many examples from various sources and disciplines. To assuage the reader’s concern that perhaps these examples are not representative but constitute merely carefully culled “anecdotal evidence,” we shall use as our starting point an arbitrary copy of a science periodical. Science magazine might seem a good choice, because it is a general science periodical that covers numerous areas of science. A quick read of the July 7, 1995, issue (randomly selected) reveals that at least two-thirds of the articles clearly have apparent references to types. Unfortunately, it would take the rest of this book to wade through all twenty-eight articles.1 And, with titles like “p34cdc2 and Apoptosis,” or “In Vivo Transfer of GPI-Linked Complement Restriction Factors from Erythrocytes to the Endothelium,” there is a risk of distraction by too much biomedical jargon. Hence my second choice: one issue of the
New York Times’s “Science Times.” It also covers general science—often relying on the same sources as periodicals such as Science—but it usually contains fewer than a dozen articles. Examining it will be a more manageable job. The selection was “random” since I had not looked at it before I selected it. What topics are touched on will dictate what topics we pursue in more detail after we’ve scrutinized the newspaper’s presentation. I hope the reader will be impressed by the sheer volume of the examples, their ubiquitousness, and how ordinary, familiar and harmless they seem. Those for whom this point is obvious would do well to read the section on phonology in this chapter (for it will be useful later) and then proceed to chapter 2, where it is urged that we ought to conclude from the data that types exist. (If the reader is straining at the bit to offer type-free paraphrases of all the data, she should proceed at once to chapters 3 and 4, where it is shown that there is reason to think this cannot be done.) The January 2, 1996, copy of the New York Times’s “Science Times” contains nine articles on a variety of topics: one on historical linguistics, two on environmental biology, two on human genetics (four altogether in biology), one on physics, two on computers, and one on chess. Even a casual reading shows that eight of the articles contain many apparent references to, or quantifications over, types (and the ninth is not without them). The references are “apparent” in that they appear in the surface structure. That is, in the surface structures of the sentences, there are many quantifiers that, if they quantify over anything, must be construed as quantifying over types rather than tokens (particulars with unique spatiotemporal locations)—unless we are to impute nonsense or falsity to the sentence. Similarly, there are many terms in the surface structure that qualify syntactically as singular terms—such as ‘the Florida gopher tortoise’, ‘the same word’, ‘the long gene’, or ‘the D4 dopamine receptor gene’—which (in the context in which they occur) cannot be construed as referring to tokens, unless we are to be uncharitable to the author. (Singular terms include noun phrases like the ones mentioned, and proper nouns like ‘Frederick the Great’, ‘Boston’, and ‘Alpha Centauri’. They purport to refer to a single thing, unlike general terms such as ‘tree’. There will be a bit more discussion about singular terms in chapter 2.) The idea will be clear from the examples. The examples will be presented with a brief indication in each case as to why, if we assume the sentences are true and if we take the surface structure seriously, the referents of certain singular terms cannot be spatiotemporally located particulars—tokens—and the quantifications—construed straightforwardly—cannot be quantifications over tokens. The reader may be anxious to “analyze away” apparent type
talk in favor of talk of tokens by considering paraphrases that quantify only over tokens. Such paraphrases are useful and some will be examined as we go along. But patience is urged; the issue of paraphrasing away all references to types will be taken up in chapters 3 and 4. Similarly, for a discussion as to why we should take the surface structure seriously as a guide to ontology, see chapter 2, because there will be no such discussion in chapter 1.

Here are some of the more representative examples, where the apparent references to and quantifications over types have been italicized. (Italicized passages occurring in the original quote have been underlined instead.) The reader may notice occasional explicit apparent references to classes and properties in the examples. These have not been italicized. Compared to type-references, apparent references to classes and properties are few and far between. (Of course, since classes are abstract objects, like types, and properties have instances, as types do, and all have conventionally been considered “platonic objects,” references to classes and properties would also be welcome to the thesis of this book.)

1 Linguistics
Historical-Comparative Linguistics

Linguistics is awash in apparent references to and quantifications over types. The situation in historical-comparative linguistics is analogous in many respects to that in evolutionary biology, except that languages are among the taxonomic units, not species. The analogy is made explicit in Johnson 1996, “New Family Tree Is Constructed for Indo-European Languages”:

In tracing the pedigree of languages, linguists face the same problems biologists confront in drawing taxonomic charts of the species. Similar traits do not necessarily imply common descent. . . . (p. B15)
I will argue in chapter 6 that the analogy is even stronger, because words should be viewed as real kinds, just as species are. As we saw earlier, it was by means of a word that Peirce first characterized the distinction and coined the expression ‘type-token’ for it. From the Times: “Suppose two languages inherit the same word meaning ‘winter’, and both of them independently shift its meaning so that it means ‘snow’ instead,” Dr. Ringe said. “Greek, Armenian and Sanskrit actually did that.” (p. B15)
Since the same word is said here to occur in more than one language, and, unlike a word token, can shift its meaning, then the same word is a type. (I assume that ‘winter’ refers to a word, as in ‘meaning what “winter”
means’. If not, delete the italics.) And if word types are abstract objects, then so are the languages that contain them: A proposed family tree of Indo-European languages would account for the development of the Germanic tongues from an early offshoot of the ancestor of Slavic and Baltic languages, which include Lithuanian, Latvian, Russian, Czech and Polish. (p. B9)
Since the languages referred to here are spoken by many people, they are types, not spatiotemporal particulars,2 and so is the tree that accounts for their development. When we shift our gaze from the newspaper to something more scholarly in linguistics, we see that apparent references to types, and quantifications over them, are even more frequent. All examples are from Collinge 1990, An Encyclopedia of Language, which consists of twenty-six articles. Author after author explains a subdiscipline of linguistics by making copious apparent references to types. There was no need to search painstakingly for examples; the articles most relevant to the question of what the ontology of linguistics is contain many examples on almost every page. It would perhaps surprise no one that languages should turn out to be abstract objects, but we shall see that even the most “concrete” and “physical” end of linguistics, phonetics, requires copious types. After that we shall proceed to phonology, and in chapter 6 to lexicography.

Phonetics

In “Language as Available Sound: Phonetics,” M. K. C. MacMahon explains (Collinge 1990):

Phonetics (the scientific study of speech production) embraces not only the constituents and patterns of sound-waves (ACOUSTIC PHONETICS) but also the means by which the sound-waves are generated.
His description of articulatory phonetics contains many apparent references to parts of the speech organs, for example: The sound-waves of speech are created in the VOCAL TRACT by action of three parts of the upper half of the body: the RESPIRATORY MECHANISM, the voice-box (technically, the LARYNX), and the area of the tract above the larynx, namely the throat, the mouth, and the nose. . . . The front of the larynx, the ADAM’S APPLE . . . is fairly prominent in many people’s necks, especially men’s. Anatomically, the larynx is a complicated structure . . . it contains two pairs of structures, the VOCAL FOLDS and VENTRICULAR FOLDS. (p. 4)
There follow several more pages of painstaking descriptions of the fifteen other parts of the speech organs (the tongue has five parts), every one of which is referred to as ‘the’ such and such. (Interestingly, the author defends this way of speaking as follows:
X-ray studies of the organs of speech of different individuals show quite clearly that there can be noticeable differences—in the size of the tongue, the soft palate and the hard palate, for example—yet regardless of genetic type, all physically normal human beings have vocal tracts which are built to the same basic design. In phonetics, this assumption has to be taken as axiomatic, otherwise it would be impossible to describe different people’s speech by means of the same theory. Only in the case of individuals with noticeable differences from this assumed norm (e.g. very young children or persons with structural abnormalities of the vocal tract such as a cleft of the roof of the mouth or the absence of the larynx because of surgery) is it impossible to apply articulatory phonetic theory to the description of the speech without major modifications to the theory. [p. 6])
What do the speech organs produce? Words, syllables, and sound-segments— types, that is: Unless we are trained to listen to speech from a phonetic point of view, we will tend to believe that it consists of words, spoken as letters of the alphabet, and separated by pauses. This belief is deceptive. Speech consists of two simultaneous ‘layers’ of activity. One is sounds or SEGMENTS. The other is features of speech which extend usually over more than one segment: these are known variously as NON-SEGMENTAL, . . . or PROSODIC features. For example, in the production of the word above, despite the spelling which suggests there are five sounds, there are in fact only four, comparable to the ‘a’, ‘b’ ‘o’ and ‘v’ of the spelling . . . but [there are] also . . . two syllables, ‘a’ and ‘-bov’. Furthermore, the second syllable, consisting of three segments, is felt to be said more loudly or with more emphasis. (pp. 6, 8)
Not only do individual syllable types like ‘a’ and ‘-bov’ exist, but also the syllable itself exists as a theoretical unit in phonetics (much as the species does in Mayr’s discussion below): The nature of the syllable has been . . . a matter for considerable discussion and debate. . . . [M]ost native speakers of a language can recognise the syllables of their own language. . . . Various hypotheses have been suggested: that the syllable is either a unit which contains an auditorily prominent element, or a physiological unit based on respiratory activity, or a neurophysiological unit in the speech programming mechanism. The concept of the syllable as a phonological, as distinct from a phonetic, unit is less controversial. . . . (p. 8)
MacMahon’s discussion of vowels clearly quantifies over vowels (types, i.e.): The notion that there are five vowels in English is quite erroneous, and derives from a confusion of letter-shapes and sounds. Most accents of English contain about 40 vowel phonemes, but the number of actual vowel sounds that can be delimited in any one accent runs into hundreds. (p. 19)
Obviously, if by “actual vowel sounds” he meant tokens, the number would run into trillions rather than hundreds. Similarly,
Jones’s . . . contribution was to provide a set of reference points around the periphery of the [vowel] area [of the tongue] in relation to which any vowel sound of any language whatever could be plotted. These reference points are known as the Cardinal Vowels. Altogether there are 18 Cardinal Vowels . . . (p. 21)
Notations and transcriptions are required to describe movements away from these Cardinal Vowels: The notation of a Southern English pronunciation of ah, for example, could be [ɑ+]. . . . When making a phonological transcription . . . the use of a particular Cardinal Vowel symbol does not necessarily mean that the phonological unit represented by that symbol is a Cardinal in quality. (p. 21)
No doubt the reader gets the idea and can be spared the examples concerning consonants.

Phonology

In “Language as Organized Sound: Phonology,” Erik Fudge (Collinge 1990) contrasts phonetics with phonology. This is an important distinction, and some of the facts mentioned below will prove crucial in chapter 3. For example, it is not generally appreciated outside of linguistics that some of the properties of a phoneme—its fricativeness, perhaps—may not be physically phonetically present in a particular utterance, as when ‘seven’ with its ‘v’ phoneme is pronounced [sem]. (‘Fricative’ characterizes the frictional passage of a breath against a narrowing at some point in the vocal tract—e.g., lips, teeth or tongue. Examples include the initial sounds in ‘fish’, ‘vine’, ‘thin’, ‘sit’.) Fudge explains that phonetics, unlike phonology, is independent of particular languages; it is concerned with “the observable” and is dependent on technical apparatuses for obtaining results (p. 30). Phonetics, he claims, disproves a common fallacy about the nature of speech, i.e. the assumption that speech is made up of ‘sounds’ which are built up into a sequence like individual bricks into a wall (or letters in the printed form of a word), and which retain their discreteness and separate identity. (p. 31)
Moreover, it is very rare for two repetitions of an utterance to be exactly identical, even when spoken by the same person. (p. 31)
(Utterance here is a type because if it were a unique space-time token, it would not be possible for there to be two repetitions of it.) Phonology, on the other hand, is concerned with particular languages; it is more dependent on its historical-philosophical contexts than on technical apparatuses; it assumes that many utterances (tokens) are “alike in form and meaning” (p. 31).
[I]t is much more reasonable to regard the phonological representation [of an utterance] as being a string of individual, discrete elements much like letters in a printed word. (p. 31)
These discrete elements are, of course, the standard theoretical units of phonology, phonemes and allophones. Fudge illustrates the relationship between them in type terms:

In standard English as spoken in England, the l of feel [dark [l]] is pronounced differently from the l of feeling [clear [l]]. . . . The technical term for the former articulation is ‘velarised’. . . . Other varieties of English do not exhibit this difference: many Scots and American varieties have dark [l] in both feel and feeling. . . . [T]he difference between clear [l] and dark [l] is completely predictable from the phonetic context in which the l appears. . . . [so] we say that they are allophones of the same PHONEME. (pp. 32–33)
Other examples he gives rely heavily on types:

[P]in is pronounced [pʰɪn], whereas spin is [spɪn]. The strongly aspirated [pʰ] never occurs after /s/, and the unaspirated [p] never occurs at the very beginning of a syllable . . . an utterance-final /p/ (as in Come on up!) is quite likely not to be released at all. (pp. 33–34)

English /r/ has at least four different allophones. . . . (p. 34)

For many speakers the ‘long o’ phoneme has a much more ‘back’ pronunciation before dark [l] than before other sounds. (p. 34)
Allophones are not to be confused with alternations: In the feel/feeling case which we considered earlier, the two words concerned are closely connected . . . : the difference may be described as an ALTERNATION. . . . Allophones of the same phoneme often participate in alternations in this way. However, it is not necessary to have an alternation in order for two sounds to be allophones of the same phoneme. . . . Conversely, the existence of an alternation does not necessarily indicate that the alternating sounds are allophones of the same phoneme. . . . [A]lternations . . . [between distinct phonemes] are often termed MORPHOPHONEMIC alternations, because they are alternations between phonemes with morphological relevance. (pp. 37–38)
One reason that phonology is concerned with particular languages is that sounds which are allophones of the same phoneme in one language may in other languages operate as distinct phonemes. (p. 33)
Thus one cannot “read off” the phonological representation of a particular utterance from a comprehensive phonetic representation. This fact will prove important in chapter 3. The relationship between phonetic differences and phonemic differences is complex; phonetic differences do not always give rise to phonemic differences:
Where a particular phonetic difference does not give rise to a corresponding phonemic difference, we say that this phonetic difference is NON-DISTINCTIVE. . . . [D]ifferences which can give rise to a change of meaning, i.e. phonetic differences between phonemes, are referred to as DISTINCTIVE. The difference between [p] and [b] in English for example, is distinctive: pit and bit, ample and amble, tap and tab, are pairs of distinct words, not alternative pronunciations. (p. 35)
To make matters worse (for someone opposed to types), “phonologically relevant properties connected with the utterance are [not] necessarily physically present in the utterance” (p. 32): Take, for instance, the English word seven (phonetically [sevən] in careful speech). The fricativeness of the segment after the [e] vowel would certainly be taken as an essential property . . . of that segment: in English the difference between [b] and [v] is distinctive. . . . In informal speech the word might be pronounced something like [sebm], where the segment after [e] is a plosive . . . not a fricative; the essential distinctive feature of fricativeness . . . can no longer be found in the speech signal at this point. Indeed, in very colloquial speech the pronunciation might well be simplified to something like [sem], in which what was originally the fricative has no separate existence of its own in the speech signal. (p. 43)
This suggests that the “phonologically relevant properties connected with an utterance” are best understood as properties of utterance types, which some, but not all, tokens possess. At any rate, the phonemes themselves are easily describable as types:

The net result . . . is that the phonemes of English fall into classes from which the distinctive features form convenient labels: /p t k f θ s ʃ h/ are the class of ‘voiceless’ sounds in English, /t d s z θ ð l n ʃ r/ are the ‘coronals’ . . . , /m n ŋ/ are the ‘nasals’, /i e ɒ u / are the ‘short vowels’, and so forth. (p. 35)
As with the species and the syllable, the phoneme itself (and not just this or that phoneme) may be considered a type of types: Some scholars have viewed the phoneme as a family of sounds (allophones) in which (i) the members of the family exhibit a certain family resemblance, and (ii) no member of the family ever occurs in a phonetic context where another member of the family could occur. (p. 33)
Many more examples of types in linguistics could be exhibited, but I expect the above will suffice.

2 Biology
As I said, there were four articles on biology: two on environmental biology and two on human genetics.
Environmental Biology

Species are natural kinds (more on this in chapter 6) that have members. I will argue that a species is best understood as a type, not as a set or class (understood in the usual extensional mathematical sense), and that its members are tokens. The following examples from Dicke 1996, “Numerous U.S. Plant and Freshwater Species Found in Peril,”3 are quite characteristic of discussions of particular species.

The ivory-billed woodpecker, once North America’s largest and most spectacular, was declared extinct. Less than a century ago, it was found across the South. Its last confirmed sighting in the United States was in Louisiana in the 1950s. The banded bog skimmer, a rare dragonfly, was found for the first time in Maine. It was the first time it had appeared so far north. The Tarahumara frog, which lived in Arizona, has disappeared from the United States. However, it is still found in Sonora, Mexico. (p. B12)
Obviously, no particular flesh-and-blood ivory-billed woodpecker token “was found across the South a century ago” and now is extinct. ‘It’ refers to a type of woodpecker. Similarly, no particular banded bog skimmer is rare, and no particular Tarahumara frog disappeared from the United States—a certain type of dragonfly and type of frog has those characteristics. But, it might be asked: how does a species come by such characteristics as being found across the South if a species is an abstract object? And the obvious and correct answer is: by virtue of facts about its tokens. The ivory-billed woodpecker was “found across the South” because there were tokens of it in all parts of the South; it was relatively easy to find a token most anywhere in the South. The banded bog skimmer was found for the first time in Maine because a token of it was found. The Tarahumara frog, which lived in Arizona, can be said to have “disappeared from the United States,” because although there used to be Tarahumara frogs in the United States, there are none in the United States anymore. It is very important to note that I am not claiming that there are no sentences equivalent to the above sentences but which do not appear to refer to types and quantify only over tokens. Nor am I denying that facts about types are in large part dependent on facts about tokens. If the ivory-billed woodpecker is extinct, it is because there are no more of them.4 The present point is only to show that apparent references to types are extremely common. (Note also that I am not trying to argue in this chapter that types exist; that will be the job of chapter 2.) The use of singular terms to refer to particular species also appears in the Stevens (1996) article, “Wildlife Finds Odd Sanctuary on Military Bases”:
At Eglin Air Force Base, under the flight path of A-10 warplanes, rare species like the endangered red-cockaded woodpecker . . . and the imperiled Florida gopher tortoise . . . thrive in a habitat of longleaf pine. (p. B9)
(Again, it might be asked: if a species is an abstract object, how can it be said to “thrive in a habitat of longleaf pine”? And the same sort of answer presents itself: the predicate applies to the species in virtue of facts about members of it. Obviously, the members of it need not thrive in order for the species to thrive; all that is needed is that a sufficient number of members of it live long enough to reproduce for the species to be said to thrive.) Quantifications over species also occur. In Dicke 1996, for example, we find: Of 20,481 species examined, about two-thirds were secure or apparently secure, while 1.3 percent were extinct or possibly extinct, 6.5 percent were critically imperiled, 8.9 percent were imperiled, and 15 percent were considered vulnerable. (p. B12)
It is clear that the 20,481 things examined are not particular organisms, but types of organisms, species. Again, it is facts about tokens that make these claims true (although I for one do not know what those facts are). The point is that what is being quantified over are types, not tokens.

Human Genetics

In Angier 1996, “Variant Gene Tied to Love of New Thrills,” we find:

Maybe it is appropriate that the first gene that scientists have found linked to an ordinary human personality trait is a gene involved in the search for new things. (p. A1)
Obviously “the gene” in question is not a particular gene from one cell of one person, but is a gene many of us have tokens of—in fact many tokens of (i.e., a token in each cell in our bodies). The gene encodes the instructions for the so-called D4 dopamine receptor, one of five receptors known to play a role in the brain’s response to dopamine. (p. A1)
If the gene is a type, then so are the instructions it encodes and the receptor for which it encodes the instructions, and the brain that responds and its response. (That the receptors are not tokens is clear anyway from the quantification over them, since obviously each individual brain has many more than five receptor tokens that are able to respond to dopamine.) Because if the gene is a type—one that Bill Clinton, say, has many tokens of—then the dopamine receptor for which it encodes instructions cannot be one of Clinton’s many dopamine receptors, but must be a type of receptor, of which Clinton has many tokens.
As it turns out, novelty seekers possess a variant of the D4 receptor gene that is slightly longer than the receptor of more reserved and deliberate individuals. In theory, the long gene generates a comparatively long receptor protein, and somehow that outsized receptor influences how the brain reacts to dopamine. (pp. A1, B11)
Again, since the D4 receptor gene is a type, so are its two variants, the long gene and that of more reserved and deliberate individuals—also, the proteins that each generates. (Only a token of a gene can generate a token of a protein.) Similarly, in what follows, since the gene is a type, so is the report of the link: It is also the first known report of a link between a specific gene and a specific normal personality trait. . . . [T]he gene does not entirely explain the biological basis for novelty seeking. . . . Scientists say the dopamine receptor accounts for perhaps 10 percent of the difference in novelty-seeking behavior between one person and the next. . . . [W]e would expect maybe four or five genes are involved in the trait. (p. B11)
The last sentence above quantifies over gene types. In Fisher 1996, “Second Gene Is Linked to a Deadly Skin Cancer,” we find similar apparent references. Clearly, scientists don’t waste their time naming particular gene tokens; CDK4 is a gene type, as is its normal form, its mutated form, its protein, and so on: Researchers have found a second gene responsible for malignant melanoma. . . . [S]cientists . . . have identified a defect in a gene known as CDK4. . . . Their findings appear in this month’s issue of the journal Nature Genetics. . . . In its normal form, the CDK4 protein inhibits the p16 protein and so prevents the cell from dividing. But in the mutated form of the CDK4 gene its protein product too is changed. . . . The scientists . . . found the defect to be a germline mutation, meaning that it can be passed down from parent to child. (p. B18)
Admittedly, it may be possible to pass a particular gene token—one particular gene in one particular gamete—from parent to child; but since the same gene token does not get passed on to the next generation also, much less from other parents to their children, the it referred to in the final sentence above must be a gene type. Notice too that the journal referred to is a type, as is each of its issues, of which there is only one this month (with presumably many tokens). When we turn from the New York Times to a more scholarly work in biology, we can see that the apparent references to/quantifications over biological types exhibited above are not peculiar to the newspaper. Ernst Mayr 1970 (Populations, Species, and Evolution) is a good follow-up because it is about both species and genetics. Opening it to a page chosen at
random reveals an unbelievable amount of “type talk.” The sheer volume of such talk shows that anyone who wants to “analyze away” type talk in favor of talk about tokens had better offer a systematic reduction of talk about types (something that, I will argue in chapters 3 and 4, there is ample reason to think cannot be done). For example, here are some of the dozens of quantifications over species and subspecies we find on this randomly selected page from Mayr:

As a first approach to a study of intraspecific variability one may analyze the presence and frequency of subspecies in various groups of animals. The number of subspecies correlates, by definition, with the degree of geographic variability and depends on a number of previously discussed factors. . . . Degree of variability may differ quite strongly in families belonging to the same order. For instance, among the North American wood warblers (Parulidae) only 20 (40.8 percent) of the 49 species are polytypic [have several subspecies], while among the buntings (Emberizidae) 31 (72.1 percent) of the 43 North American species are polytypic. The difference is real and not an artifact of different taxonomic standards. Of the species of passerine birds in the New Guinea area 79.6 percent are polytypic, while only 67.8 percent of the North American passerines are polytypic. Among the 25 species of Carabus beetles from central Europe, 80 percent are polytypic, while in certain well-known genera of buprestid beetles not a single species is considered polytypic. . . . Classifying species as monotypic or polytypic is a first step in a quantitative analysis of phenotypic variation. Another way is to analyze the subdivisions of polytypic species: What is the average number of subspecies per species in various groups of animals and what is their average geographic range? There are believed to be about 28,500 subspecies of birds in a total of 8,600 species, an average of 3.3 subspecies per species. It is unlikely that this average will be raised materially (let us say above 3.7) even after further splitting. The average differs from family to family: 79 species of swallows (Hirundinidae) have an average of 2.6 subspecies, while 70 species of cuckoo shrikes (Campephagidae) average 4.6 subspecies. . . . (p. 233)
Mayr also refers to higher-order types, for example the species, to convey higher-order generalizations. That is, in characterizing the population structure of species generally, instead of referring to an individual species, for example the ivory-billed woodpecker, he writes of “the species”—and consequently of its central populations, its peripheral populations, its epigenotype, its border, and so on, all of which are types if the species is: These marginal populations share the homeostatic system, the epigenotype, of the species as a whole. They are under the severe handicap of having to remain coadapted with the gene pool of the species as a whole while adapting to local conditions. The basic gene complex of the species (with all the species-specific canalizations and feedbacks) functions optimally in the area for which it had evolved by selection, usually somewhere near the center. Here it is in balance with the environment and here it can afford
much superimposed genetic variation and experimentation in niche invasion. Toward the periphery this basic genotype of the species is less and less appropriate and the leeway of genetic variation that it permits is increasingly narrowed until much uniformity is reached. These peripheral populations face the problems described in the discussion of “species border”. . . . Environmental conditions are marginal near the species border, selection is severe, and only a limited number of genotypes is able to survive these drastic conditions. (p. 232)
3 Artifacts
Computers In Lewis 1996, “About Freedom of the Virtual Press,” we are told The personal computer is now officially grown up. In January 1975, the Altair 8800 kit, considered the first true personal computer, made its debut on the cover of Popular Electronics. . . . Today, having reached the age of 21, the personal computer is only now beginning to reveal its true value and greatest potential. . . . (p. B14)
It is clear that the personal computer being referred to here is a type of computer, rather than the first token ever made, because it has a twenty-one-year history which only begins with the Altair 8800—a subtype with spatiotemporal tokens—but doesn’t end there. Similarly, the other computer article in the issue of Science Times we’re considering, “Sometimes Achieving Simplicity Isn’t Cheap and Isn’t So Easy,” is devoted to another subtype of personal computer, the PN-8500MDS Super Powernote, said to be a “cheap nonstandard computer with limited functions” (p. B11). Human artifacts lend themselves well to type talk. Of course, when only one machine of a certain type gets built—the Cassini spacecraft bound for Saturn in 1997, for example—it may be tempting to say that here there is no type; there is only the “token.” But far more common is the situation where there is more than one token, as with the Altair 8800, or the Volvo 850: The 850GLT, the latest addition to Volvo’s menu in the U.S., is all new from the kisser to the tail. . . . [T]he 850 was conceived in the late 1970s, and design work began in 1986. . . . (Car and Driver, November 1992)
The reference here surely is not to a particular 850, which would not warrant so much attention and design work, but rather to a type. I trust such type talk is familiar enough that it is not necessary to produce the countless other examples that might be given. Just one more example of a quote about a human artifact needs to be mentioned here; and that is of a work of art. A case can be made that all apparent references to paintings
are to particular physical objects uniquely located in space and time5 and not to types; but a great deal, perhaps most, other apparent references to works of art, in other art forms, do not so refer. When Charles Rosen (1972, p. 186), for example, in a very characteristic passage, writes that The opening of the E flat Quartet K. 428 shows how widely Mozart could range without losing the larger harmonic sense. . . . The opening measure is an example of Mozart’s sublime economy. It sets the tonality by a single octave leap . . . , framing the three chromatic measures that follow. The two E flats are lower and higher than any of the other notes, and by setting these limits they imply the resolution of all dissonance within an E flat context
—by ‘the E flat Quartet K. 428’ he does not intend to be referring to a particular performance (or to any other particular object or event) but to the E flat Quartet itself—something that was performed in Mozart’s day and also today. Wollheim (1968), Wolterstorff (1975, 1980), and Davies (2001) have argued that the work itself—what Mozart composed—is an abstract object. I agree. What Mozart composed is best understood as a type, and when it is performed, one hears a token of it. Of course, if the E flat Quartet K. 428 is a type, then so is its opening, its opening measure, and each of its other measures, its tonality, its first interval, each occurrence of E flat, and so on. (See chapter 7 for a discussion of occurrences.) Chess It should come as no surprise, then, that the Byrne 1996 column in Science Times about chess, another human invention, contains many apparent references to types, even though the article is ostensibly about a particular chess game played in the 1995 United States Championship: Accepting the Queen’s Gambit with 2 . . . dc has been known since 1512. In the early days, Black tried to keep the pawn, but after some bad positional and tactical knocks, the strategy has aimed for a semiopen board with free piece play and pressure against the white d4 pawn. No more 3 . . . b5? 4 a4 c6 5 e3 Bb7 6 ab cb 7 b3 with the fall of the pawn and superiority for White. Black must be watchful in this opening. Thus, the pawn snatch with 10 . . . Nd4 11 Nd4 Qd4 is too risky. For example, 12 Rd1 Qg4 13 Qg4 Ng4 14 Bb5! forces mate. But White has to be precise, too, after 10 . . . Be7, and play 11 a4 b4 12 Rd1. Instead, Dzindzichashvili took too much for granted in continuing to gambit his d4 pawn. After 11 c3? Nd4! 12 Nd4 Qd4, he could have tried 13 Rfd1, but 13 . . . Qg4 14 Qe3 Bb7 15 f3 Qb4 is safe enough for Black. He could also have tried 13 Qf3. . . . (p. B18)
If accepting the Queen’s Gambit with 2 . . . dc has been known since 1512, clearly it is a type of opening, since the only token that Byrne might be referring to, Patrick Wolff’s doing so in the 1995 U.S. Championship, has
not “been known since 1512.” (For the same reason, Byrne could not be referring to a possible event-move of Wolff’s, since it too could not have “been known since 1512.”) So too the Queen’s Gambit itself is a type, a token of which Dzindzichashvili played. Black cannot be Wolff, because Wolff did not play the game “in the early days” or receive “bad positional and tactical knocks.” Therefore the strategy referred to, Black’s strategy, cannot be the token of it that is Wolff’s alone, nor can the pawn referred to be any particular pawn token in Wolff’s possession. Most impressive of all is the sheer number of sequences of moves referred to, like the pawn snatch, 10 . . . Nd4 11 Nd4 Qd4, no token of which occurred in the actual Wolff-Dzindzichashvili game. It was pointed out to me that “the pawn snatch, 10 . . . Nd4 11 Nd4 Qd4” might be read as referring to a nonactual but possible token, the possible event of Wolff’s snatching the pawn, rather than to a type of move, and similarly for the moves that Dzindzichashvili “could have tried.” But the passages in question read better when interpreted as referring to types of moves; that is, doing so is more consistent with the rest of the paragraph and better achieves the level of generality the analysis of the game is seeking (as indicated by the use of the terms ‘White’ and ‘Black’). If I am right, then although the game (token) itself is described move by move in the article, most of the sequences of moves referred to above did not occur in the game at all; they are not tokens, but types of sequences of chess moves. (And if I am wrong, replace ‘most’ by ‘some’ in the preceding sentence.) Tokens of them may or may not exist. They are plays in other versions of the Queen’s Gambit—and these too are types. So Byrne 1996 is “about” a chess game (token), but the explanations offered in chess theory involve apparent references to types of chess moves. 4
Physics
Not even something as knock-down drag-out physical as football is safe from abstract types. In Leary 1996, “Physicists See Long Pass as Triumph of Torques,” we are told that “It turns out that the flight of a football is almost as complicated as the flight of an airplane,” said Dr. Rae. . . . Dr. Rae has done computer simulations of the forces acting on a flying ball and developed mathematical equations explaining the interactions. . . . [T]hree different kinds of torque are shaping its motion, he said. The wobble, which causes the front end of the football to trace out a circular pattern in the air as it travels, appears to keep the ball on track. . . . Dr. Rae said he discovered that the Magnus force, which results in areas of high and low pressure on opposite sides of the football, produces a torque toward the rear that pushes the nose of the ball to the right. (pp. B9, B16)
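The physical vocabulary in the passage has a standard textbook rendering; the following is general background, not a reconstruction of Dr. Rae’s own equations, which the article does not reproduce:

$$\vec{\tau} = \vec{r} \times \vec{F}, \qquad \vec{F}_{\text{Magnus}} \approx S\,(\vec{\omega} \times \vec{v}),$$

where τ is a torque, r the lever arm from the ball’s center of mass to the point at which a force F acts, ω the ball’s spin vector, v its velocity relative to the air, and S a coefficient depending on the ball and the air. Notice that even in this schematic form the equations are stated for the football, the Magnus force, and the wobble—types—and not for any particular ball on any particular Sunday.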
Of course particular footballs are located in space and time, but the football is a type of object. (So although the first sentence might be viewed as a quantification over particular footballs, I italicized it anyway, because of the many other references to ‘the football’.) And if the football is a type, then so is its front end, rear end, nose and opposite sides, its wobble, and all three of the different kinds of torque shaping its motion, not to mention the forces that produce them. One of the forces cited as acting on the football was the Magnus force. Let us leave the newspaper behind, and see what physicists say about forces generally. Are there both particular forces and types of forces? Michael Faraday (1860) explains what he means by a “force, or power”: Suppose I take this sheet of paper, and place it upright on one edge, resting against a support before me . . . and suppose I then pull this piece of string which is attached to it. I pull the paper over. I have therefore brought into use a power of doing so—the power of my hand carried on through this string. . . . (p. 16)
Clearly, the power—the force—mentioned is supposed to be particular to Faraday’s hand, and therefore is a token force. Similarly, a bit of water has a force in it, and each piece of shot, he tells us, has “its own gravitating power”: Here, for instance, is some quick lime, and if I add some water to it, you find another power or property in the water. It is now very hot. . . . Now that could not happen without a force in the water to produce the result. (pp. 22–23) I have here a quantity of shot; each of these falls separately, and each has its own gravitating power. . . . (p. 31)
So there clearly are particular forces (token forces), according to Faraday. And if each piece of shot has its own gravitational force there would have to be very many different forces, or powers—zillions of them. But Faraday says there aren’t: We are not to suppose that there are so very many different powers; on the contrary, it is wonderful to think how few are the powers by which all the phenomena of nature are governed. [The earth] is made up of different kinds of matter, subject to a very few powers. . . . (p. 19) I explained that all bodies attracted each other, and this power we called gravitation. (pp. 44–45)
A simple way to reconcile these apparently conflicting claims of how many forces are involved is to say that, according to Faraday, there are many force tokens, but very few force types.
A notion related to force is field. Einstein (1934) credits Faraday (along with Maxwell) with first conceiving of the field: Faraday conceived a new sort of real physical entity, namely the “field,” in addition to the mass-point and its motion. (p. 35)
Since ‘the mass-point’ must here refer to a type (if it refers at all) because there is more than one mass-point, this suggests that ‘the field’ also refers to a type—unless, that is, Einstein thinks there is only one field. However, he does not, for he says [F]ields are physical conditions of space. (p. 68) The electro-magnetic fields are not states of a medium but independent realities, which cannot be reduced to terms of anything else and are bound to no substratum, anymore than are the atoms of ponderable matter. (p. 104)
Yet Einstein often refers to the electromagnetic field, as, for example, in these passages: The Maxwell-Lorentz theory of the electro-magnetic field served as the model for the space-time theory and the kinematics of the special theory of relativity. (p. 103) Something of the same sort confronts us in the electromagnetic field. (p. 105) Besides the gravitational field there is also the electromagnetic field. This had, to begin with, to be introduced into the theory as an entity independent of gravitation. (p. 73)
The last sentence suggests that the electromagnetic field exists, according to Einstein. And indeed he writes that By the turn of the century the conception of the electromagnetic field as an ultimate entity had been generally accepted. . . . (pp. 43–44) The electromagnetic field seems to be the final irreducible reality, and it seems superfluous at first sight to postulate a homogeneous, isotropic etheric medium. . . . (p. 106)
Once again it seems reasonable to reconcile these claims in terms of types and tokens: for Einstein, the electromagnetic field exists, it is a type of field, and it has many tokens. Another discovery credited to Faraday is that of the electron. Heisenberg (1979), for example, writes: Faraday’s investigations, his discovery of the electron (i.e. the atom of electricity and radio-active radiation) led us finally to Rutherford and Bohr’s famous atomic model and thus introduced the latest epoch of atomic physics. (p. 99)
Edward Teller (1991) explains that Faraday did not measure the size of the copper atom, nor did he measure the charge of the electron; he measured the ratio of the two. (p. 100)
The two Teller is referring to are types: the size of the copper atom, and the charge of the electron. With this we are brought to atomic physics, where a good deal of type talk is to be encountered—for example, in the title of Ernest Rutherford’s important 1911 paper, “The Scattering of α and β Particles by Matter and the Structure of the Atom.” Teller chronicles Rutherford’s activity as follows: To study the atom, the English physicist Rutherford shot electrically charged particles through thin foils. The particles went through the material as if nothing were there. A few of the fast charged particles (which were actually α particles produced in the radioactive decay of heavy elements) were sharply deflected. Rutherford succeeded to explain his results in a quantitative way by a simple model of the atom. (pp. 133–134)
It is clear that the atom of Rutherford’s theory is not only a type, but a very important one, a model of a high order of abstraction. As Teller describes it: In the Rutherford model, the atom consists of a heavy nucleus whose radius is less than one ten-thousandth the radius of the atom. The nucleus carries a positive charge which is a multiple of the charge of the electron. (p. 134)
If the atom is a type, so is its nucleus, its radius, the proton, and the electron. Similarly, the helium atom, its nucleus (an α particle), the hydrogen atom, and so on, referred to in the following, must be types. The bombarding α particles are themselves nuclei of the helium atom (with a charge of 2 units). The α particles carry so much energy that they cannot be deflected by the light electrons (whose weight is 1/1840 of the hydrogen nucleus and even less compared to the four times heavier helium nucleus). . . . From the distribution of the deflection angles that he found, Rutherford deduced that Coulomb’s Law, F = e₁e₂/r², is valid down to a distance less than 1/10,000 of the radius of the atom as a whole (which is about an Angstrom unit or 10⁻⁸ cm). From this, Rutherford conjectured that the hydrogen atom looked like an electron rotating about a proton with the ratio of the masses 1 to 1840. The electron and proton, of course, carry charges which are equal but opposite. (p. 134)
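The quantitative claim Teller reports can be unpacked from the figures he supplies (the symbols and the arithmetic here are mine):

$$F = \frac{e_1 e_2}{r^2}, \qquad r_{\text{nucleus}} < \frac{r_{\text{atom}}}{10{,}000} \approx \frac{10^{-8}\ \text{cm}}{10^{4}} = 10^{-12}\ \text{cm}.$$

That is, the deflection data showed the inverse-square law still holding at distances of about 10⁻¹² cm, which is why the nucleus of the Rutherford model can be assigned a radius smaller than that. And once again the law is stated for the electron and the proton—types—not for any particular pair of charged particles.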
In addition to atoms and molecules, many subatomic particles have been discussed in type terms. Richard Feynman (1995) chronicles their discovery and classification in a few pages in Six Easy Pieces, using copious amounts of type talk to do it:
We have a new kind of particle to add to the electron, the proton, and the neutron. That new particle is called a photon. . . . [Quantum electrodynamics] predicted [that] . . . besides the electron, there should be another particle of the same mass, but of opposite charge, called a positron. . . . (p. 37) The question is, what are the forces which hold the protons and neutrons together in the nucleus? . . . Yukawa suggested that the forces between neutrons and protons also have a field of some kind, and that when this field jiggles it behaves like a particle. [A]nd lo and behold, in cosmic rays there was discovered a particle of the right mass! It was called a μ-meson, or muon. (p. 38)
Obviously, by ‘it’ he means a type of particle, not a particular particle. Owing to its length, I consign to an endnote Feynman’s description of approximately thirty different particles and their interactions. Suffice it to say that his description relies almost exclusively on type talk.6 After his description is as complete as he could make it, he comments that “This then, is the horrible condition of our physics today” (p. 44). Feynman was saying this in the 1960s, before the quark had been discovered and long before the (1996) headline: “Tiniest Nuclear Building Block May Not Be the Quark”: Scientists at Fermilab’s huge particle accelerator 30 miles west of Chicago reported yesterday that the quark, long thought to be the simplest building block of nuclear matter, may turn out to contain still smaller building blocks and an internal structure.
Conclusion Type talk is pandemic. It is not occasional; it is not unusual; it is the norm. So much so—and this has been the point of the present chapter—that it deserves to be taken more seriously than it usually is. It cannot be casually dismissed as “just a way of speaking about tokens” (even if after careful discussion it were to be so analyzed). As the following example from “New Element: Zinc’s Heavy Kin” makes clear, even when there is only one token the talk may revolve instead around the type: Scientists at a German research institute have added a new element to the periodic table: element 112, a heavier, still unnamed relative of zinc, cadmium and mercury. A team of German, Russian, Slovak and Finnish physicists detected a single atom of the new metal on Feb. 9, the Society for Heavy Ion Research in Darmstadt announced today. They made the element by bombarding lead, which is element 82, with zinc, element 30, until two atoms fused as a new substance with as many protons as the two together. . . .
The heaviest element in nature is uranium, which has 92 protons. . . . But 112 will remain nameless for the time being.
We have to face the responsibilities posed by such talk of types: either concede that types exist, or give a systematic semantics for claims apparently referring to types. This book attempts to make the case for the greater plausibility of conceding that they exist.
2 Types Exist
What are we to conclude from the data presented in chapter 1? I think we should conclude that the generic objects, the types, apparently referred to in the data exist—that is, species, genes, epigenotypes, languages, body parts like the larynx, syllables, vowels, allophones, computers like the Altair 8800, Mozart’s Coronation Concerto, the Queen’s gambit, the hydrogen atom, the football and so on. In section 1 of this chapter, it will be argued that, since this conclusion seems to be an immediate consequence of the data, the burden of proof is on those who would deny it. Section 2 will present some objections to this argument, the main one being that each claim that appears to refer to a type is merely a façon de parler for a claim that does not refer to a type. So, because such apparent references can be “paraphrased away,” according to this objection, they are harmless, and we need not suppose types exist. Fully answering this important objection will be the main business of chapters 3 through 5. Before turning to that, I explore in section 3 of this chapter the chief motivation for presuming that there is something suspect about types and hence that there is something to be gained by paraphrasing sentences that appear to refer to types so that the result does not. The chief motivation for denying there are any types is epistemological. Abstract objects, such as types, are said not to stand in causal relations with us, and since knowledge is thought to require such a relation, we cannot, the argument goes, have knowledge of abstract objects such as types. Since this is a front burner issue in philosophy of mathematics (since mathematical objects are abstract objects), much of the discussion occurs in that area. We examine the reasons given by Paul Benacerraf, W. D. Hart, and Hartry Field for demanding a causal requirement on knowledge. Basically, in response I offer a four-part counterargument: (i) any such requirement is either implausible, or is compatible with realism about abstract objects (contra Benacerraf); (ii) there is little help for the requirement to
be had from naturalized epistemology (contra Hart); (iii) Field’s quirky requirement either reduces to a causal one, or reduces to the issue of what the best theory of the world is; and (iv) we do have at least a partial causal answer as to how we have knowledge of abstract objects. The upshot of this discussion is that any causal theory of knowledge robust enough to respond to well-directed criticisms is broad enough to apply to knowledge of types as well as their exemplified tokens. For this reason the plausibility of a causal theory of knowledge, contrary to what is often assumed, does not undermine the plausibility of the existence of types. But first let us return to the data of chapter 1. 1
Types Exist
Why think types exist? As I said, because it is a prima facie immediate consequence of the data exhibited in chapter 1. If 8.9 percent of the 20,481 species examined on Eglin Air Force base are imperiled, then there exist some imperiled species. If the Tarahumara frog has disappeared from the United States, then there is a species that has disappeared from the United States. If there are only four or five genes involved in the trait that prompts novelty-seeking, then there are genes—and “the four or five genes” can’t be tokens, as there are many more tokens, so the four or five must be gene types. If there are just five receptors that play a role in the brain’s response to dopamine, of which the D4 dopamine receptor is one, then there exist receptors, and for the same reason, the receptors being referred to can’t be tokens. They must be types, one of them being the D4 dopamine receptor. If in fast colloquial speech the English word ‘seven’ might be pronounced something like [sem], then there is an English word type, ‘seven’. If most accents of English contain about forty vowel phonemes, then there are vowel phoneme types. If Faraday discovered the electron and Rutherford shot electrically charged particles through thin foils to study the atom, then there is something—some type of thing—that Faraday discovered and another that Rutherford studied. And so on for the Queen’s Gambit, the E flat Quartet K. 428, and the three forces acting on the football. If the data are true (as they stand and not paraphrased), then there are certain generic entities such as species, receptors, genes, words, quartets, chess openings, and so forth; they exist. But, it may be asked, why think that because certain sentences are true, certain objects exist? The answer is: the sentences say or imply the objects exist, and the sentences are true, so (absent overriding objections) the objects exist.1 I think this intuition is so powerful that it should carry the
day, but it is worth articulating the criteria it is based upon—criteria that articulate how it is that such sentences “say or imply” certain objects exist. The two on which I will rely were formulated by W. V. Quine and Gottlob Frege. These criteria are to be broadly construed. Quine’s is a criterion for existence generally and hence is known as a “criterion of ontological commitment.” Frege’s is not a general criterion of ontological commitment; rather it is better described as a criterion for deciding, among the many existent things, which of them are objects. We might call it a “criterion of object commitment.” Of course, if something counts as an object, then it exists, as we say in the vernacular. But unlike Quine, Frege also thought that there were nonobjects, properties and other (as he called them) “functions.” So, for example, he thought the natural numbers were objects, but even though the addition function that operates on them was a function, not an object, it too is part of reality. For Frege, functions are “incomplete” entities in a way that objects are not; they need to hook up with objects to produce something complete. We will not be discussing Frege’s functions in this book.2 Quine’s Criterion Quine’s criterion is stated (in two forms) most famously in Quine 1961a, p. 13, as: To be assumed as an entity is . . . to be reckoned as the value of a variable. In terms of the categories of traditional grammar, this amounts roughly to saying that to be is to be in the range of reference of a pronoun. Pronouns are the basic media of reference. . . . The variables of quantification, ‘something’, ‘nothing’, ‘everything’, range over our whole ontology, whatever it may be; and we are convicted of a particular ontological presupposition if, and only if, the alleged presuppositum has to be reckoned among the entities over which our variables range in order to render one of our affirmations true.
Quine’s example is that ‘Some dogs are white’ says that some things that are dogs are white; and, in order that this statement be true, the things over which the bound variable ‘something’ ranges must include some white dogs, but need not include doghood or whiteness. (p. 13)
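To make the criterion vivid, Quine’s example can be put into the canonical notation he has in mind (the regimentation is mine, not a quotation):

$$\exists x\,(\mathrm{Dog}(x) \wedge \mathrm{White}(x))$$

For this to be true, the domain over which the bound variable ranges must contain at least one white dog; it need not contain doghood or whiteness. The question raised by the data of chapter 1 is what the corresponding domains must contain when the quantified statements are about species, genes, phonemes, or chess openings.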
He gives a slightly longer formulation of his criterion that involves theories (which for him are sets of sentences) in Quine 1961b, p. 103: “entities of a given sort are assumed by a theory if and only if some of them must be counted among the values of the variables in order that the statements affirmed in the theory be true.”3 So, for example, if subatomic physics says
that “some quarks have charm” then subatomic physics is committed to the existence of quarks. Let us apply Quine’s criterion of ontological commitment to the data of chapter 1. If we accept those sentences as true, then we must acknowledge the existence of generic entities such as species, genes, epigenotypes, languages, syllables, vowels, allophones, quartets, chess openings, atoms, and so on, all of which were apparently copiously quantified over. If it is true, as was claimed in chapter 1, that “most accents of English contain about forty vowel phonemes” then there are accents of English and there are vowel phonemes. If there are three forces acting on the football, then there are forces. As for species, Quine’s (1961a, p. 13) remark is a propos: when we say that some zoölogical species are cross-fertile we are committing ourselves to recognizing as entities the several species themselves, abstract though they are.
Quine’s criterion, then, supports the existence of the sorts of entities in question (species, for example) on the grounds that we quantify over them using such English words as ‘every’, ‘some’, and ‘all’. Frege’s Criterion Frege’s criterion, on the other hand, supports there being individual objects answering to certain singular terms (for example, ‘the Tarahumara frog’ in chapter 1). (We could make do with Quine’s criterion alone, were it not for his attitude toward singular terms, which he chooses to “analyze away.”. Frege takes the referential function of singular terms more seriously.) A singular term is either a proper name—Frege’s examples are ‘Jupiter’ and ‘Frederick the Great’—or a definite description. A definite description is a phrase of the form ‘the such and such’. According to Frege (1977a), p. 45, “the singular definite article always indicates an object.”4 He gives as examples of phrases containing the singular definite article (and hence referring to objects) ‘the chemical element gold’ and ‘the North Pole’. (Strictly speaking, it is not expressions standing alone but uses of them in a context that qualify as singular terms.) Armed with the notion of a singular term (formal criteria for which are in endnote 6) we can give a formulation of Frege’s criterion of object commitment. This version is due to Bob Hale: if a range of expressions function as singular terms in true statements, then there are objects denoted by expressions belonging to that range.5
For example, the names of heavenly bodies (‘Jupiter’, ‘the Milky Way’) form a range of expressions that function as singular terms in true statements of astronomy, so there are astronomical objects denoted by them. The names of individual species form a large range of expressions that function as singular terms in true statements of zoology, so there are objects—species—denoted by them. Frege would also say that there are numbers, because numerals and other numerical expressions function as singular terms in true mathematical statements. (And I agree with him on this, but since I have gone into it in 1989b I will not go into it here.) The point of requiring that there be a “range of expressions” is to avoid commitment to the existence of an object answering to every last stray apparent singular term (such as ‘God’s sake’ in ‘Close the door, for God’s sake’), while remaining committed to the existence of objects answering to kinds of terms, as for example those occurring in well-established theories such as zoology, nuclear physics, or number theory. Exact criteria for what it is to “function as a singular term in true statements” are difficult to formulate. However, Dummett (1981, chap. 4) offered a formal set of criteria for singular terms, which were later amended by Wright (1983, pp. 53–64) and Bob Hale (1987, chap. 2), and although I criticized their formulations in Wetzel 1990, such criteria represent a very good first-level approximation of such criteria. Since they are rather convoluted, an adequate discussion of them would take us onto a long tangent. I enclose them in an endnote for the curious reader.6 But even if formal syntactic criteria that comprise clear cut necessary and sufficient conditions are impossible to come by, and even if there are some difficult cases, we may say, on Frege and Dummett’s behalf, that it is clear that there is a real distinction between terms that function as singular terms and terms that do not—in, say, English. The terms in question from chapter 1—‘the Tarahumara frog’, ‘the D4 dopamine receptor’, ‘the English word “seven” ’, ‘the electron’, ‘the Queen’s Gambit’, ‘the E flat Quartet K. 428’, and so on are clear examples of terms that function as singular terms in the true statements in which they appear, and are recognizable examples from ranges of expressions (species terms, cell terms, word terms, chess opening terms, musical work terms). Based on a straightforward application of Frege’s criterion, if the data statements in chapter 1 are true, then these singular terms refer to objects. The broad point is that Frege’s and Quine’s criteria, when applied straightforwardly to the data presented in chapter 1, dovetail in yielding the conclusion that types are objects; they exist.7
2 Objections
Three objections to our argument will be discussed. The most important objection is based on a remark of Quine’s. Right after Quine (1961a, p. 13) says when we say that some zoölogical species are cross-fertile we are committing ourselves to recognizing as entities the several species themselves, abstract though they are
he adds: We remain so committed at least until we devise some way of so paraphrasing the statement as to show that the seeming reference to species on the part of our bound variable was an avoidable manner of speaking. (p. 13)
In the same vein, he says in Quine 1961b (pp. 103–104): a man frees himself from ontological commitments of his discourse . . . [when] he shows how some particular use which he makes of quantification, involving a prima facie commitment to certain objects, can be expanded into an idiom innocent of such commitments. . . . In this event, the seemingly presupposed objects may justly be said to have been explained away as convenient fictions, manners of speaking.
The main objection then (whether or not Quine would raise it) is that each statement in the data that apparently refers to or quantifies over species and other types is an “avoidable manner of speaking,” a façon de parler for some other claim, a claim that does not seem to refer to or quantify over species and other types. (This is the sort of objection apt to be raised by someone who thinks types don’t exist.) Several questions need to be answered if this objection is to be adequately motivated. First, what is objectionable about referring to or quantifying over species and other types so that the paraphrases should be considered more respectable? (The mere fact that a statement might be paraphrased by a second statement does not mean we should reject the ontological commitments of the first statement, unless there is some reason to believe those commitments suspect, or more suspect, than those of the second statement.) Why should we want to “paraphrase them away”? Second, what is the appropriate paraphrase and how do we know when we have provided one? Third, if there is something objectionable about apparently referring to types, why do we constantly do it; what do we gain by doing it? Section 3 of this chapter will be devoted to the first question (viz., what is supposed to be objectionable about referring to or quantifying over
species and other types). Most of the next two chapters, chapters 3 and 4, will address the second question (of whether and how it is possible to paraphrase them). And chapters 5 and 6 will address the last question (concerning what is advantageous about type talk). Basically, then, dismantling the main objection will be done one question at a time throughout most of this book. But before we get to section 3, and at the risk of digressing, there are two other objections to my argument based on the data that should be noted. One stems from the fact that in order to draw the conclusion that some types exist, one must first assume that the data are true. These are substantial scientific claims; how do we know they are true? Of course scientific claims, especially ones appearing in the newspaper, ought to be considered revisable. And of course any particular claim from chapter 1 may be challenged on scientific grounds: perhaps other studies would show that of the 20,481 species examined on Eglin Air Force Base, 13 percent are imperiled rather than 8.9 percent, for example. But such considerations are philosophically irrelevant to the current project, because it doesn’t matter to the question of whether species exist whether 8.9 percent are imperiled, or 13 percent, or even 100 percent (perhaps a huge meteor is about to hit the earth). As long as any sort of claim along these lines is true, there are species. Therefore, a philosophically interesting denial that the data from chapter 1 are true is either a claim that (i) the data are not, strictly speaking, true, but are just a manner of speaking, a façon de parler, for something that is strictly speaking true; or (ii) the theories of biology, linguistics, physics, and nearly all disciplines are fictions.8 Option (i) is basically just the main objection, to be considered below. (There is little point in distinguishing between the position that type talk is true when it is suitably interpreted in terms of some paraphrase—e.g., about tokens—and the position that such talk is false although the result of paraphrasing it is true, especially if there is a serious issue as to whether it can be adequately paraphrased.) Option (ii) so violates the premises of the present context, which attempts to do justice to the scientific claims made in biology and physics rather than dismiss them, that it will not be discussed. (And anyway to do so would take us very far afield.) Even so radical a fictionalist as Hartry Field, who maintains that mathematics is fiction (on the grounds that mathematical statements ostensibly refer to and quantify over abstract objects, and there are no abstract objects), does not maintain that physics is fictional. And, as we saw in chapter 1, physics itself is not shy about ostensibly referring to and quantifying over types.
The second objection that we will note without much discussion is that there is no need to paraphrase type talk: one is not, in the end, committed to the existence of abstract objects even though, in the normal course of doing science, one appears to be referring to them. These entities, it might be said, are mere fictions, like Sherlock Holmes. We have, of course, Quine’s insistence to the contrary (as we saw in the previous quote). But interestingly enough, we can borrow from Field in giving a response to this fictionalist view. In the following quote, Field (1980, p. 2) is responding to those who advocate fictionalism about mathematics but continue to use the theory without proposing alternate formulations of it: If one just advocates fictionalism about a portion of mathematics, without showing how that part of mathematics is dispensable in applications, then one is engaging in intellectual doublethink; one is merely taking back in one’s philosophical moments what one asserts in doing science, without proposing an alternative formulation of science that accords with one’s philosophy.
The same goes for fictionalism about other abstract entities.9
3 The Alleged Trouble with Abstract Objects
Let us turn to the main objection to my argument based on the data, namely, that there is no need to countenance types, because each statement in the data that apparently refers to or quantifies over species or other types is an “avoidable manner of speaking,” a façon de parler for some other claim, a claim that does not appear to refer to or quantify over species or other types. The first order of business is to ask: why should we want to “paraphrase them away”? Not all paraphrases are more acceptable than what is paraphrased. Consider a Cliff Notes version of Hamlet. What motivates the desire to replace the data with paraphrases not referring to types? The answer, in a nutshell, is that they are abstract objects, and, as Carnap (1956, p. 205) bluntly remarked, Empiricists are in general rather suspicious with respect to any kind of abstract entities like properties, classes, relations, numbers, propositions, etc. . . . As far as possible they try to avoid any reference to abstract entities and to restrict themselves to what is sometimes called a nominalistic language, i.e., one not containing such references.
He does not elaborate on why empiricists are suspicious of abstract entities. Many empiricists seem to believe that the expression “abstract object” is an oxymoron. Perhaps some such underlying prejudice is the reason why in many discussions, the motivation for nominalism is well-nigh taken for
granted. Goodman and Quine (1947) commence their famous nominalist manifesto “Steps Toward a Constructive Nominalism” with the declaration “We do not believe in abstract entities.” A page later they ask Why do we refuse to admit the abstract objects that mathematics needs? Fundamentally, this refusal is based on a philosophical intuition that cannot be justified by appeal to anything more ultimate. (p. 174)
They do, however, add that their intuition is “fortified” by the fact that “the most natural principle for abstracting classes or properties leads to paradoxes”—in effect, that if every predicate delimits a class we get Russell’s paradox—and that alternative ways of arriving at classes are “artificial and arbitrary.” Perhaps it seemed so in 1947. But I doubt that many people weaned on ZF set theory feel it is “artificial and arbitrary.” On the contrary, George Boolos (1971) has shown that there is a natural nonpredicative notion of set underlying ZF set theory, “the iterative conception of set.” Similarly, Field (1980, p. 1) begins his nominalist manifesto Science without Numbers without any explanation to motivate nominalism: Nominalism is the doctrine that there are no abstract entities. The term ‘abstract entity’ may not be entirely clear, but one thing that does seem clear is that such alleged entities as numbers, functions, and sets are abstract—that is, they would be abstract if they existed. In defending nominalism therefore I am denying that numbers, functions, and sets exist.
It is only later in Field (1989a, p. 68) that he characterizes what he sees as the trouble with platonism: the truth values of our mathematical assertions depend on facts involving platonic entities that reside in a realm outside of space-time. There are no causal connections between the entities in the platonic realm and ourselves; how then can we have any knowledge of what is going on in that realm? And perhaps more fundamentally, what could make a particular word like ‘two’, or a particular belief state of our brains, stand for or be about a particular one of the absolute infinity of objects in that realm?
Although Field’s talk of abstract objects’ “residing” in a “realm” that is “outside of space-time” is hyperbolic since all that realism demands is that abstract objects lack a unique spatiotemporal location, not that they are “elsewhere,” his charge is strikingly reminiscent of that made famously and forcefully by Benacerraf (1983, p. 414): If . . . numbers are the kinds of entities they are normally taken to be, then the connection between the truth conditions for the statements of number theory and any
relevant events connected with the people who are supposed to have mathematical knowledge cannot be made out.
Thus there is supposed to be an epistemological problem with numbers and other mathematical entities owing simply to their being abstract objects. So the same epistemological problem should be thought to arise for other abstract objects such as species and elements and so forth (assuming they are abstract objects). It might be expected to be exacerbated by the fact that biology, physics, linguistics, and so on are supposed to be empirical sciences, concerned with the world and not some platonic realm of abstract objects. That is (or so the objection goes), whatever mathematics is about, the rest of science is not even partly about abstract objects. Benacerraf, then, has hit on the heart of the matter and Field clearly shares his views about it. I will first examine Benacerraf’s and then Field’s proffered reasons for thinking there is an epistemological problem about abstract objects. Along the way, I will consider and reject Hart’s suggestion that naturalized epistemology favors a causal requirement on knowledge. Interestingly, in their respective opera classica on these matters (“Mathematical Truth” and Science without Numbers, respectively), Benacerraf and Field renounce numbers in favor of expressions, without apology or explanation as to why the latter are not abstract, if there are infinitely many of them. But I assume that their reasons should apply equally to all abstract objects, since it is in virtue of their abstractness that the epistemological problem is supposed to arise, as will be clear below. Benacerraf and Causal Requirement no. 1 Benacerraf (1983, p. 412) is quite explicit that he is relying on “a causal account of knowledge on which for X to know that S is true requires some causal relation to obtain between X and the referents of the names, predicates and quantifiers of S.” So, for example, “in the normal case, that the black object [Hermione] is holding is a truffle must figure in a suitable way in a causal explanation of her belief that the black object she is holding is a truffle” (p. 412). More generally, he claims that for X to know that P “the connection between what must be the case if P is true and the causes of X’s belief can vary widely. But there is always some connection and the connection relates the grounds of X’s belief to the subject matter of P” (p. 414). He declines to spell out what the connection of the “suitable way” consists in, but mentions approvingly several accounts of knowledge that do try to spell it out, among them Goldman 1967. Although Goldman’s account has long been disproved, it has as much initial appeal
as Benacerraf’s vaguely worded causal requirement. It will be instructive, then, to see, briefly, how it is refuted even at the risk of a historical digression.10 Goldman (1967, pp. 369–370) claims that X knows that P iff the fact that P is connected with X’s believing P by a causal process such as perception, memory, a causal chain from the fact that P to X’s belief that P (or from some common cause), the important links of which X correctly reconstructs by inferences (each of which is warranted), or combinations of such processes.11
Call this causal requirement no. 1. That such an account is not sufficient for knowledge—even where the knowledge involved is restricted to perceptual knowledge of facts about a person’s environment (like Hermione with her truffle)—was pointed out by Goldman (1976) himself.12 The more important question for our purposes is whether the causal requirement postulated is necessary. As phrased by Goldman, it is not. X may know that P although X cannot correctly reconstruct the chain that leads to or from the fact that P. This is a very strong requirement. Most of us could not begin to reconstruct the chain that led from the fact, say, that the universe is expanding, or is fourteen billion years old, to our believing it. We come to believe it in our sketchy way because we understand that astrophysicists believe it, and we think we have reason to trust them. Moreover, our attempts at reconstruction might even be false, and we still would be justified. Suppose that X hears the Chair of the Federal Reserve Board publicly predict that the Dow Jones will go down. X reasons that the Chair has solid economic data for the prediction. But suppose that the Chair does not, but the public prediction itself causes the downturn. Still, the fact that the Chair predicts it may be sufficient justification for X to know it. Also, we could not reconstruct the chain for most of what we believe on the basis of having been taught it in school. If we drop the “correct reconstruction” requirement, and just consider whether the fact that P must cause X’s belief that P (or that they must have a common cause), counterexamples still occur. Skyrms (1967) asks us to suppose that X happens upon a decapitated man, Z, lying in the gutter. From the severed head, X infers that Z is dead and that therefore Z died. Clearly X knows Z died. But suppose events transpired as follows. Z fell down drunk in the gutter, then died of a heart attack, and only later had his head cut off by a fiend who chanced upon the scene. Z’s dying is not causally connected with X’s belief that Z died, because the decapitation did not cause Z to die; nor did Z’s dying cause the decapitation (Skyrms
asks us to believe it would have taken place whether or not Z had the heart attack and died).13 Thus it is not necessary for one to know that P that the fact that P be either a causal ancestor to the belief that P, or a causal descendant from another fact that is a causal ancestor of the belief. But the real clincher is that any theory with such a causal requirement will leave unexplained our knowledge of general empirical truths such as ‘copper conducts electricity’. Hale (1987, pp. 92–101) argues persuasively that even if there are such general physical facts (i.e., they are not abstract objects as Steiner [1975] contends they would have to be), and even if such facts can be explained and appealed to in explanations, they are no better at serving the role of causes or effects than abstract objects are. Here’s why. Suppose we do think of a general fact as a physical complex—as an infinitely conjunctive fact, involving all the individual facts consisting of episodes of bits of copper conducting electricity. Some of these individual facts will be in the past, and some of them will be in the future. It is not reasonable to regard the future ones as being in any way causally efficacious toward bringing about a past event, and hence it is not reasonable to regard the whole general fact that copper conducts electricity as being a cause of a particular past event. Hale’s (1987, p. 94) thought experiment to support this is to ask us to consider a particular event, such as his bathroom light’s going on on May 15, 1985, at 2 AM. Suppose (improbably) that copper no longer conducts electricity after 2100 CE. Then there is no fact that copper conducts electricity. At best we can say that it was a fact up until 2100 CE that copper conducts electricity. It would be natural to point out, in response to this objection, that we are at least in causal contact with “part” (if that is the right term) of the infinite conjunctive fact that copper conducts electricity (or else with the fact that copper conducts electricity until 2100). Perhaps we have seen that pieces of copper a, b, and c conduct electricity, for example. With that in mind, Goldman (1967, p. 368) has offered a principle, one that is customdesigned to allow him to bring knowledge arrived at by inference (and hence of general empirical truths) under the causal umbrella. The principle in question is “if x is logically related to y, and if y is a cause of z then x is a cause of z.” (By ‘logically related’ Goldman seems to mean: entails or is entailed by.) If Smith sees pieces of copper a, b, and c conduct electricity and concludes that copper conducts electricity, then the principle is supposed to secure the requisite causal connection of the general fact that copper conducts electricity with Smith’s belief that it does. If the fact that Fa causes Smith to believe that Fa and infer that ∃xFx, then since Fa entails ∃xFx, it will follow by the principle that the fact that ∃xFx causes Smith’s
belief in it. But the principle involved is just false. Hale (1987, p. 95) offers the following counterexample to it. Suppose (i) George was drunk; (ii) everyone present was drunk, and Bill and George were both present; and (iii) Bill was drunk. Suppose also that the fact that (iii) caused the fact that (iv) Bill was sick. Now (i) is logically related to (ii), which is logically related to (iii), the fact of which causes the fact that (iv). So by two applications of Goldman’s principle, (i) is a cause of (iv). But clearly George’s being drunk did not cause Bill to be sick. Logical relatedness is not transitive. It is not surprising that a strong causal requirement that makes it a mystery how we can have knowledge of mathematical truths also makes it a mystery how we can have knowledge of general (empirical) truths such as ‘copper conducts electricity’. That is because it cannot account for inferences very well. There is little point in trying to patch up Goldman’s or a similar theory for today’s audience. Goldman (1986) himself drops any such causal requirement in favor of requirements involving reliable processes. As Castañeda (1980, p. 221) remarked: “it is widely taken for granted that for a person to know that p there need be neither a common cause of the person’s believing that p and of that p nor a causation path from that p to the person’s believing that p.” Causal Requirement no. 2 Let us leave then, cruder formulations of a causal requirement and focus instead on a much more plausible one that better accommodates inferences to general propositions. Here is an excellent formulation of causal requirement no. 2 discussed by Hale (1987, p. 100, my italics): any truth which can be known at all either concerns a state of affairs which may stand in a suitable causal relation to a knowing subject, or may be correctly inferred from propositions which can be so known, i.e., concern states of affairs to which knowers may bear suitable causal relations.
As Hale points out, the difficulties for this much more lenient requirement involve a priori knowledge and truths that we know to be necessary. There are infinitely many truths that we know to be necessary—truths of logic, for example—and a good case can be made that some of them are knowable a priori (given, of course, a grasp of the concepts involved). Consider truths of logic. Can they, or can they not, be correctly inferred from propositions that concern states of affairs that may stand in a suitable causal relation to a knowing subject? Hale seems to think they cannot. If they cannot, then since the requirement rules knowledge of such truths out of hand, the requirement is unacceptable; we have knowledge of some truths
of logic. However, one might argue that they can, on the grounds that they can be correctly inferred from the null set of premises. So it follows that they can be correctly inferred from any propositions whatever, including those that concern states of affairs to which knowers may bear suitable causal relations. But if they can, then there is no bar to our obtaining (at least some) knowledge of mathematical truths also, with the concomitant commitment to abstract objects. Given the a priori conceptual nature of mathematical truths, this cannot be dismissed out of hand. (Not to lay too much stress on it, but here is one possible albeit controversial way that knowledge of arithmetic might come about. Wright [1983] has made an excellent case that “Hume’s principle” (the number of Fs = the number of Gs = df there is a one–one correlation of the Fs with the Gs) is simply an elucidation of the concept of number. And in work begun by him and Burgess [Burgess 1984], and completed by Boolos [1987], it has been shown that all of number theory follows from Hume’s principle (in certain second-order logics). Although Wright’s line of thought is not beyond criticism,14 it certainly constitutes a prima facie case of mathematical truths emerging from a mere definition by logic alone.) The upshot is that causal requirement no. 2 is either too strong, ruling out all a priori knowledge, or is compatible with platonism and abstract objects. To summarize so far: our intuitions supporting a causal requirement are at their strongest when it comes to perceptual knowledge. However, they tend to wane the further we get from perceptual cases and into cases involving inference, and they evaporate altogether in cases of a priori knowledge (e.g., knowledge that everything either is or isn’t energy). The corollary is that causal theories have the least difficulty with cases of perception, but leave knowledge of general truths, especially a priori ones, unexplained. Hale 1987 (p. 123) puts it well when he says: there are independent grounds to think that any causal conception of knowledge strong enough to embarrass the platonist will be too strong, and that the modifications needed to accommodate empirical knowledge of contingent general truths will leave no epistemological axe with which a decisive blow against platonism may be struck. And, in any case, a strong causal theory is likely to have trouble making room for any kind of knowledge a priori, whether the truths we know are platonistically construed or not.
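For reference, the Hume’s principle invoked in the parenthetical above is standardly written in second-order notation as follows (the symbolization is mine, not Wright’s):

$$Nx{:}Fx = Nx{:}Gx \;\leftrightarrow\; \exists R\,\bigl[\forall x\,(Fx \rightarrow \exists! y\,(Gy \wedge Rxy)) \wedge \forall y\,(Gy \rightarrow \exists! x\,(Fx \wedge Rxy))\bigr],$$

where ‘Nx:Fx’ abbreviates ‘the number of Fs’ and the right-hand side says that R correlates the Fs one–one with the Gs. The result credited above to Wright, Burgess, and Boolos is that the axioms of arithmetic are derivable from this one principle in second-order logic.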
Causal Requirement no. 3
It might be thought that some kind of causal account of knowledge/reference must be right, however hard it may be to formulate (and even if it makes trouble for a priori knowledge). Hart (1977, pp. 125ff.) seems to
think so. In his review of Steiner’s Mathematical Knowledge he indignantly claims that “it is a crime against the intellect to try to mask the problem of naturalizing the epistemology of mathematics with philosophical razzledazzle. Superficial worries about the intellectual hygiene of causal theories of knowledge are irrelevant. . . .” Suppose we concede that some kind of causal requirement on knowledge is required. The question then is whether such a requirement makes trouble for knowledge of abstract objects. So far my efforts have been to try to preserve mathematical knowledge and mathematical entities in the face of Benacerraf’s epistemological problem. I have been writing as though all abstract objects are in the same boat epistemically—that is, equally susceptible to Benacerraf’s epistemological problem. But it appears that is not something that even the principal protagonists themselves would agree to! They seem to have no problem with certain sorts of abstract objects, namely, linguistic ones. That is, the parties most inclined to push this objection with respect to mathematical objects—Benacerraf, Field, Hodes— seem not to be inclined to push it with respect to linguistic objects. On the contrary, Benacerraf (1965) explicitly favors doing without numbers but letting number expressions (e.g. ‘thirteen’) stand in for numbers. Obviously, then, he is allowing that there are number expressions. These cannot just be tokens for there are not nearly enough tokens to stand in for the numbers. Similarly, Field (1980, p. 1) says “Nominalism is the doctrine that there are no abstract entities. . . . In defending nominalism therefore, I am denying that numbers, functions, sets or any similar entities exist.” Among “similar entities” he does not classify linguistic expressions; apparently he regards expressions as less abstract than numbers. This is plausible if by ‘expressions’ he means actual physical expressions (i.e., tokens). Yet Field (1980) gives no indication of trying to make do with the finite (and in fact very small) number of actual physical expressions there are, as Goodman and Quine (1947) do. Like Benacerraf, he helps himself to an unlimited number of expressions; his theory “contains, besides the usual quantifiers ‘∀’ and ‘∃’, also quantifiers like ‘∃87’ (meaning ‘there are exactly 87’)” (p. 21). Thus Field (1980) is relying on there being expressions other than those of the actual physical variety—and again, infinitely (or at least indefinitely) many of them.15 So too is Hodes (1984), whose view, although quite different from Field’s, is similar in that it rejects numbers but requires an unlimited quantity of expressions. He says, for example, “[i]n making what appears to be a statement about numbers one is really making a statement primarily about cardinality object-quantifiers” (p. 143).
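It is worth spelling out how quantifiers like Field’s ‘∃87’ work, since doing so makes plain why he needs so many expressions. The numerically definite quantifiers are definable step by step from ‘∃’, identity, and the truth-functions (a textbook construction, not Field’s own formulation):

$$\exists_{\geq 1}x\,Fx \;=_{df}\; \exists x\,Fx, \qquad \exists_{\geq n+1}x\,Fx \;=_{df}\; \exists y\,\bigl(Fy \wedge \exists_{\geq n}x\,(Fx \wedge x \neq y)\bigr),$$
$$\exists_{n}x\,Fx \;=_{df}\; \exists_{\geq n}x\,Fx \wedge \neg\exists_{\geq n+1}x\,Fx.$$

So ‘∃87x Fx’—‘there are exactly 87 Fs’—abbreviates a very long formula containing no numerals at all; the price of the abbreviation is an unbounded family of quantifier expressions, and, as just argued, those cannot all be actual physical tokens.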
Why would someone who objects to mathematical objects on the basis of their abstract nature have no problem with linguistic objects—objects that are also abstract? Why think that the causal requirement (whatever it is) is met in the case of linguistic types but not in the case of mathematical objects? What is it about linguistic types that enables us to stand in the appropriate causal relation to them even though they are abstract? When I put the question to Harold Hodes (in conversation), his answer was that types are OK, because types have tokens and we interact with these tokens. He did not elaborate, but I think the idea is this. How do we find out about the properties of an individual word—say, how it is spelled or pronounced? Often enough, we are presented with a visual or auditory token. We then infer that the type has the same properties—spelling or pronunciation—as the token. We may occasionally be wrong, but on the whole this is a reliable form of inference. Bromberger (1992a, p. 176) calls the principle that grounds this inference the “Platonic Relationship Principle,” explaining that “linguists, in practice, often impute properties to types after observing and judging some of their tokens, and seem to do this in a principled way.” To the question: but how do we know the Platonic Relationship Principle is true—that is, how do we know that types do indeed share properties with their tokens if we have no causal contact with types themselves—the answer is: because that’s their job. Types are that which the tokens token; it’s in virtue of them that mere scribbles of ink and so on qualify as tokens. More discussion of this matter follows in chapter 6, after we have done the preliminary work of seeing what a word is in chapter 3 (section 2). But certain caveats are in order at this point. First, as we will see in chapters 3 and 5, inferences from tokens to types are not justified (as our examples might suggest) on the ground that tokens share all the same properties— are all carbon copies of the type. This is evident from our example of written and spoken tokens; clearly they have different properties. Even the spoken tokens of a given word will exhibit a range of pronunciations due to accents. (Consider the word ‘know’: a Cockney ‘know’ is like the King’s English ‘now’; King’s English ‘know’ is like Scottish ‘now’; and a Yorkshire ‘know’ is like King’s English ‘gnaw’ [Fudge 1990, p. 39].) Second, there has to be much more to the story than “we know about a type on the basis of interaction with its tokens,” because there are uninstantiated types—some very long sentences, for example—about whose properties we can talk perfectly well. (For example, we can say of a sentence that it is a conjunction of the two longest instantiated English sentences.) Third, the Platonic Relationship Principle not only licenses “bottom-up” inferences from tokens to
types; it also licenses “top-down” inferences from types to tokens (e.g., that this token of my last name ‘Wetzell’ has one too many ‘l’s.) As I said, these matters will be explored in greater detail in chapters 3 and 5. To return to the causal requirement: assume that we are barking up the right tree—that interaction with tokens is a significant part of what grounds our knowledge of types and that this interaction is sufficient to meet the causal requirement on knowledge of such types. Benacerraf (1965, 1983), Hodes (1984), and Field (1980) should agree. Then the main hurdle to knowledge of types (a causal requirement) has been surmounted! After all, their argument was with mathematical, not linguistic, objects. I could just stop right there, since this is not a tract in philosophy of mathematics, but I hope the reader will permit a short defense of (at least some) mathematical objects. It will shed light on matters that arise in our discussion below of Field. We can ask ourselves: what causal requirement is it that permits knowledge of linguistic types? It could be causal requirement no. 2, which we already discussed. Or it might be the following one, proposed by Burgess (1990, p. 3), who claimed that it is just what Benacerraf needs, it being “neither too strong nor too weak for purposes of his argument”: even if true, belief in an assertion or theory implying or presupposing that there are objects of some particular sort cannot be knowledge unless some objects of that sort act directly or indirectly on us.
Call this causal requirement no. 3. Suppose that Burgess is right that Benacerraf would endorse it and perhaps Hodes and Field (Field 1980) as well, and suppose also that it permits knowledge of linguistic types. Let us leave aside the question of whether it is true and whether it faces the same difficulties with a priori knowledge as causal requirement no. 2 does, and just focus on how it might permit knowledge of abstract linguistic types. Presumably it does so, if it does, because the key clause is met, namely, that linguistic objects “act directly or indirectly on us.” And presumably this is because tokens act directly on us, and types indirectly, through the tokens. If this is the case, then things do not fare badly for numbers with a causal requirement like no. 3 either. I have defended this at length in Wetzel 1989a so will not do so here. But the idea is straightforward. It is that just as tokens act directly on us so too do numbers of things—what I call “pluralities.” For example, we see a pair of turtle doves and a trio of French hens, hear a quartet of calling birds, feel a quintet of golden rings, smell a sextet of geese a-laying, and so on. Elementary school math is predicated on the assumption that numbers of concrete things—concrete
pluralities—offer a window on the properties of numbers. Of course, there is much more to the story of how we come to know all the infinitely many truths of arithmetic than by interaction with concrete pluralities. (I am not maintaining that arithmetic is an empirically falsifiable theory.) My point is that if linguistic types pass causal requirement no. 3, then so should numbers (even though concrete pluralities are not tokens of numbers, but, rather, epistemic analogues of tokens). Perhaps some readers are unconvinced that numbers of things are epistemically analogous to tokens of types. I will rest content if the main point is found acceptable, namely, that we agree with Benacerraf (1965, 1983), Hodes (1984), and Field (1980) in thinking, as they apparently must, that linguistic types pass the requisite causal requirement on knowledge that they are endorsing. So too should the other types we encountered in chapter 1: species, genes, epigenotypes, languages, computers like the Altair 8800, Mozart’s Coronation Concerto, the Queen’s gambit, the hydrogen atom, and the football, for they also have tokens and share many properties with their tokens. There remains one other alternative to consider: that Benacerraf (1965, 1983), Hodes (1984), and Field (1980) should not have been so quick to permit abstract linguistic objects and that they are wrong if they think that causal requirement no. 3 is met by abstract linguistic objects. (The later Field [Field 1989] would not permit abstract linguistic objects.) That is, according to this alternative, causal requirement no. 3 rules out the possibility of knowledge of any abstract objects, so that linguistic objects and other types are in the same trouble as mathematical objects. This is quite a strong epistemological claim, one that we’ve already seen reasons for thinking false. What would justify it? One answer that has been proposed is that it is supported by considerations derived from naturalized epistemology. This is the subject of the next section.
Is There Help to Be Had from “Naturalized Epistemology”?
As we have seen, Hart (1977, pp. 125ff.) thought that some kind of causal account of knowledge/reference that makes trouble for platonism must be right, however hard it may be to formulate.16 Hart seems to think that some support for a causal requirement is to be found in considerations of “naturalized epistemology,” a view advocated in Quine 1969a. Nowadays this is taken to be the view that epistemology is a branch of natural science, no more and no less. Unlike classical foundationalist efforts to provide a certain base for human knowledge, it allows appeal to science, broadly
construed, but to little else. Does naturalized epistemology really support a causal requirement that makes trouble for platonism? First, it should be noted that if it does, and if in consequence (we are to think that) there are no abstract objects, then inconsistency threatens. If we practice “naturalized epistemology,” then we have to look to natural science. When we do, we see that its theories clearly seem to be committed to abstract objects, as we saw in chapter 1. They are awash in apparent references to abstract objects. The activity of doing linguistics, for example, would be impossible were we only to quantify over tokens. As we shall see in chapter 5, sentence formation rules currently thought to be true would all be false. For example, such rules assure us that ‘The clam the clam split split’ is a sentence, as is any number of ‘the clam’s followed by the same number of ‘split’s. But there are few tokens of these sentences. And if we couple the claims in chapter 1—for example, Mayr’s claim that “among the 25 species of Carabus beetles from central Europe, 80 percent are polytypic”—with the claim that “there are no species of Carabus beetles” then we have inconsistency. To suppose otherwise is to bank on there being a successful nominalistic paraphrase. As we shall see after chapters 3 through 5, there is little hope for that. Second, far from supporting a causal requirement, naturalized epistemology undermines it even for mathematics, as Burgess (1990) has convincingly argued. He uses what I have been calling “causal requirement no. 3,” and let us assume for the sake of argument that it does rule out knowledge of mathematical objects. Here it is again: even if true, belief in an assertion or theory implying or presupposing that there are objects of some particular sort cannot be knowledge unless some objects of that sort act directly or indirectly on us. (p. 3)
His argument that naturalized epistemology undermines this requirement (understood so as to rule out knowledge of mathematical objects) may be roughly summarized as follows. If we are to practice naturalized epistemology, we have to look to science to see if it accepts causal requirement no. 3 with the concomitant barring of knowledge of mathematical objects. Now the problem for the nominalist is that the scientific community accepts as true and as known statements such as that Avogadro’s number is greater than 6.022 × 10²³; references to mathematical objects are ubiquitous throughout science. It is very difficult to reconstruct such statements so as not to appear to refer to numbers. No one has tried to reconstruct them more energetically than Field. Now why should we prefer Field’s
theories, with their reconstructed statements that carry no commitment to numbers, over current theories? We should prefer them only if they are better theories than their rivals, which are our current theories. If we are to do naturalized epistemology, “better” needs to be cashed out not in some a priori fashion but by appealing to features of theories to which science, historically, has given weight. Burgess claims that when we look to the history of science we find that the features of theories that have weight may be divided into logicoempirical features and simplicity features. Logico-empirical features include rigor, consistency (both internal and with other accepted theories), the correctness of a theory’s empirical consequences and agreement with the results of observation and experiment, and the precision and range of empirical consequences. Field’s theories are designed to be equivalent to current theories with respect to these features. So whatever advantages his theories have must be counted under simplicity. Simplicity may be either practical or theoretical. Practical simplicity, Burgess maintains, includes “features that make for economy of effort in communicating, testing, emending, extending and generally operating with a theory” (1990, p. 9); familiarity is an important such feature. Another important one, he says, is “the power and freedom of its mathematical apparatus, the range of theorems it supplies, the ease with which it supplies them” (p. 9). Field’s theories are inferior to current theories as to familiarity, power, or freedom of mathematical apparatus, and other features of practical simplicity, as Field will admit. Thus the supposed superiority of Field’s theories must be based on considerations of theoretical simplicity—economy of assumptions. These break down into logical and ontological assumptions. Compared to current theories, Field’s reconstructions are logically inferior; they involve “extravagance and prodigality in logic” (p. 10), Burgess claims. That is, they involve monadic second-order logic, cardinality-comparisonquantifier logic, the logic of modalities, and the logic of infinitary conjunctions and disjunctions.17 Thus we are left with ontological economy, which may be either (i) of active or physical sorts of objects, or (ii) of inert or mathematical sorts of objects. Field’s theories are not supposed to be any more parsimonious than current theories with respect to (i) (and may even be less, since he counts geometric points and regions as physical). The only virtue, then, of Field’s theories is with respect to (ii), economy of mathematical ontology. However, argues Burgess, there is little evidence that ontological economy of mathematics is a weighty scientific standard. Over the centuries, mathematics has added more and more sorts of objects to its ontology. What
evidence there is favoring ontological parsimony consists of the delay in accepting real numbers in connection with the new “analytic” methods of geometry in the seventeenth century, infinitesimals in connection with the calculus also in the seventeenth century, and transfinite sets in connection with Cantor’s set theory in the late nineteenth and early twentieth centuries. However, all of these have since been accepted, and the delay in accepting them is readily attributable to the lack of rigor and consistency in the theories as they were first proposed. When supplied with the needed rigor and consistency, they were accepted. Burgess explains that while “synthetic” geometry had been inherited from the ancient Greeks as an organized body of beautiful theorems, “analytic” geometry involved the methods of algebra, inherited from the mediaeval Arabs as a disorderly mass of useful techniques. As to the second, the lapses of rigor and consistency of the calculus were notorious and severely criticized, most famously by Berkeley. But as for infinitesimals, though they were rejected when Weierstrass rigorized the calculus, they have received re-acceptance since Robinson provided a rigorous and consistent theory of them in his “non-standard analysis.” As to the third, the lapses of rigor and consistency in Cantorian set theory were even more notorious, and set theory did not receive acceptance until rigorized by Zermelo. (p. 12)
So unless the nominalist can produce overlooked evidence to the effect that economy of mathematical ontology is a weighty scientific standard, and not only that but so weighty as to override the deficits of Field’s theories both as to practical simplicity (like familiarity) and as to economy of logical assumptions, naturalized epistemology undermines rather than supports causal requirement no. 3.
Field
Field is often transmitting on the same wavelength as Benacerraf. Field (1989a, p. 68) complains that “there are no causal connections between the entities in the platonic realm and ourselves; how then can we have any knowledge of what is going on in that realm?” That is, he objects to abstract objects on the grounds that they are “causally inaccessible” (p. 69). This suggests that he endorses some sort of causal requirement on knowledge such as the one suggested by Burgess. Since that was discussed above in connection with Field’s program, we will not pursue the plausibility of it any further. Instead we will investigate somewhat different claims that Field makes. Although he objects to abstract objects on the grounds that they are “causally inaccessible,” he himself needs space-time points and regions, whether or not they are “causally accessible.” So to protect his flank he adds
what raises the really serious epistemological problems is not merely the postulation of causally inaccessible entities; rather, it is the postulation of entities that are causally inaccessible and can’t fall within our field of vision and do not bear any other physical relation to us that could possibly explain how we can have reliable information about them. (p. 69)
The problem he finds with abstract objects, then, is that they (supposedly) have all three failures. That is, not only are they causally inaccessible, but they don’t fall within our field of vision, and they do not “bear any other physical relation to us that could possibly explain how we can have reliable information about them.” Notice, however, that it is the third condition that does all the work. Events in the future, for example, cannot act on us now directly or indirectly unless reverse causality occurs; nor are they even possibly in our field of vision. Yet pace Aristotle, we know about some of them. Also causally inaccessible and outside our field of vision are some events in the past and some current events outside our light cone. (It’s useful to think of each event E as being located in a 3-D space-time at the vertex of a double cone in two space and one time dimension(s)—one cone pointing down, toward events in E’s past, and one cone pointing up, toward events in E’s future. An event is in E’s past if a light pulse from the event can reach E; an event is in E’s future if a light pulse from E can reach it. All events outside E’s light cone can neither affect nor be affected by E.) Presumably Field would not deny that we can know some things about such events. So the third condition does all the work; he would have to say that they “bear a physical relation to us that could explain how we can have reliable information about them.” This requirement is vague; what a “physical relation” is is vague. But it is hard to see how the abstract objects of the sort I am discussing, such as linguistic objects, don’t meet it. Right now I’m scrolling past some word tokens on a screen two feet in front of me. I’m not sure what they are (sequences of pixel irradiations that give the illusion of movement?), but I know exactly which word types they are tokens of. That is, the end result of a physical process of which I am a part is that I have acquaintance with some abstract objects. I bear a physical relation to them that explains how (given my visual and linguistic know-how) I have reliable information about them. It would be ridiculous to say that I don’t know which word types are tokened on my screen—that somehow I could be all wrong about them—or that we haven’t the makings of a physical explanation of how I know which ones they are. To bear a suitable physical relation to a word type, it is enough to bear a “Field-approved” physical relation to one of its tokens, knowing which word is tokened.
Field might object that this is some kind of hybrid relation, not a bona fide “physical relation” between me and the word types, in that I have a physical relation to the word tokens and a relation to the types by virtue of the tokens’ nonphysical relation to their types. If he is just insisting that only spatiotemporal relations between particulars qualify as “physical relations,” then this is in effect just to insist on some brute form of physicalism without explaining what the problem with abstract objects is. And indeed that is one way to read the following paragraph, wherein he explains why space-time points and regions are epistemically unobjectionable: there are quite unproblematic physical relations, viz., spatial relations, between ourselves and space-time regions, and this gives us epistemological access to spacetime regions. For instance, because of their spatial relations to us, certain space-time regions can fall within our field of vision. . . . For in addition . . . space-time regions stand in unproblematic relations (spatio-temporal relations) to physical objects, and this provides us with less direct observational means of knowing about them. (pp. 68–69)
If he is just insisting on some brute form of physicalism—ruling out abstract objects by fiat—we are no closer to understanding what the problem with abstract objects is supposed to be. However, there are two other ways to read the indented paragraph, both of which are less dogmatic. He might instead be saying either that (i) the sort of relation that a space-time region bears to a physical object “provides us with an indirect observational means of knowing about” the space-time region; or (ii) standing in a spatiotemporal relation to a sort of object is sufficient to “provide us with a less direct observational means of knowing about” that sort of object. Consider (i). What is the sort of relation that a space-time region bears to a physical object so that it provides us with an indirect observational means of knowing about the former? Well, word tokens are physical objects, and I am looking at one two feet in front of me, a token of ‘the’. That token is “at” a certain space. We can abstract from the token to the space it occupies, given our perception of the outlines of the token, and perhaps Field’s idea (i) is that being in this physical relation to the token gives me “a less direct means of knowing” about the spatial region the token is at. I see the token, and can abstract to this other entity, “where it’s at.” But I can also abstract to what type it is, the word ‘the’. (Frankly, I am more certain as to which word it tokens than which space it occupies, given how rapidly the earth rotates and revolves, and how rapidly the Milky Way rotates.) I see the token, and abstract to this other entity, “what
it tokens.” The tokening relation seems as epistemologically unproblematic as the being at relation. The token is the type incarnate. Tokening therefore seems like a perfectly good relation (between a type and a token) that just as readily yields “an indirect observational means of knowing about” the type it tokens as about the spatial region it occupies. Now it may be objected that one of the items in the relation, the type, is not a physical object and therefore cannot stand in a physical relation. But, first, it is unclear whether only physical objects can stand in physical relations (more on this below), and second, it is unclear whether spacetime points and regions themselves qualify as physical objects (or even whether Field thinks they are; see the last sentence of the next quoted passage from him, below). If Leibniz is right, they do not qualify as physical objects. If Newton is right, they do, but on Newton’s conception, there is nothing epistemologically accessible about regions of space; they are very abstract, and causally inert. If they are causally inert, then it becomes quite mysterious how we could have any “observational means of knowing about them,” direct or indirect. To paraphrase Field’s own criticism of platonic objects: “There are no causal connections between the entities in this realm and ourselves: how then can we have any knowledge of” it? It might be said that Newton’s theory has been superseded by current theory, so the question is whether space-time points and regions are causally inert under current theory. In a footnote, Field (1980, p. 114) claims they are not: Note incidentally that according to theories that take the notion of a field seriously, space-time points or regions are full-fledged causal agents. In electromagnetic theory for instance, the behavior of matter is causally explained by the electromagnetic field values at unoccupied regions of space-time; and since, platonistically speaking, a field is simply an assignment of properties to points or regions of space-time, this means that the behavior of matter is causally explained by the electromagnetic properties of unoccupied regions. So according to such theories space-time points are causal agents in the same sense that physical objects are: an alteration of their properties leads to different causal consequences.
There are several major problems with this passage, quite apart from the impermissible ‘platonistic speaking’. Malament (1982, p. 532) points them out. Field here construes an electromagnetic field as “an assignment of properties to points or regions of space-time.” I suppose one can characterize a field this way, but then one could characterize a sofa similarly. The important thing is that electromagnetic fields are “physical objects” in the straightforward sense that they are repositories of mass-energy. Instead of saying that space-time points enter into
causal interactions and explaining this in terms of the “electromagnetic properties” of those points, I would simply say that it is the electromagnetic field itself that enters into causal interactions. Certainly this is the language employed by physicists.
Field is swimming against the tide if he classifies space-time points and regions as causal agents. But unless they are causal agents, for Field the same mystery should exist about how we have knowledge of them as he claimed to exist for our knowledge of platonic objects. On, then, to Field’s idea (ii) above, that spatiotemporal relations afford us an “indirect observational means of knowing about” a sort of object. Suppose we concede that unlike the tokening relation, being at at least qualifies as a spatiotemporal relation. What is it about spatiotemporal relations that “provides us with less direct observational means of knowing about” objects of a certain kind? Notice that he should not be construed as insisting that spatiotemporal relations can only hold among physical objects, for then the claim would be uninterestingly tautological. Rather, there is supposed to be something about our standing in a spatiotemporal relation to an object that is sufficient to afford us an observational way of knowing about the object—it confers epistemological accessibility on an object. In other words, the claim seems to be that we have epistemological access to whatever we stand in spatiotemporal relations to. There are good reasons to question this. All sorts of nonphysical things are said to stand in spatiotemporal relations to us—our immortal souls or minds occupy our bodies during our lifetimes, ghosts inhabit certain houses, and our guardian angels are behind us (I am given to understand). I assume that we do not have epistemological access to guardian angels, even if they exist and hover just behind us; even then we do not “indirectly observe” them. Field is just wrong if he is maintaining that the possibility of something standing in some spatiotemporal relation to a physical object by itself “explains how we can have reliable information about” it (p. 69). The real reason for believing, or not believing, in either angels or space-time regions has to do with whether they exist according to our best theory of the world. But so it is with numbers and other abstract entities. The question, then, is whether numbers and other abstract objects exist according to our best theory of the world (which for Field is the physical world), and we have seen in previous sections that they do, according to current theories, and Field’s rival theories are worse in many respects and only better in economy of mathematical assumptions, which has yet to be shown to be a weighty scientific standard. The issue of whether they can be directly or indirectly observed is a red herring.
It is worth noting that one of the key reasons given by those who reject an immortal soul, or mind, is the lack of any plausible account for how it could causally affect the physical body it occupies. That is, it is the lack of a suitable causal connection that poses a problem, not the lack of a spatial relation between body and mind/soul. The same might be said of spacetime points and regions. If there is no causal connection between us and them, it is unclear how someone with Field’s outlook could think that spatial relations alone (with no causal connection) could provide epistemological access. This then reduces to the question of whether space-time points and regions are causal agents, a sore point for Field, as we saw above. Of course, Field could just stonewall it, and claim that if souls, ghosts, and guardian angels exist we do indirectly observe them, but argue that we don’t have epistemological access to them because they don’t exist. And this might be because of the lack of a suitable causal connection discussed in the previous paragraph, or because they are not needed by physical science to explain anything. (Yes, of course he rejects them because of their nonphysicality, but that move is blocked at this point in the argument because we are trying to get to the bottom of his objections to nonphysicality.) That is, our best theory of the physical world does not make reference to guardian angels and the like. So once again the question is what exists according to our best theory of the physical world, and not whether they can be directly or indirectly observed. If our best theory of the physical world includes string theory (that the fundamental constituents of physical reality are not particles, but tiny strings of energy that vibrate at specific frequencies), then we will be committed to strings regardless of whether we can observe them. Of course it probably is necessary, for a physicalist, that whatever exists ought to bear some “physical relation to us that can explain how we can have reliable information about it.” But, first, a thing’s being in a spacetime relation is insufficient (e.g., as we saw in the case of guardian angels). And second, Field implies that the mere existence of spatiotemporal relations is enough to explain how we can have reliable information about what exists, but it is hard to see how, except through the mechanism of causal interaction; and we have seen that Field is on dubious ground when concluding that space-time points and regions are causal agents. So far we have been scrutinizing Field’s claim that space-time regions “bear a physical relation to us that could explain how we can have reliable information about them” and considering whether there isn’t a suitable physical relation in which we stand to words that are tokened right before
us. I argued that there is, and that the sort of moves that Field might make to deny it don’t work. That is, I argued contra (ii) that standing in a spatiotemporal relation to a sort of object should not be sufficient by itself to “provide us with a less direct observational means of knowing about” that sort of object, and that if (i) the sort of relation that a space-time region bears to a physical object “provides us with an indirect observational means of knowing about” the space-time region, then we also have an indirect observational means of knowing about words. However, Field has another objection to abstract objects, one that he says may be “more fundamental” than the objection based on causal inaccessibility. In Field 1989a (p. 68), he calls it “the problem of reference”: And perhaps more fundamentally, what could make a particular word like ‘two’, or a particular belief state of our brains, stand for or be about a particular one of the absolute infinity of objects in that realm?
In Field 1989a (p. 69) he gives two reasons why he thinks “the problem of reference” is not raised by space-time regions that stand in spatiotemporal relations to us. The first is that “we can point to many of them.” The suggestion seems to be that we can succeed in referring by pointing. But as is well known, pointing is neither necessary nor sufficient for successful reference to occur. There must be a concept/property/relation/predicate handy to indicate the kind of entity being referred to—for example, region of space, or ‘region of space’ or ‘one cubic centimeter region of space five foot hence’—else it is not clear which region of space is being referred to, or whether a region, or the rabbit that occupies it, or the undetached rabbit part, or the flea on its ear, or the color of the rabbit, or its size, or the number of its ears, etc. But given the predicate ‘word’, we have no trouble referring to some words too (‘the first word tokened on page one of my copy of the Grundlagen’). For that matter, given the predicate ‘number’ we have no trouble referring to some numbers (‘the number of its ears’, ‘the number of the planets’). For instance, imagine there are three rabbits on the table. I point to them and say ‘the space-time region inhabited by those rabbits’, or ‘the color of those rabbits’, or ‘the number of those rabbits’, or simply ‘that region’, ‘that color’, or ‘that number’. There does not seem to be any reason to think I am unsuccessful in the last case but successful in the first two. Field would of course complain that we cannot succeed in referring to numbers via such phrases if, as he maintains, there are no numbers. But it is equally true that we cannot succeed in referring to regions of space via such phrases as ‘one cubic centimeter region of space five foot hence’ if, as the relationalist maintains, there are no regions of
space—if regions of space are not constituents of the world. It would seem, then, that once again the question of whether there are numbers, or spacetime regions, must reduce, for Field, to the question of what the best theory of the physical world is. The second reason Field cites as to why there is no problem of reference concerning space-time regions consists of two points: (i) “we can refer to certain space-time regions by means of indexicals (‘here’, ‘now’)”; and (ii) “we can refer to many other space-time regions by means of . . . descriptions that invoke the spatio-temporal relations that these regions bear to physical objects” (p. 69). With respect to the ‘here’ and ‘now’: ‘here’ can of course refer to anything from a point, to a city, a planet, or the Milky Way. For successful reference to a unique space-time region to be achieved, some concept has to be involved, explicitly or implicitly (e.g., point, city, planet, etc.). And so it is with expressions for abstract objects. Using a concept word and an indexical, we can also refer to mathematical entities, for example, by means of indexical expressions (‘this number’, ‘this many’, ‘this set’), and obviously to words (‘this word’), species, symphonies, and the like. As for point (ii), that we can refer “by mean of descriptions that invoke the spatio-temporal relations that these regions bear to physical objects,” presumably the relations he has in mind are that physical objects can be at a region, or occupy it, or be near it. So an example of such a description might be ‘the region of space-time occupied by the earth at this instant’. There are two responses to this point. First, unless the space-time regions are themselves physical objects, it is unclear how this should be significant. If I have a nonphysical mind, it occupies my body. If I have a guardian angel, it is near me. If my house is haunted, it is occupied by a ghost. These are spatiotemporal relations. Bearing a spatiotemporal relation to a physical object does not necessarily make something a physical object; nor does it make for epistemological access. Once again, the ontological question reduces to the question of which theory is best. Second, and more important, we can also use such descriptions (descriptions that invoke spatiotemporal relations borne to physical objects) to refer to abstract objects, for example, ‘the first word tokened on page one of my copy of Grundlagen’, or ‘the number of the planets’. Conclusion Thus there is no more “problem of reference” with respect to numbers than with respect to space-time regions, at least according to the linguistic
criteria Field offers. That is, we can equally well refer to numbers and space-time regions using descriptions and indexical expressions. So nothing Field has said supports the conclusions he wants to draw—unless he thinks the connection is causal. But we have explored this. There may be significant differences between how we refer to numbers and how we refer to space-time regions; and the difference may somehow be traceable to our being able to point to one but not the other. But what Field has said along these lines does not undermine what Quine calls successful “deferred ostension” to abstract objects. Time and again we’ve seen that what it really comes down to is the question of what sorts of objects are countenanced by the best theory of the world. As we saw, Field’s theories are inferior to current theories that countenance abstract objects except in economy of mathematical assumptions, and it has yet to be shown, if it can be, that such economy is a weighty scientific standard. It is clear that I vote for current theories and for the abstract objects they countenance. Moreover, since the best defense is a good offense, in chapter 5 I will point out some epistemological difficulties for nominalism that eliminate the alleged epistemological advantage that nominalism is supposed to have over realism. That, along with the poor prospects for “paraphrasing away” apparent references to types chronicled in the next two chapters, should tip the balance in favor of realism.
3  Paraphrasing, Part One: Words
As characterized in chapter 2, the main objection to the argument from the data of chapter 1 to the conclusion that species and other types exist is that we need not conclude that types exist, because each claim that seems to refer to or quantify over types is merely an “avoidable manner of speaking,” a façon de parler for some other claim, one that does not appear to refer to or quantify over species and other types. The present chapter and the next will be concerned with what type talk is supposed to be a façon de parler for, that is, what the paraphrase might be.1 The epistemological motivation for this objection was addressed and, I hope, seriously undermined in chapter 2. With the epistemological motivation gone or drastically diminished, perhaps the pressing sense that there simply has to be some way to “paraphrase away” all talk of types will be diminished too, and the reader can take a more objective look at the prospects for an adequate paraphrase. Here is a sketch of the main argument in what follows. Nominalists usually try to paraphrase sentences that appear to refer to universals or abstract objects by making do with words, terms, or other linguistic items (for example, by saying that what all mockingbirds have in common is that the word ‘mockingbird’ applies to them). But this makes use of words, which are also abstract objects in need of being “analyzed away.” One extremely popular nominalist suggestion (found, e.g., in Goodman 1977b and in Sellars 1963 and mentioned everywhere I have given a talk on this subject) is that ‘Type T has property P’ just amounts to ‘Every token t is P’. In this chapter, I show that in the case of words, there is little hope of obtaining any (nominalistic) property P that is had by every token in view of the lack of similarity to be found among the tokens of a word—no similarity beyond being tokens of that word. In the next chapter (chapter 4) I show that it is also not the case that ‘Type T has property P’ amounts to ‘Either every (token) t is P, every normal
t is P, most ts are P or average ts are P’. Then, borrowing from research in linguistics, I suggest that the best paraphrase would be ‘ts are P’, where this is a generic, or “characterizing” sentence; but I go on to argue that even this does not work. I then spell out the difficulties of paraphrasing even so simple and understandable a sentence as ‘Old Glory had twenty-eight stars in 1846 but now has fifty’. But the coup de grâce against the likelihood of adequate paraphrasing is the virtual impossibility of doing away with apparent quantifications over types, as in, for example, Mayr’s quote in chapter 1: “There are believed to be about 28,500 subspecies of birds in a total of 8,600 species, an average of 3.3 subspecies per species.” But first, we must take note of the fact that it is not enough merely to claim that there is a paraphrase. One must give it. As Quine (1961b, p. 105, my italics) said: Consider the man who professes to repudiate universals but still uses without scruple any and all of the discursive apparatus which the most unrestrained of platonists might allow himself. He may, if we train our criterion of ontological commitment upon him, protest that the unwelcome commitments which we impute to him depend on unintended interpretations of his statements. Legalistically his position is unassailable as long as he is content to deprive us of a translation without which we cannot hope to understand what he is driving at. It is scarcely cause for wonder that we should be at a loss to say what objects a given discourse presupposes that there are, failing all notion of how to translate that discourse into the sort of language to which ‘there is’ belongs.
Nor is it enough to provide a piecemeal translation, by providing the odd paraphrase here and there. Given the ubiquity of apparent references to types, as we saw in chapter 1, we need to be assured that they can always be eliminated. There is no assurance of this unless there is a systematic way to eliminate them. But if their eliminativity is to be more than a nominalist article of faith, we need to be given the elimination rules. Tarski would not be so rightly famous had he been content to give a few examples of truth conditions. Section 1 will focus on the typical pure nominalist strategy for dealing with problems posed by abstract objects/universals (e.g., redness), which is to renounce the abstract objects/universals in question and fall back instead on terms for them (e.g., ‘red’). Often this is seen by the nominalist as unproblematic, as though word types themselves aren’t abstract objects apparent references to which need to be “analyzed away” if the nominalist program is to be successful. The popular Goodman 1977b and Sellars 1963 suggestion for paraphrasing is to replace a reference to a type (‘T is P’) by
a quantification over all its tokens (‘every t is P’), the idea being that what is true of the type is true of all its tokens. So for example, ‘the grizzly bear is ferocious’ would just amount to ‘all grizzlies are ferocious’. Goodman suggests doing exactly that for words. Section 2 will consider whether this suggestion is feasible. We’ll examine at some length the nature of words to see whether there is anything linguistically interesting common to all and only tokens of a word type. I will argue that there is not, and hence that the suggested paraphrase fails.
1  Goodman’s and Sellars’s Suggestion
As is evident from chapter 1, there are all types of types we talk about: biological, chemical, physical, linguistic, aesthetic, and so on. I will confine my attention here mainly to linguistic types, especially words—words, that is, of a natural language like English (although I do not mean to imply that the very same word can’t be a word in more than one language; ‘cannelloni’, ‘lo-mein’, and ‘souvlakia’ prove otherwise). There are a number of reasons for starting with linguistic types, but the most important for my purposes is that it is quite hard to do without them (as will be seen in this and the next two chapters, chapters 4 and 5). Philosophers anxious to deny the existence of one sort of abstract object or another—for example, universals or mathematical entities—typically retreat to linguistic entities to do the job. This is practically the definition of ‘nominalism’; The Oxford Dictionary of Philosophy characterizes nominalism as “the view that things denominated by the same term share nothing except that fact: what all chairs have in common is that they are called ‘chairs’” (p. 264). But usually, the linguistic entities themselves can only be construed as types, and hence as abstract objects. Locke, for example (although not quite a nominalist in the preceding sense, leaning as he does on concepts to do the job that realists assign to universals), relies heavily on an apparently realist semantics for word types in his discussion in Locke 1975 (book III, chap. 3) of “general terms.” Even Quine relies on word types to express his preference for desert landscapes in Quine 1961a. There he renounces redness, but retains the word ‘red’ when he says: the word ‘red’ or ‘red object’ is true of each of sundry and individual entities which are red houses, red roses, red sunsets; but there is not, in addition, any entity whatever, individual or otherwise, which is named by the word ‘redness’ . . . (p. 10)
The nominalistic trend of jettisoning universals/abstract objects for linguistic objects reaches its zenith in the philosophy of mathematics called
“formalism,” where untold infinities of mathematical objects are consigned to Hilbert’s Hell in favor of “signs.” David Hilbert, Hartry Field, and Harold Hodes are our representative formalists. Formalism is, roughly, the philosophy of mathematics that holds that math is not “about” anything, least of all numbers, sets, and spaces; it is the mere manipulation of symbols, as for example occurs when we are doing “proofs.” “In mathematics,” says Hilbert (1967, p. 465), “what we consider is the concrete signs themselves,” which elsewhere (Hilbert 1983, p. 143) he says “in number theory [are] . . . the numerical symbols” 1, 11, 111, 1111. . . . The problem is that these “concrete signs” cannot be construed as physical tokens if Hilbert is to derive the mathematics he wants. (I will not argue for it here, having argued for it at length elsewhere—Wetzel 1984, chapter 3.) Field (1980, p. 1), after claiming that “nominalism is the doctrine that there are no abstract entities” claims that “in defending nominalism therefore, I am denying that numbers, functions, sets or any similar entities exist.” He then goes on to help himself to an unlimited number of expressions; his theory “contains, besides the usual quantifiers ‘∀’ and ‘∃’, also quantifiers like ‘∃87’ (meaning ‘there are exactly 87’)” (p. 21). In all probability, there are only a few of these numerical quantifiers having actual tokens, so to get the job done Field must be relying on expressions other than those of the actual physical variety. Hodes, too, although his view in Hodes 1984 is quite different from Field’s, rejects numbers but requires an unlimited quantity of expressions. Hodes (1984, p. 143) says, for example, “[i]n making what appears to be a statement about numbers one is really making a statement primarily about cardinality object-quantifiers.” Thus the most plausible candidates for the sort of expressions formalists require are expression types.2 But expression types are abstract objects. Broadly speaking, formalists want to reduce mathematics to proof theory, which studies the relation of deducibility between sentences where this is understood purely syntactically and without reference to semantic notions such as truth. But what of proof theory itself? Quine sometimes wears a formalist hat, as for example in Quine 1961a (p. 18) when he says “In speaking of [mathematics] as a myth, I echo that philosophy of mathematics to which I alluded earlier under the name of formalism.” And so does Nelson Goodman. But unlike the other formalists mentioned, Quine and Goodman try to shoulder their ontological responsibilities by coming up with a purely nominalistic syntax for proof theory.
In Goodman and Quine 1947, they start with just the following six symbols (types or tokens? I’ll let you decide): v ‘ ( ) ⏐ ∈ (How successful their project is will be discussed in chapter 5.) Goodman (1977a, p. 262) argues that it is [linguistic] types that we can do without. Actual discourse, after all, is made up of tokens that differ from and resemble each other in various important ways. Some are “now” ’s and others “very” ’s just as some articles of furniture are desks and others chairs; but the application of a common predicate to several tokens—or to several articles of furniture—does not imply that there is a universal designated by that predicate. And we shall find no case where a word or statement needs to be construed as a type rather than as a token. To emphasize the fact that words and statements are utterances or inscriptions—i.e., events of shorter or longer duration—I shall sometimes use such terms as “word-events,” “noun-events,” “ ‘here’-events,” “ ‘Paris’-events” and so on, even though the suffix is really redundant in all these cases. . . . A word-event surrounded by quote-events is a predicate applicable to utterances and inscriptions; and any “ ‘Paris’ consists of five letters” is short for any “Every ‘Paris’-inscription consists of five letter-inscriptions.”
Goodman’s specific suggestion concerning ‘Paris’ will be considered at length in chapter 5. In the meantime consider his suggestion as a general one for paraphrasing—one found also in Sellars 1963. It is that to say a type T has property P is just to say that all tokens of T have P. So, for example, to say ‘The atom consists of a heavy nucleus whose radius is less than one ten-thousandth the radius of the atom’ is to say ‘every atom consists of a heavy nucleus whose radius is less than one ten-thousandth the radius of the atom’. In general, the suggestion is that (1)
To say ‘The T is P’ (or ‘T is P’) is to say ‘Every t is P’.
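Put schematically—this is merely a gloss on (1), not Goodman’s or Sellars’s own formulation—the proposal is that, for any type T and property P,
  T is P ↔ ∀x (x is a token of T → x is P).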
In the case of the word ‘Paris’, to say that it consists of five letters is just to say that every token (or, rather, every inscription token) of it does. This may work for some types and some properties, in that there may be some types T and properties P such that T has P if and only if every token of T does. Perhaps every atom consists of a heavy nucleus whose radius is less than one ten-thousandth the radius of the atom (although it is doubtful whether hydrogen can be said to have a heavy nucleus). But it fails for many types and properties—the grizzly bear is brown, but not all grizzlies are brown; some are blond, some are black. In particular, it fails for words.
To see why, we need to embark on what might appear to be a rather lengthy digression on words—what they are, and what, if anything, their tokens have in common. The elimination rule we are considering entails that whatever is true of words must be true of all their tokens. But a discussion of words is essential, not only because words are a paradigm case of types (recall how Peirce drew the type–token distinction with the word ‘the’), but because it will show how utterly hopeless (1) and several other paraphrasing suggestions are.
2  Words
What do I mean by a word? Ziff’s (1972) amusing and instructive article “What Is Said” illustrates that there is a host of different identity conditions for ‘what is said’, among them phonetic, phonemic, morphological, syntactic, and semantic conditions. Some of the same distinctions apply to words as to what is said; The Oxford Companion to the English Language (McArthur 1992), for example, lists eight kinds of word.3 Yet there is an important and very common use of the word ‘word’ that stands out. A rough characterization of this kind is the sort of thing that merits a dictionary entry. (‘Rough’, because some entries in the dictionary, e.g., ‘il-’, ‘-ile’, and ‘metric system’, are not words, and some words, e.g., place names and other proper names, do not get a dictionary entry.) It is claimed in The Story of English (McCrum, Cran, and MacNeil 1986, p. 102), for example, that “Shakespeare had one of the largest vocabularies of any English writer, some 30,000 words, [that] estimates of an educated person’s vocabulary today vary, but it is probably about half this, 15,000.” If I were a certain sort of philosopher, in lieu of the characterization given, I would just give some examples of words. I could probably give about 15,000 examples but let me just mention a few: consider the noun ‘color’. The OED (Murray et al. 1971, vol. 2, pp. 636–638) entry for it lists: a pronunciation [kɒ' lər]; two “modern current or most usual spellings” (colour, color); eighteen earlier spellings (collor, collour, coloure, colowr, colowre, colur, colure, cooler, couler, coullor, coullour, coolore, coulor, coulore, coulour, culler, cullor, cullour); and eighteen different senses—divided into four branches— with numerous subsenses. There is a separate heading for a verb with the same pronunciation and spellings (pp. 638–639). We may also add that in the sense under consideration, English ‘red’ and French ‘rouge’ are different words. Now if we are to take these dictionary entries seriously—and I think we should—then we can see that
(i) a word can be written or spoken;
(ii) a word can have more than one correct spelling;
(iii) a word can have more than one correct spelling at the same time;
(iv) a word can have more than one sense at the same time;
(v) two words can have the same correct spelling(s); and
(vi) two words can have the same sense.
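Purely by way of illustration—nothing in the dictionaries, of course, takes this form, and the field names and abbreviated glosses below are invented—one might picture an entry of the kind just described as a record that bundles several co-existing spellings, pronunciations, and senses under a single word type:

from dataclasses import dataclass

@dataclass
class WordType:
    lemma: str
    part_of_speech: str
    spellings: list        # (ii), (iii): more than one spelling can be correct at once
    pronunciations: list   # one or several current pronunciations
    senses: list           # (iv): more than one sense at the same time

# The OED's two current spellings of 'color' and a couple of its eighteen senses,
# drastically abbreviated here.
color_noun = WordType(
    lemma="color", part_of_speech="noun",
    spellings=["colour", "color"],
    pronunciations=["/ˈkʌlə(r)/"],
    senses=["a visual quality such as red or blue", "pigment"],
)

# (v): a distinct word with the very same spellings and pronunciation.
color_verb = WordType(
    lemma="color", part_of_speech="verb",
    spellings=["colour", "color"],
    pronunciations=["/ˈkʌlə(r)/"],
    senses=["to give color to"],
)

Nothing hangs on the representation; it simply makes vivid that what the dictionary individuates is the word type, not any one spelling, sound, or inscription.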
Still, if we were to continue to try to generalize from these two dictionary entries we might be tempted to think that at least a word has only one pronunciation, since only one is listed for ‘color’. But if that means that everyone pronounces ‘color’ identically, then of course it is incorrect. Can we instead conclude at least that every word has only one correct pronunciation? No; for consider ‘schedule’. My Webster’s (Mish et al. 1993, p. 1044) lists four current pronunciations: ['ske-(,)jü(ə)l](Am), ['ske-jəl] (Am), ['she-jəl] (Can) and ['she-(,)dyü(ə)l] (Brit). Thus to (i) through (vi) we may add: (vii) a word may have more than one correct pronunciation at the same time. The very idea of there being only one correct pronunciation of each word—such that whole dialects are simply wrong—has, I think, pretty much died out, at least among linguists. True, there is something called “Received Pronunciation” (RP, or “Public School Pronunciation,” “the Queen’s English,” or “the King’s English”), which many people take to be standard English. Amazingly, research in Britain has shown, according to McCrum, Cran, and MacNeil (1986, p. 29), that speakers of RP—identifiable only by voice—tend to be credited with qualities such as honesty, intelligence, ambition, even good looks. After RP, there is a league table of acceptable accents. Dublin Irish and Edinburgh Scottish are high on the list, which then descends through Geordie . . . , Yorkshire and West Country, until we reach the four least valued accents in Britain: Cockney, Liverpool Scouse, Birmingham and Glaswegian.
Most Americans too are dazzled by the Queen’s English, and anyway have their own hierarchy of accents. But although it’s psychologically understandable to think that an accent represents “standard pronunciation,” it is incorrect to do so. Accents constitute merely a social hierarchy; not all accents are on the same footing socially. It’s a social advantage to speak the Queen’s English. It’s also a social advantage to a man to be tall. A survey of graduates of the University of Pittsburgh found that tall men
(over 6′2″) commanded higher starting salaries than those under six feet; and among 6,000 23-year-old Britons, short men made less money than those taller, even after other obvious variables were factored out.4 Clearly it is not incorrect to be short—or poor, or perceived as homely—and it is not incorrect to speak one’s regional dialect. Until Victorian times, there were only regional dialects in Britain; there was no “standard English.” And in this century, according to McCrum, Cran, and MacNeil (1986, p. 28), the “standardness” of a certain accent has largely been perpetuated by the BBC, who consciously adopted it as “the standard” in the 1920s—even though at the time Queen’s English was only spoken by about 3 percent of the British population. Surely, we do not want to say that only a fraction of 1 percent of native speakers of English speak “the right way.” Different accents or dialects can equally well perform the job for which language exists; so all are on the same footing linguistically, even if not socially. Thus although (vii) is stated rather minimally, the gloss on it is that not only may a word have more than one “correct” pronunciation; owing to different accents or dialects it may have quite a few. Enough has been said to suggest some of the so-called identity conditions that are relevant to words. We may now ask the following key question. Is there anything all and only tokens of a particular word have in common other than being tokens of that word (i.e., any linguistically nontrivial projectible property)? If the answer is “no,” then as an elimination rule, (1) is hopeless.
Spelling
One popular answer to the above question is spelling. That was the gist of Goodman’s claim that any ‘ “Paris” consists of five letters’ is short for any ‘every “Paris”-inscription consists of five letter-inscriptions’. Unfortunately, it is not spelling, for four reasons. First, as we saw above, the word ‘color’ has tokens spelled twenty different ways; even today there are two correct ways to spell it. So being spelled the same way is not necessary for word identity. Second, words can be misspelled—for example, some tokens of ‘cat’ are actually spelled ‘k’-‘a’-‘t’ (although those fond of the spelling theory would probably deny this. Probably they would also hold that young children, who pronounce words strangely, are saying no words at all).5 Third, different words have the same spelling—for example, the noun ‘color’ and the verb ‘color’—or, for another example, three of the seven OED entries that are spelled ‘d’-‘o’-‘w’-‘n’: the adverb meaning ‘in a descend-
direction’, which is derived from Old English; the substantive meaning ‘the fine soft covering of fowls’, derived from German; and the substantive meaning ‘open expanse of elevated land’ (as in ‘on the broad downs, Stonehenge was visible’), derived from Celtic (OED, pp. 624–626). Same spelling, different words—so having the same spelling is not sufficient. Fourth, and most important, not all word tokens are inscriptions; some are utterances. (Not that Goodman’s suggestion runs afoul of this, since his is limited to inscriptions.) And utterances are not composed of letter tokens, and hence don’t have a spelling (except, perhaps, in an indirect sense through their types, if they happen to be tokens of a written language). This is evident if we think of a natural language before its writing is established; speakers of such languages utter words every day, but their utterances have no spelling, even in the indirect sense just mentioned. Notice that even if, contrary to fact, all tokens of a word had the same spelling, we would have analyzed word types in terms of letter types, but we would still need an account of what the latter are (since they seem to be types, too), and of whether their tokens have anything in common (so as to apply elimination rule (1)). Sensitive to this consideration, those who favor the spelling theory are likely to add that letter types are just shapes (or classes of similarly shaped objects) and hence that words are shapes too.6 Of course, as an addendum to the spelling theory, this view is subject to all the objections that the latter faces. What, for example, is a shape? With good reason Quine (1961c, pp. 73–74) classifies shapes as abstract objects. But it is subject to an additional objection. The spelling theory by itself would at least classify Braille tokens of the word ‘cat’ together with signed tokens of it and Morse Code tokens, since they all have the same spelling; but the shape theory could not classify them together. Worse, the shape theory would not even classify all printed tokens of the letter ‘A’ together. Tokens of radically different fonts are not similar in shape (in anything like the Euclidean sense). Witness the differently shaped tokens of ‘A’ that a short while in the library uncovered, shown in figure 3.1. This illustration should kill enthusiasm for the shape theory, but presenting these ideas publicly has taught me that the shape theory dies hard. Let me take another stab at it. Suppose that someone still wants to insist that all he means by a letter is a very particular shape—as in ‘vee-shaped’, ‘ess-shaped’, ‘ell-shaped’. Then either: (i) he also wants to insist that all the tokens of ‘A’ just illustrated are similarly shaped, in which case he is employing some esoteric notion of ‘similarly shaped’ not based on Euclidean geometry—not even based on topology—and it is incumbent upon him to
Figure 3.1
spell out what it is; or (ii) (employing the Euclidean notion) he denies that all of the ‘A’ tokens are similarly shaped, and points to one of them as his “exemplar.”7 In that case he must deny that the nonsimilarly shaped tokens on the page are ‘A’s. The problem with this is that they are ‘A’s. The shape theory of letters does not accord with how we categorize. The letter ‘A’ has a long and distinguished history; historians say ‘A’ comes in many “forms”—a fact that is at odds with the shape theory. According to Compton’s Encyclopedia:
The letter A probably started as a picture sign of an oxhead, as in Egyptian hieroglyphic writing (1) and in a very early Semitic writing used in about 1500 BC on the Sinai Peninsula (2). In about 1000 BC, in Byblos and other Phoenician and Canaanite centers, the sign was given a linear form (3), the source of all later forms. In the Semitic languages this sign was called aleph, meaning “ox.” The Greeks had no use for the aleph sound, the glottal stop, so they used the sign for the vowel a. They also changed its name to alpha. They used several forms of the sign, including the ancestor of the English capital A (4). The Romans took this sign over into Latin, and it is the source of the English form. The English small a first took shape in Greek handwriting in a form (5) similar to the present English capital letter. In about the 4th century AD this was given a circular shape with a projection (6). This shape was the parent of both the English handwritten character (7) and the printed small a (8). [See figure 3.2]
Figure 3.2
The shape theory would also incorrectly classify the first letter token of Samuel Adams’ signature (‘S’) on the Declaration of Independence as a token of the same type as the first letter token of James Wilson’s signature (a ‘J’) rather than of Samuel Chase’s, since it is much more similar in shape to Wilson’s than to Chase’s. It would incorrectly classify the shape in figure 3.3 as an eight, since it is more nearly similar in shape to that in figure 3.4 than to a six. (The example is from Ludlow 1982, p. 420.) Charitably construed, (ii) is revisionism, pure and simple (“let’s revise our linguistic habits to accord with the shape theory”). Uncharitably construed, it is Humpty Dumptyism. Someone partial to shape theory might at this point concede that tokens of the letter ‘A’ do not have the same simple Euclidean shape, but urge
Figure 3.3
Figure 3.4
that nevertheless there is a disjunctive “shape” they all are: being shaped like the first ‘A’ in the illustration, or the second, or . . . or the last—and that that is all there is to the letter ‘A’. Of course, we would need quite a large disjunction just to specify all the fonts there are (and we’d have to get exemplars for Morse code, Braille, and sign language). But the real problem is that no disjunction we specify would do, for two reasons. First, the disjunction theory would still classify some tokens incorrectly—for example, that of the first letter token of Samuel Adams’s signature as a token of ‘J’.8 Second, Madison Avenue may invent a new font tomorrow. If we revise the analysis to include any form that may ever be considered “the letter A,” we rule out almost nothing. This is just to give up on the shape theory entirely. However, the main problem with any spelling theory is that it ignores the spoken word. Linguists put a priority on the spoken word. On, then, to the next theory. Phonology Suppose, then, that we identify a word with a sequence of audible sounds. True, doing so has the disadvantage of ignoring the written word, but perhaps that can be justified on the grounds that the spoken word came first; that written words came only relatively recently and only for some languages; and, most important, that the written word (for most natural languages today) merely represents an attempt to symbolize the sound pattern of the spoken word. The English orthographic system is just a rather crude phonetic one that was hit upon four or five hundred years ago, when ‘one’ was obviously not pronounced as ‘won’. If a word is a sequence of audible sounds, then perhaps all tokens of the same word sound the same. But of course they don’t. Tokens of the same word sung for thirty seconds at full volume by an operatic bass or whispered quickly by a young child will sound utterly different along almost every acoustic dimension. Still, it might be said that there is one dimension they share: the phonological dimension. That is, both speakers will be uttering the same phonemes. So let us consider the phonological hypothesis: Every token of a word is composed of tokens of the same phonemes. It faces problems similar to those of the spelling theory (although matters are more complicated here). For one thing, a phoneme, like a letter, is itself an abstract object, a type with tokens, and so we’d also need an account of what a phoneme is, and what its tokens have in common (if anything).
This task promises to be at least as hard as identifying letters. As we saw in chapter 1, phonology (the study of phonemes) is distinct from phonetics (the scientific study of speech production). Phonetics is concerned with the physical properties of sounds produced and is not language relative. Phonemes, on the other hand, are language relative: two phonetically distinct speech tokens may be classified as tokens of the same phoneme relative to one language, and as tokens of different phonemes relative to another language. Phonemes are theoretical entities, and abstract ones at that. They are said (e.g., in Halle and Clement 1983, p. 8) to be sets of features; the English phoneme [p], for example, is {−sonorant, +labial, −voiced, −continuant}. The appeal to both sets and features is not likely to be pleasing to a nominalist. Another difficulty for the phonological hypothesis is that sameness of phonemes is not sufficient for word identity, as shown by homonyms like ‘red’ and ‘read’, or the earlier example of ‘down’. (It’s not even sufficient for sentence identity. My favorite example is [Ah ‘key ess oon a ‘may sah] which means ‘Here is a table’ in Spanish and ‘A cow eats without a knife’ in Yiddish.) This particular difficulty might be avoided if we modify the phonological hypothesis by requiring in addition that the sequence of phonemes have the same sense. But this is too strong; we saw earlier that the noun ‘color’ has eighteen senses. Besides, this move will not help us with the third difficulty, namely, that sameness of phonemes is not necessary for word identity. We noted earlier that owing to accents/dialects, not even every correct pronunciation of a word will be phonologically identical to every other. Recall ['ske-(,)jü(ə)l] and ['she-jəl]. A Cockney ‘know’ is like RP ‘now’; RP ‘know’ is like Scottish ‘now’; and a Yorkshire ‘know’ is like RP ‘gnaw’ (Fudge 1990, p. 39). Yet we understand one another. Even within a single person’s speech, the same word will receive various pronunciations. For example, the word ‘extraordinary’ is variously pronounced with six, five, four, three, or even two syllables by speakers of British English: it ranges “for most British English speakers from the hyper-careful ['ekstrə'ʔɔ:dinəri] through the fairly careful [ik'strɔ:dn¸ ri] to the very colloquial ['strɔ:nri]” (Fudge 1990, p. 40). This last example demonstrates what we saw in chapter 1: that there may be no phonetic signal in a token for every phoneme that is supposed to compose the word: it is “missing” several syllables. This is also demonstrated by reflection on ordinary speech: [jeet?] for ‘did you eat?’ and [sem] for ‘seven’. There is a humorous handbook on Australian pronunciation entitled “Let Stalk Strine.” No wonder, then, that many phoneticians have given up on the attempt to reduce phonological types to acoustic/
articulatory types. (See Bromberger and Halle 1986.) Even the physicalist Björn Lindblom concedes (in Lindblom 1986, p. 495) that “for a given language there seems to be no unique set of acoustic properties that will always be present in the production of a given unit (feature, phoneme, syllable) and that will reliably be found in all conceivable contexts.” Not only is this true for a given language; the example of ‘extraordinary’ illustrates that it is true for a given idiolect. Sameness of phonemes is neither necessary nor sufficient for word identity. One might, at this point, want to back up. The diversity that tokens of the same word manifest might suggest that the concept of a word we started with was too abstract. Perhaps there is a hierarchy of types of words, and we started “too high” on it. That is, the thought goes, perhaps we should renounce for the time being the question of what lexicographic words have in common, and instead focus on what “lower-level” words on the hierarchy have in common, and then later construct what it is that lexicographic words have in common. In other words, we might first gather together those tokens that are phonetically (and perhaps semantically) identical on the grounds that this is a perfectly good notion of a word. So, for example, ['ske-(,)jü(ə)l], ['ske-jəl], ['she-jəl], and ['she-(,)dyü(ə)l] would qualify as four different “words,” rather than four pronunciations of the same word. A Cockney ‘know’ would be a different “word” from an RP or Yorkshire ‘know’, [sem] would not count as the same word as ‘seven’, and [jeet?] would not count as the same sentence as ‘did you eat?’. This will not work. It would classify different words as the same, for example, Cockney ‘know’ with RP ‘now’; RP ‘know’ with Scottish ‘now’; and a Yorkshire ‘know’ with RP ‘gnaw’. Moreover, it has the undesirable consequence that different dialects of the same language would have far fewer “words” in common—if they had any—than one would have supposed, and similarly for different idiolects within the same dialect. Worse, even the very same idiolect would distinguish as different “words” (what one would have thought was) the same word. But worst of all, it would distinguish as different words what are just different representations for the same idiolectal word spoken by the same person. For example, the five pronunciations of ‘extraordinary’ would come out as different words. Not only would a phonologist take this as excessively complicated (see Fudge 1990, p. 43), but the representation types themselves can receive realizations that are acoustically very different (for the small child and the man may speak the same idiolect). According to the phonologist Eric Fudge (1990, p. 31), “it is very rare for two repetitions of an utterance to be exactly identical, even when spoken by the same person.” We would be driven inexorably toward viewing
each word token as a different “word.” This is completely unacceptable for a linguistic theory. However, the last nail in the coffin for the suggestion according to which all tokens of the same word have the “same sound” is that words can be mispronounced—no doubt in many ways. Kaplan (1990, p. 105) made a case for the claim that “differences in sound” between tokens of the same word can be “just about as great as we would like.” He supports this by means of the following thought experiment. There is an experimenter and a subject. The experimenter says a word; the subject is supposed to wait five seconds and then repeat the word. ‘Alonzo’; ‘Alonzo’. ‘Rudolf’; ‘Rudolf’. The subject performs well; he is highly motivated, sincere, reflective, not reticent, and so on. Then the experimenter starts tampering with the subject’s speech mechanism—and not just by making him inhale a little helium. The experimenter puts weird filters of all kinds into the poor subject. Kaplan claims that we would say “yes, he is repeating that word; he is saying it in the best way that he can,” however dissimilar the imitation (p. 104). Intentions and Others Kaplan’s extremely clever thought experiment draws our attention to an often ignored fact: that intention is very important to the identity of a word token. As a vector toward determining the identity of a word token, it is much weightier than is usually appreciated (think back to the rigid thinking of the spelling enthusiasts). Of course, it is doubtful that something that sounds like ‘supercalifragalisticexpialidocious’ can be a token of the word ‘Alonzo’, but certainly, as any parent can tell you, words can receive pretty strange pronunciations and still retain their identity. Not that Kaplan is committed to this, but let us consider, then, whether intention is the thing every token of a word has in common. The intention hypothesis is that: Every token of a word is caused by an intention to produce it. First, is it necessary for a particular utterance to be a token of ‘cat’ that the speaker intended to utter ‘cat’? No, because a speaker might intend to remain silent, but against her will utter ‘cat’ because a neurosurgeon is stimulating a nerve in her brain during an operation. Second, is it sufficient? No, because a speaker might have the intention of uttering ‘cat’ but die before she gets the word out. Suppose a speaker intends to utter ‘cat’ and succeeds in making a noise. Is this sufficient for her to have uttered ‘cat’? No, because she might have aphasia and utter ‘cow’ instead of ‘cat’.
What if a person is in control of her faculties (including those of speech production), is awake, and is in “normal” circumstances? Is the intention to utter ‘cat’ sufficient for uttering ‘cat’? No; she might indeed utter something, and yet might fail to utter ‘cat’. She might simply utter ‘cow’ by mistake. Now we could drag in talk of “unconscious intentions,” but then the account will be only as plausible as the thesis that every slip of the tongue is a Freudian slip. (I’m reminded here of the joke about the two psychoanalysts who are discussing their Easter visits home. One says: “I made a terrible Freudian slip at dinner; I intended to say to my mother ‘Please pass the hot-cross buns,’ but instead I said ‘You’ve ruined my life, you witch.’ ”) I don’t wish to nitpick about necessary and sufficient conditions. It is probably true that in most cases in which ‘cat’ is said by someone who is awake, in possession of her faculties, and in normal circumstances, and so on, she intended to utter ‘cat’, and in most cases in which such a person intends to utter ‘cat’, she utters ‘cat’. There is a high degree of correlation. Important and interesting as this might be, it doesn’t help us with our current project. There is also a high degree of correlation between surfboards and intentions to produce surfboards. But it would be putting the cart before the horse to analyze surfboards, or words, in terms of intentions-toproduce-them. Any account of what an intention-to-utter-‘cat’ is will probably presuppose some account of what the word ‘cat’ is. (This is not to say that intentions should not be considered when doubt arises as to the identity of an utterance or a bit of styrofoam, but this fact bears more on the question of what makes us think that something is a token of a certain word than on what the word is.) Similarly, it may be generally true that t is a linguistic token of type T if and only if members of the relevant linguistic community would agree that it is.9 (The “relevant linguistic community” for tokens of English could not include everyone who speaks English, since it is often hard to understand dialects too dissimilar from one’s own.) The problem with this otherwise excellent suggestion is that it, too, puts the cart before the horse. It may be generally true, but it does not offer a linguistically interesting property. To see this, consider that it may be generally true that something is a surfboard if and only if members of the relevant community (presumably surfers) would agree that it is. Yet this tells us nothing about the nature of surfboards, which may well have something functional in common. It may be generally true that something is a musical token of a Mozart sonata if and only if members of the relevant musical community would agree that
it is. But again, this is not a musically interesting property. Mentions of both Mozart and sonata form are essential. The two previous suggestions are in line with Ned Block’s suggestion10 that in view of the multiple realizability of words as tokens, perhaps a functional definition is in order. The most promising to my mind are the two just considered. But both seem to presuppose that we know what words are already, before we can identify “intentions to produce them” or “what the community accepts.” This is not like defining a mousetrap as “anything that can trap a mouse.” It is like defining ‘the’ as “anything the community accepts as a ‘the’.” We haven’t gotten anywhere. It might be thought that Sellars (1963) solved this problem by appealing to the notion of a linguistic role, which Loux (1998, p. 79) defines two word tokens as having when they “function in the same way as responses to perceptual input; they enter into the same inference patterns; and they play the same role in guiding behavior.” It is dubious whether this notion can be unpacked without referring to abstract objects (same inference patterns?), but in any event it cannot be used to pick out all tokens of a word, as we have been using the word ‘word’. The reason is that ‘red’ and French ‘rouge’ are different words in our sense, but their tokens play the same linguistic role for Sellars. Conclusion Thus my answer to the question posed earlier, “Is there anything all and only tokens of a particular word have in common other than being tokens of that word (i.e., any linguistically nontrivial, ‘natural,’ projectible property)?” is, in general, no.11 (I say “in general” because there may be exceptions. Maybe all tokens of ‘eleemosynary’ are traceable back to something Shakespeare wrote.) But this is not very different from grizzly bears. Not all adult grizzlies are big, not all are brown, not all have humps, and so forth. Almost any generalization about all grizzlies will be false if there is one midget albino grizzly who happens to terrify easily. Yet it is still true that the grizzly is a big, humped, brown bear native to North America. It is also true that the word ‘cat’ is correctly spelled ‘c’-‘a’-‘t’ nowadays, and correctly pronounced ['kæt]. Two disclaimers should be mentioned. The first is that I am not saying that it is just a brute fact that word token t is a token of word T. Plenty of tokens of ‘cat’ are spelled ‘c’-‘a’-‘t’ or pronounced ['kæt], just as plenty of grizzlies are big and humped. Spelling and pronunciation are factors that help determine, for each word token t, what word type T it is a token of
and why. Other factors include: the linguistic context (phrase, sentence, paragraph, . . . the linguistic community) in which t occurs, and, as Kaplan (1990) rightly emphasizes, the intentions of the producer of t and perhaps of the producer’s audience, if there is one. The second, related, disclaimer is that nothing that has been said is meant to rule out the possibility that what makes a token a token of type T supervenes on purely physical properties and relations.12 Since there is no linguistically nontrivial, “natural,” projectible property that all tokens of a word have in common—other than being tokens of that word—the paraphrase provided by (1) of ‘The T is P’ as ‘Every t is P’ is woefully inadequate. No property that one can correctly predicate of a word is likely to be had by all the tokens. In particular, to use Goodman’s example, ‘Paris’ consists of five letters, but it is not the case that every ‘Paris’-inscription consists of five letter-inscriptions. For example, ‘Parrys’ and ‘Pareiss’, listed under ‘Paris’ in the OED, do not. (More discussion of Goodman’s example is to be found in chapter 5.) Similarly, the CDK4 protein inhibits the p16 protein—but not always. And as is clear from the preceding discussion, it won’t work for most claims about a species, since its members vary so much among themselves. The upshot is that the most popular way to paraphrase type talk does not work. The next chapter will consider more sophisticated suggestions, but in the end these too will be seen not to work.
4
Paraphrasing, Part Two
In the previous chapter, we saw that paraphrasing ‘The T is P’ as ‘Every (token) t is P’ does not work. In this chapter, I show that various other paraphrasing suggestions also do not work, and I conclude that the nominalist’s prospects for paraphrasing away all talk of types are poor. Section 1 will consider and reject as paraphrases for ‘T is P’: ‘Every normal t is P’, ‘Most ts are P’, ‘Average ts are P’, and ‘Either every (token) t is P, every normal t is P, most ts are P or average ts are P’. Then, in section 2, borrowing from research in linguistics, I suggest that the best paraphrase would be ‘ts are P’ where this is a generic, or “characterizing” sentence, but argue that even this does not work. In section 3, I exhibit some of the difficulties of paraphrasing even so simple and understandable a sentence as “Old Glory had twenty-eight stars in 1846 but now has fifty.” But the coup de grâce against the likelihood of adequate paraphrasing is presented in section 4; it is the virtual impossibility of doing away with apparent quantifications over types, as in, for example, Mayr’s quote in chapter 1: “There are believed to be about 28,500 subspecies of birds in a total of 8,600 species, an average of 3.3 subspecies per species.” Then, on the assumption that we need abstract objects to serve as types, section 5 will tackle so-called class nominalism, which (assuming that “class nominalism” is not an oxymoron) maintains that talk of types is best construed as talk of classes of tokens. I will claim that this form of nominalism is also inadequate. 1
Other Paraphrases
It has been suggested that ‘the horse is a four-legged animal’ might be viewed as a universal judgment for ‘all properly constituted horses are four-legged animals’.1 This provides us with the next suggestion for paraphrasing away references to types, which is that to say ‘The T is P’ is to say ‘Every properly constituted t is P’. Presumably this is equivalent to:
(2) To say ‘The T is P’ (or ‘T is P’) is to say ‘Every normal t is P’.
I should note, first of all, that I am not denying that the horse is a four-legged animal if and only if all properly constituted horses are four-legged animals (although I am not asserting it either). Many facts about types supervene on facts about tokens, and this might be one of them. However, (2) is inadequate owing to the following serious difficulties. First, it appeals to what is “normal” (or what is “properly constituted”). Fully resolving whether this notion has scientific credibility cannot be embarked upon here. There are good reasons, however, for viewing it with suspicion—even in biology, where it might be thought to be at its most useful. With respect to our own species, the notion of normality, as Hull (1989, pp. 17–22) has argued, has had a long history of abuse. Responsible authorities in the past have argued in all sincerity that other races are degenerate forms of the Caucasian race, that women are just incompletely formed men, and that homosexuals are merely deviant forms of heterosexuals. The normal state of human beings is [apparently] to be white, male heterosexuals.
Of course this does not show that the idea of normality is worthless. But Hull argues convincingly that even in the fields of biology where one might expect to find a significant sense of “normal”—embryology, evolutionary biology, and functional morphology—one doesn’t. Take embryology. Is there a normal developmental pathway through which most organisms develop or would develop if presented with the appropriate environment? Hull (1989) argues that the data do not support it, because The phenotype exhibited by an organism is the result of successive interactions between its genes, current phenotypic make up and successive environments. The reaction norm for a particular genotype is all possible phenotypes that would result given all possible sequences of environments in which the organism might survive. Needless to say, biologists know very little about the reaction norms for most species, our own included. To estimate reaction norms, biologists must have access to numerous genetically identical zygotes and be able to raise these zygotes in a variety of environments. (p. 18)
Obviously, this cannot be done for humans. But when it is done on other organisms, the reaction norms vary widely. He argues that the only clear sense of “normal development” is not a significant one: it is merely what we humans are familiar with in “recent, locally prevalent environments” (p. 19). So, for example, the nuclear family seems normal to most of us,
although “the nuclear family existing outside a kinship group is a relatively new social innovation and is rapidly disappearing” (p. 19). Things do not improve when we consider the evolutionary perspective, Hull argues, because “evolution is the process by which rare alleles become common, possibly universal, and universally distributed alleles becomes totally eliminated” (p. 19). The essence of evolution is that alleles and genes do not remain the same. Species morph over time. “Early on one allele will surely be considered natural, while later on its replacement will be held with equal certainty to be natural” (p. 20). Here’s a possible example, based on a recent finding: CCR5 is a normal component of human cells, but people who lack it because of a genetic mutation rarely become infected [with HIV], even if they have been exposed to HIV many times through unprotected sex. . . . And the lack of the receptor seems to do the people no harm. (Grady 1997)
It is now “normal” to become infected with HIV if one has been exposed to HIV many times through unprotected sex; but if the AIDS epidemic wipes out nearly everyone with CCR5, it will then be “normal” not to become infected upon frequent exposure. One might respond that the “normality” of horses having four legs cannot be called into question so easily. True. But (2) is a general proposal about all types and relies heavily on the idea of normality; if that notion is bogus for many important types and their properties, then the analysis itself is bogus too. A robust sense of the idea that a “normal function” can be attached to each gross morphological structure runs aground on the fact that one and the same structure may perform different functions. For example, the human urogenital system is used for excretion and reproduction. Worse, “about all a biologist can say about the function of the human hand is that anything that we can do with it is normal” since the hand can be used, as Hull puts it, “to drive cars, play the violin, type on electronic computers, scratch itches, masturbate, and strangle one another. Any notion of the function of the hand which is sufficiently general to capture all the things that we can do with our hands is likely to be all but vacuous and surely will make no cut between normal and abnormal uses” (p. 21). Basically, he stresses that there is a huge gap between a biological sense of function and a commonsense, ordinary sense. The commonsense answer to “what is the function of sex?” is “reproduction”; the biological function is to increase genetic heterogeneity. Being sexually neuter may be biologically functionally normal (honey bees, old maids, priests) in that it increases “inclusive fitness”—helping the species survive.
The concept “normal,” if it is not used inappropriately to import questionable values into biology, must mean “average,” and in that case, it is quite relative to time and place. Average height varies from group to group and time to time. So “normal” height in America today is considerably more than it was two centuries ago. One more mud-slinging example from Hull (1989): Having blue eyes is abnormal in about every sense one cares to mention. Blue-eyed people are very rare. The inability to produce brown pigment is the result of a defective gene. The alleles which code for the structure of the enzyme which completes the synthesis of the brown pigment found on the surface of the human iris produces an enzyme which cannot perform this function. As far as we know, the enzyme product performs no other function either. However, as far as sight is concerned, blue eyes are perfectly functional, and as far as sexual selection is concerned downright advantageous. (p. 22)
Thus the concept of normality is problematic even in biology. But when it comes to, say, word tokens, it is not merely problematic; it is ridiculous. As we saw earlier, there may be some (social) sense in which certain dialects are (viewed as) “standard,” but to say, for example, that only tokens of RP English are normal, when only a fraction of 1 percent of native speakers of English speak it, is indefensible. Switching to “normal” meaning “average” produces unacceptable consequences. We want to be able to say that the word ‘cat’ is spelled ‘c’-‘a’-‘t’ and pronounced [‘kæt], but it surely is not the case that the average word token is both spelled ‘c’-‘a’-‘t’ and pronounced [‘kæt], since the average word token is an inscription or a sound event, but not both. Yet the most important reason (2) fails is this. Even if it is sometimes true to say of a given type that it has a property if and only if all its normal tokens do—maybe even because they do—(2) still fails as a systematic analysis: it does not work for many type attributions. The plain truth of the matter is that ‘T is P’ (the type is P) is not always equivalent to any one of the following: Every t is P; Every normal t is P; Most ts are P; Average ts are P. It need not even be equivalent to one or another of the above (although not always the same one). In other words, (3) is false:
(3) To say that ‘T is P’ is to say ‘Either every t is P, every normal t is P, most ts are P, or average ts are P’. One reason (3) is false (and there are other reasons given below) is due to the following sorts of counterexamples. Consider this true sentence: The loggerhead turtle lives at least thirty years and may live fifty years. It is simply not true that every loggerhead lives at least thirty years, that every normal one lives at least thirty years, that most live at least thirty years, or that loggerheads live on average thirty years. The reason is that most die long before they reach thirty—in fact in their first year or two of life. However, there is a way of dealing with the loggerhead turtle example, if the nominalist turns to current research in linguistics for help. It involves the notion of a characterizing statement. Characterizing statements are the subject of our next section. 2
Characterizing Statements
Characterizing statements have also been called nomic, dispositional, general, or habitual statements. They are better suited than all of the foregoing we considered for the job of being “short for” the type statements of chapter 1. Loggerhead turtles live at least thirty years and may live fifty years is a characterizing statement, as are A potato contains vitamin C and amino acid. Potatoes contain protein. Asher and Pelletier (1997) have a helpful discussion about them in the Handbook of Logic and Language. They explain that characterizing statements are so called because they “express a characterizing property” (p. 1128). The hallmark of a characterizing statement is that it admits exceptions. That is, it can be true while having exceptions. These characterizing statements seem to be tailor made for the job the nominalist needs done, in that characterizing statements can be about objects in a kind, rather than the kind itself, but nonetheless are statements that admit of exceptions while being true. So while it is false that every potato contains vitamin C, it is true that in general potatoes contain vitamin C. And while it is false that every loggerhead lives at least thirty years, or that most do, or normal ones do, or average ones do, it is true nonetheless that
loggerhead turtles live at least thirty years (because a few do, in contrast to members of many other species that never do). And while it is false that every ‘Paris’-inscription consists of five letters, it is true that in general ‘Paris’-inscriptions consist of five letters. (This example will be discussed at length in chapter 5.) Of course, there is a question as to whether characterizing statements are true at all, since they admit exceptions. There is a temptation to say they are “strictly speaking false,” that they are “indeterminate or figurative or metaphorical or sloppy talk” as Asher and Pelletier (1997, p. 1128) put it. But such statements are the bread and butter generalizations of everyday life. If we say that characterizing statements with exceptions are strictly speaking false, we will be left with little to say of any generality that is true. (Even the famed ‘snow is white’ is a characterizing sentence, since, after all, there is yellow snow.) I will assume such sentences can be true or false. There are different views as to what makes such statements true. One view is that there is an implicit quantifier. But which one? The trouble is that, as Asher and Pelletier (1997) point out via the following examples, no one quantifier binding the subject can take care of all the following true characterizing statements (where the quantifier in parentheses gives the range of things that are needed to make the statements exceptionless and strictly speaking true): Snakes are reptiles. (all) Telephone books are thick books. (most) Guppies give live birth. (some subset of the females) Crocodiles live to an old age. (a very few) Frenchmen eat horsemeat. (a few) Unicorns have one horn. (none) (p. 1132)
A second (wrong) view, according to Asher and Pelletier (1997), is that characterizing sentences “employ vague, probabilistically-oriented quantifiers such as most, or generally, or in a significant number of cases” (p. 1132). The trouble with this view is that under it the following examples from Asher and Pelletier of false characterizing statements would be true, because in each case prefixing the sentence with ‘In a significant number of cases’ produces a truth. Leukemia patients are children. Seeds do not germinate. Books are paperbacks. Prime numbers are odd. Crocodiles die before they attain an age of two weeks. (p. 1132)
But the real reason, Asher and Pelletier argue, that all of these “extensional” approaches are doomed to failure, is that “characterizing sentences are inherently intensional” (p. 1133). And here is where we may lose the nominalist, especially those with Quine’s mistrust of the intensional. But the following examples, read extensionally, are false if there is no occasion for the episode to take place, when in fact they may be true characterizing statements: This machine crushes oranges. Kim handles the mail from Antarctica. Members of this club help one another in emergencies. (p. 1133)
That is, the machine may never have occasion to crush an orange, so read extensionally it is false, but since its function is to crush oranges the characterizing statement is true. So the characterizing sense is not extensional. Similarly, if there is no mail from Antarctica, but Kim’s job description includes handling mail from Antarctica, the characterizing reading is true although the extensional reading of it is not. And if there are no emergencies for the members of the club to help one another with, but they stand ready to do so, then the characterizing reading is true. Asher and Pelletier conclude that “surely this shows the complete implausibility of trying to capture genericity with a quantifier, no matter how inherently vague or probabilistically-determined one tries to make the quantifier. No such extensional analysis can be correct” (p. 1133). That there is some operator other than an implicit quantifier is underscored by the fact that there are ambiguities between different generic readings of some characterizing sentences. An example they give (p. 1134) is that the characterizing sentence Typhoons arise in this part of the Pacific can mean either Typhoons in general have a common origin in this part of the Pacific or There arise typhoons in this part of the Pacific. A related feature of many characterizing statements is that they are often not the expression of mere regularities. Asher and Pelletier (p. 1137) point out that a characterizing sentence such as Birds fly expresses, in its characterizing sense, something different from all of the following sentences construed extensionally:
Most birds fly. Usually birds fly. Birds typically fly. Birds generally fly. Normally, birds fly. In general, birds fly. The reason, they claim, is that ‘Birds fly’ has “nomic force—it follows somehow from (assumed) natural laws,” whereas “the other statements can be used to express this meaning also, but in addition they can be used to assert, on a purely extensional level, that most, or many, etc., birds can fly” (p. 1137). The upshot of all this is that characterizing sentences have, according to Asher and Pelletier, three parts, joined by a generic operator, GEN: “a matrix (a main clause) which makes the main assertion of the characterizing sentence, a restrictor clause which states the restricting cases relevant to the particular matrix, and a list of variables that are governed by GEN” (pp. 1135–1136). So, for example, the ambiguous ‘Typhoons arise in this part of the Pacific’ may be represented as either GEN [x] (x are typhoons; ∃y[y is this part of the pacific & x arise in y]); or as GEN [x] (x is this part of the pacific; ∃y[y are typhoons & y arise in x]). There are two key questions at this point: what is the semantic interpretation of GEN? And can the nominalist stomach it? We already rejected the view that there is an implicit quantifier, either the universal quantifier or some other univocal quantifier, or a vague probabilistic quantifier, so if any of these analyses is the nominalist’s preferred analysis, it may put the nominalist out of the running for appealing to characterizing sentences to explain type talk. Asher and Pelletier discuss seven other possibilities that occur in the literature: that GEN statements involve (i) relevant quantification, or a singular predication where the subject is (ii) an abstract object, (iii) a prototype, or (iv) a stereotype, or that they are best analyzed by (v) modal conditionals in possible world semantics, or by using (vi) situation semantics, or by way of (vii) a default reasoning analysis. Obviously, not all of these are consistent with nominalism, and maybe none is. But it would take us too far afield to try to sort out which ones are consistent
with nominalism, and more important, to decide what the best interpretation of the GEN operator is. Luckily it isn’t necessary for our purposes. For our purposes, the question is: even if the nominalist avails himself of the GEN operator, has he given a systematic analysis of type statements if he claims (4)
To say ‘T is P’ is to say ‘GEN [x] (x are ts; x are P)’.
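By way of illustration only (the rendering is mine, following the notational pattern just quoted, with the restrictor to the left of the semicolon and the matrix to the right), (4) would have us read the loggerhead sentence of section 1 as
GEN [x] (x are loggerhead turtles; x live at least thirty years),
a reading that can be true even though most loggerheads die in their first year or two of life.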
No. The same source that provided the nominalist with succor also withholds it. What linguistics giveth, linguistics taketh away. For Asher and Pelletier in effect negate (4), the purported equivalence of what they call kind statements, with characterizing statements. Their examples of kind statements are: The potato was first cultivated in South America. Potatoes were introduced into Ireland by the end of the seventeenth century. The Irish economy became dependent upon the potato. Notice how similar these are to the sorts of statements considered in chapter 1. They claim that “not only are there intuitive differences concerning their logical form but also there are linguistic differences between them” (p. 1128). Krifka et al. (1995, p. 63) summarize some of the differences (found after examining a number of different languages) as follows: i) Nominal predicates that are tied to an established kind can safely be regarded as yielding a kind-referring NP (the Coke bottle vs. *the green bottle). ii) The verbal predicate in a sentence with a kind-referring NP need not be stative (The Panda is dying out). iii) There are verbal predicates (kind predicates like be extinct, invent) which require a kind-referring NP in some argument place.
Let me elaborate on point (iii), since it shows that no analysis of the form we have been considering will work. That form is: To say ‘T is P’ is to say ‘. . . ts are P’. The reason is that some properties of the type do not apply to individual tokens. For example, some properties are collective properties, derived from the distribution of its tokens. The grizzly bear, Ursus horribilis, for example, had at one time a U.S. range of most of the West, and numbered 10,000 in California alone. Today its range is Montana, Wyoming, and Idaho, and it numbers fewer than 1,000. No particular flesh-and-blood bear
numbers 1,000 or had a range comprising most of the West. So certainly it is not the case that every one did, that every normal one did, that most did, or that average ones did. There are countless examples of such claims. The first few examples from chapter 1 testify to their frequency and underscore the impossibility of analyzing the predication of a property to a type as a predication of a property to every, most, normal, or average tokens. Those examples were: The ivory-billed woodpecker, once North-America’s largest and most spectacular, was declared extinct. Less than a century ago, it was found across the South. The Tarahumara frog, which lived in Arizona, has disappeared from the United States. It is clear that predicates such as ‘being declared extinct’, ‘being found across the South’, ‘being rare’, and ‘disappearing from the United States’ apply here not to individual specimens, but collectively to the species as a whole. Similarly for references to particular genes, as in “the exon of the D4 dopamine receptor gene varies in length among different healthy populations, exhibiting anywhere from 2 to 10 so-called repetitions for DNA subunits.” For a linguistic example, we had: “Greek, Armenian, and Sanskrit inherited the same word, meaning ‘winter’, and all of them independently shifted its meaning so that it means ‘snow’ instead.” Similarly, Old Glory had twenty-eight stars in 1846 but now has fifty, although it is unlikely that any particular American flag underwent such a transformation—certainly most did not, average ones did not, normal ones did not. Now, of course, some of these claims can be given logical equivalents that do not make any apparent references to types. For example, The ivory-billed woodpecker, once North-America’s largest and most spectacular, was declared extinct. Less than a century ago, it was found across the South. will be true if and only if it is true that It is now official that there are no more ivory-billed woodpeckers, which were once North-America’s largest and most spectacular woodpeckers. Less than a century ago, they were found in many places in the South. Similarly The Tarahumara frog, which lived in Arizona, has disappeared from the United States,
will be true if and only if it is true that There used to be some tarahumara frogs living in Arizona, but none live anywhere in the United States now. That is as it should be. Biology is an empirical science. Facts about species are ultimately rooted in facts about members of them—they supervene on the facts about members, if you will. (They are also rooted in facts about other species, too.) Linguistics, too, is an empirical science. Facts about words, or languages, have roots in facts about tokens of them. If we work hard enough at it, we can often find some way of speaking that does not appear to refer to any types and yet is appropriately equivalent to some statement about an individual type. (More on this below.) These last remarks are apt to provoke our nominalist’s final suggestion concerning paraphrasing. The nominalist might claim that if, as I concede, there often are ways to paraphrase type talk, isn’t that good inductive evidence that it can always be done—even in the absence of a systematic way to do it? This brings us to our final paraphrasing suggestion—which, really, is not so much a bona fide suggestion as to how to paraphrase as it is merely the claim our nominalist started with: (5) Every claim that contains an apparent reference to a type is equivalent to a claim that apparently refers only to its tokens (on a par with the examples given in the preceding paragraph). First of all, this really seems to be just a nominalist article of faith. (But more importantly, even this attenuated suggestion will not work, for reasons given in the next paragraph.) Recall what prompted this paraphrasing endeavor. The nominalist made it sound as though there were only a few odd sentences that needed paraphrasing, and Goodman’s and Sellars’s suggestion (embodied in (1)) for doing so suggested that there were fast and easy ways to paraphrase them. I hope chapter 1 and the results of this chapter have laid that idea to rest. So at this point (5) seems like an article of faith because there does not seem to be enough inductive evidence to justify it—quite the contrary. If we look to the field of linguistics for evidence, we find that they are moving full steam ahead with the hypothesis that the NPs of some sentences refer to kinds. Carlson and Pelletier (1995, p. 78) even count such sentences as the following as containing NPs that refer to kinds, in spite of the fact that many of these sentences contain predicates that can refer to individual specimens, and the sentences in most cases lend themselves to easy paraphrasing:
Linguists have more than 8,000 books in print. The American family contains 2.3 children. The potato contains vitamin C. Dutchmen are good sailors. Be quiet—the lion is roaming about! Man set foot on the Moon in 1969. The wolves are getting bigger as we travel north. Moreover, I would urge that given the unbelievable frequency of apparent references to types, we need to be assured that references to them can always be eliminated. There is no assurance of this unless there is a systematic way to eliminate them. Goodman and Quine certainly exhibit sensitivity to the need for producing the paraphrase. The goal of Goodman and Quine (1947) was to “give a translation for [the] syntax” of mathematics (p. 197). They confidently conclude that What is meaningful and true in the case of platonistic mathematics . . . [are] the rules by which it is constructed and run. These rules we do understand, in the strict sense that we can express them in purely nominalistic language. The idea that classical mathematics can be regarded as mere apparatus . . . can be maintained only if one can produce, as we have attempted to above, a syntax that is itself free from platonistic commitments. (p. 198, my italics)
Note that according to Goodman and Quine, (a) one must actually produce the nominalist paraphrase. And until it is done, according to them, (b) we don’t even understand the platonistic sentences, for they say “if it cannot be translated into nominalistic language, it will in one sense be meaningless for us” (p. 197). That explains why doing so is necessary for them. My point is that given the ubiquity of references to types, a systematic rule-governed paraphrase is essential to justify confidence that all talk of types can be eliminated. In the paraphrases given several paragraphs above (about ivory-billed woodpeckers and so on), no apparent pattern is discernible. But the problem with (5) and the likelihood that perhaps a piecemeal paraphrase can be obtained in every case is that the sentences above were easy to paraphrase, and it was easy to judge that the results were roughly semantically equivalent to the original claim. Not so for the following much harder sentence. And even were that to be paraphrased nominalistically, there is no hope of paraphrasing the many quantifications over types that we exhibited in chapter 1. (More on this in section 4.)
3 Old Glory
(i) Old Glory had twenty-eight stars in 1846 but now has fifty.
Recall that by Goodman’s and Quine’s lights, we can’t understand this sentence until we have the nominalist paraphrase in hand, since it contains an apparent reference to an abstract object, a type of artifact. Presumably, the paraphrase shouldn’t be too difficult to give if it is what we “really understand” (as they put it) if we understand the type sentence. The trouble is that it doesn’t lend itself to any particular paraphrase in terms of flag tokens, as (5) demands. One might think the following would work: (ii) All or most Old Glorys had twenty-eight stars in 1846 but all or most now have fifty stars, But (ii) is probably false, whereas (i) is true. There were twenty-eight states by July 4, 1846, so Old Glory had twenty-eight stars on that date; but it is likely that most or all of the tokens of Old Glory that existed on that date did not have twenty-eight stars. Most tokens of Old Glory in existence in 1846 probably had twenty-six or fewer stars because Texas and Florida had just joined the Union the previous year (assuming people did not immediately destroy their old flags, and it took flag manufacturers a while to catch up). We might try: (iii) All properly made Old Glorys had twenty-eight stars in 1846, but properly made Old Glorys have fifty stars today. Whereas (i) is unambiguous, (iii) is ambiguous, depending on what ‘properly made’ means. Conforming to the manufacturer’s design? No doubt most or all of the twenty-six star flags did conform to the manufacturer’s design when they were made, so that on this interpretation, (iii) would be false (assuming, again, that it takes a bit of time to catch up). Conforming to the law regarding flags? To eliminate the ambiguity, we might try: (iv) All Old Glorys made in accordance with the law regarding flags had twenty-eight stars in 1846, but have fifty stars today. But what if there wasn’t any law regarding flags—perhaps there was only a flagmakers’ custom, if that? How is one supposed to know there was a law or a custom in 1846 dictating what properly made tokens look like? If it is equivalent to some claim about tokens only, what is the claim? There is nothing special or unusual about (i). I think I know what it means, but I certainly could not have readily provided a paraphrase in terms only of
token flags that is equivalent to (i); nor had I any idea what it is true in virtue of. At least, not until I read in Stars and Stripes that In 1818, after five more states had been admitted, Congress enacted legislation pertaining to a new flag, requiring that henceforth the stripes should remain 13, that the number of stars should always match the number of states, and that any new star should be added on the July 4 following a state’s admission. This has been the system ever since.
A law of 1947 states: The flag of the United States shall be thirteen horizontal stripes, alternate red and white; and the union of the flag shall be forty-eight stars, white in a blue field. . . . On the admission of a new State into the Union one star shall be added to the union of the flag; and such addition shall take effect on the fourth day of July then next succeeding such admission.2
So it turns out that (i) is true roughly because (v) There were twenty-eight states in 1846 and there are fifty today, and there were laws passed by Congress in 1818, and in 1947, among other dates, specifying what “the flag of the United States” is to consist in. The main problem with (v) is that we still have a reference to a type (“The flag of the United States”)—the very type we were trying to eliminate reference to. But in addition, it would be absurd to pretend that (v) is what we “really understand” when we understand (i), as Goodman and Quine (1947) require. Claim (v) has much more information in it than does (i), information that cannot be gleaned from (i). So it would be incredible if (v) were a façon de parler for (i). They are materially equivalent but surely that is insufficient. Moreover, (v) does not even appear to quantify over actual flag tokens at all. They are irrelevant, since there need not have been any token flags with twenty-eight stars. At best there is an implication that had there been any flag tokens that conformed to Congressional law in 1846, they would have had twenty-eight stars. This brings us to our last try: (vi)
Old Glories had twenty-eight stars in 1846 but now have fifty.
As we saw, understood extensionally—as a description of what the actual flags in 1846 looked like—this is probably false. However, understood as a characterizing sentence, perhaps it is true. But then it is intensional. As we saw, that is something nominalists, especially of Quine’s stripe, will have trouble with.
What is the point of all this? Not that there is no nominalistic paraphrase of (i); maybe one exists in platonic heaven (although it probably has no tokens!). The point is that if (i) is a mere façon de parler for some other claim not involving references to types, it should be possible without undue difficulty to say what it is a mere manner of speaking for—especially if it is what we understand when we understand (i), as Goodman and Quine (1947) require.

4 Quantification
I have been writing as though the main impediment to the nominalist program is its apparent inability to paraphrase away singular references to types, either in a systematic or even a piecemeal fashion. This has been a ruse, for which I beg the reader’s pardon. It was a necessary ruse in that nominalists often offer paraphrases of apparent singular references to types as though such singular references are the main problem for their program, so the hope that this could be done had to be laid to rest before we could proceed. But it was not strictly philosophically necessary. The coup de grâce for (5) (or anything like it) is the virtual impossibility of doing away with apparent quantifications over types. We saw many examples in chapter 1, including this one, but note especially the sheer volume in a typical paragraph from Mayr:

Classifying species as monotypic or polytypic is a first step in a quantitative analysis of phenotypic variation. Another way is to analyze the subdivisions of polytypic species: What is the average number of subspecies per species in various groups of animals and what is their average geographic range? There are believed to be about 28,500 subspecies of birds in a total of 8,600 species, an average of 3.3 subspecies per species. It is unlikely that this average will be raised materially (let us say above 3.7) even after further splitting. The average differs from family to family: 79 species of swallows (Hirundinidae) have an average of 2.6 subspecies, while 70 species of cuckoo shrikes (Campephagidae) average 4.6 subspecies. . . .
To appreciate the monumental challenge of analyzing away all quantifications over species in the preceding paragraph into talk merely of concrete individuals, consider Goodman and Quine’s (1947, p. 180) nominalistic translation of the far simpler ‘there are more cats than dogs’: Every individual [in Goodman’s mereological sense3] that contains a part of each cat, where the part is just as big as the smallest animal among all cats and dogs, is bigger than some individual that contains a part of each dog, where the part is just as big as the smallest animal among all cats and dogs.4
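The logical shape of that translation may be easier to see in a rough first-order regimentation. The symbolization below is mine, not Goodman and Quine’s own notation: P(y, z) says that y is a (mereological) part of z, and Bit(y, x) abbreviates ‘y is a part of x just as big as the smallest animal among all the cats and dogs’:

\[ \forall z\,\Big(\forall x\,\big(\mathrm{Cat}(x) \rightarrow \exists y\,(\mathrm{Bit}(y,x) \wedge P(y,z))\big) \;\rightarrow\; \exists w\,\Big(\forall x\,\big(\mathrm{Dog}(x) \rightarrow \exists y\,(\mathrm{Bit}(y,x) \wedge P(y,w))\big) \wedge \mathrm{Bigger}(z,w)\Big)\Big) \]

Every variable here ranges over concrete individuals and their mereological fusions only; that is what makes the translation nominalistically acceptable, and also why nothing comparably mechanical suggests itself for the quantifications over species, subspecies, and families in the Mayr passage.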
It is certain that there is no actual nominalist paraphrase token of the Mayr passage yet in existence, and a good bet that none will ever exist. I, for one, have no idea how to even begin to come up with a paraphrase that quantifies over only concrete individuals. In the absence of such a paraphrase we remain committed to species. The same remark applies, mutatis mutandis, to genes, proteins, traits, receptors, germline mutations, words, syllables, sound-segments, vowels, accents, phonemes, phonological representations, alternations, parts of the speech organs, languages, language trees, computers, chess games, openings, gambits, piano concerti, notes, opening measures, octave leaps, atoms, spectral lines, forces, fields, subatomic particles, and all the other types that we saw quantified over in chapter 1. As Quine (1961a, p. 13, my italics) remarked: when we say that some zoölogical species are cross-fertile we are committing ourselves to recognizing as entities the several species themselves, abstract though they are. We remain so committed at least until we devise some way of so paraphrasing the statement as to show that the seeming reference to species on the part of our bound variable was an avoidable manner of speaking.
So what was the point of laboriously considering (1) to (5), if the preceding consideration involving quantification is so devastating to the nominalist program? Only this: it would be hard to grasp how devastating it is without first seeing how difficult much simpler claims about types are to paraphrase. I conclude that the nominalist program for paraphrasing away references to, and quantifications over, types is doomed to failure. Assuming, then, that type talk does involve references to, and quantifications over, abstract objects, in section 5 we will consider whether talk of types is best construed as talk of classes of tokens—that is, whether so-called class nominalism is correct.

5 Class Nominalism
The best case for construing types as classes centers around species, because species, at least, are said to have “members” and a determinate number of them at a given time. For example, there were more than 6.5 billion members of Homo sapiens alive in 2006. I will argue that the advantages of construing species as classes pretty much end at that observation. Yet many philosophers would consider species to be classes or sets. (There’s no need to distinguish sets from classes here.) Quine (1969b, p. 118), for instance, wrote that “kinds can be seen as sets, determined by their
members. It is just that not all sets are kinds.” Species are kinds (unless we use the word ‘kind’ in some highly specialized sense). If a species is a class, then it is a particular class. Which class is it? One’s initial thought might be that (i) it is the class of currently living members of the species. But (with all due respect to A-theorists of time) Socrates counts as a member of Homo sapiens, but not a member of the class of currently living Homo sapiens. Moreover, (i) would make all extinct species identical. And it would entail that every time a human is born, a new species comes into being. Since classes are extensional, the class of humans alive yesterday is not identical to the class of humans alive today, although it would be biological absurdity to maintain that a new species has come about. So a better suggestion is (ii) that each species is the class of all the spatiotemporal members of the species that ever did, do, or will exist (assuming there is only one future). Although this suggestion avoids the problems in (i), there is a decisive reason for rejecting it.5 It is this. A class, or set, has its members necessarily, but a species does not. Homo sapiens would have been Homo sapiens whether or not I was born. Kit Fine (1992) argued that a class, X, has its members essentially—different members make a different set—but that the members of X are not essentially members of X. Something nearly the opposite seems to be the case with species. Many philosophers have thought that membership in Homo sapiens is an essential property of each of us; but clearly having me as a member is, alas, not an essential property of Homo sapiens. I might not have been born. In a possible world in which I was not, Homo sapiens would still be the numerically identical species, Homo sapiens, but the class of Homo sapiens would be a different one from the class of Homo sapiens in the actual world.6 That is, the class of Homo sapiens necessarily has me as a member, but the species does not. Classes have the same membership in every possible world, but species do not. So a species cannot be the class of all actual members of it ever to live. There are three options for the class enthusiast, none of them promising. One is to insist that it really would have been a different species had I not been born. The problem with this suggestion is making it plausible, for it flies in the face of biological law. Biologists would consider someone daft who thought that his or her nonexistence would mean that a species other than Homo sapiens would exist—even “strictly speaking.” It might be claimed that we wouldn’t bother to distinguish these “different” species because the difference would be so slight. I take it that this is roughly Hume’s position on identity—any difference destroys identity. If so, it has the disadvantages of Hume’s position on identity (which he himself
confessed led to a “labyrinth” of opinions he said he “did not know how to correct. . ., nor how to render . . . consistent” [Hume 1973, p. 633]). To the extent that such a position is coherent, it would mean a radical departure from the task before us, which is to articulate a metaphysics that does the least amount of damage to ordinary and scientific discourse. An example of the sort of damage I have in mind is the following. If species have their membership necessarily, then the following biological truth would be necessarily false: If its habitat had been preserved, the ivory-billed woodpecker would not be extinct (given the reasonable assumption that the ivory-billed woodpeckers who would be alive would be “new” ones, i.e., not identical to any members of the class of actual ivory-billed woodpeckers—call it X—all of whom are dead). It would be necessarily false, because in every relevantly possible world in which the habitat was preserved, there would be no members of X alive. Similarly, every class has a particular cardinality necessarily. Not so a species. Therefore, given the nature of our project, this option will be pursued no further here. The second option for the class enthusiast is to claim that a species is the class of all the actual and possible members of it. Quine would not embrace this option; his antagonism to possibilia runs deep owing to their poor identity conditions.7 But there is another reason for spurning the suggestion that a species is the class of all actual and possible members of it, namely, that it falsifies many of the sorts of things we claim to be true of species. For example, there are only about one thousand members of the species Ursus horribilis left. But if the species Ursus horribilis is the class of all actual and possible grizzlies, then this is not true. Had no European “discovered” America there might still be tens of thousands of grizzlies alive. So there are tens of thousands of members of the set of possible and actual living grizzlies. Similarly, the ivory-billed woodpecker is extinct; there are currently no living members of the species. But there might have been many living members of it, had the pine forests around swamps not been destroyed. So if the species includes possible living ivory-bills, then it has many living members. The true cardinality statements about species become false. The advocate of the view we are considering might urge that we accommodate his view by recasting all claims about “members” as claims about “actual members” (so that, for example, the number of actual living grizzlies is 1,000). But why bother? That is, why should we revamp our linguistic habits to accommodate what is at best a deeply problematic metaphysics?
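The modal asymmetry driving these objections can be put schematically before we turn to the third option. The regimentation is mine, offered only as a summary of the argument just given: let a be any actual organism, let S be its species, let C be the class of all actual members of S, and read E(x) as ‘x exists’:

\[ a \in C \;\rightarrow\; \Box\big(E(C) \rightarrow a \in C\big) \]
\[ \Diamond\big(E(S) \wedge \neg E(a)\big) \qquad\text{and}\qquad \Diamond\big(E(S) \wedge \exists y\,(y \text{ is a member of } S \wedge y \notin C)\big) \]

The first line records the rigidity of class membership (and hence of class cardinality); the second records the contingency of species membership, illustrated in the preceding pages by the possibility that I was never born and by the merely possible grizzlies. Nothing with the first feature can be something with the second.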
The third option for the class enthusiast is to articulate a different conception of class—class as extension of a predicate, for example, where the extension varies from possible world to possible world. There is the predicate ‘Homo sapiens’, which has an extension that varies from world to world, and the species in some sense “is” the extension. But why call this thing a class, given the well-established semantics for “set” and “class”? There is no thing called ‘the extension of the predicate’; there are different extensions in different worlds. So the scenario is more adequately described as one in which there are different classes in different possible worlds somehow standing in for the species. But what is this “standing in” relation? The species cannot be any one of the classes, for they are not identical to each other. It can’t be the union of all of them, for that was the suggestion considered and rejected in the preceding paragraph. It can’t be the class of all the classes in the different possible worlds, for then members of ordinary biological species would be, not individual organisms, but classes of them. For similar reasons it can’t be some sort of function, because even if a function is a class, it is not the sort of class that has individual organisms as members. Could we say that the species itself has an extension, different in different worlds, and that these things are classes? Perhaps; but this does not make a species a class. The problem the suggestion we are considering runs up against is twofold: species have members, different ones in different possible worlds, and those members are organisms, not classes. It is hard to see how any concept of “class” worthy of the name can accommodate these facts. The metaphysical underpinning that makes these criticisms of the species-as-class view possible is that the relationship of a member to a class it is a member of is an extrinsic one—like standing a certain number of miles from the Great Pyramid at Giza. If the Great Pyramid at Giza did not exist, you would not stand in that relation to it, but you would be unchanged. Similarly, if a and b are existing contingent entities, then {a, b} exists too. But if b did not exist, then there would be no {a, b} for a to be a member of. It is otherwise with species. If George W. Bush did not exist, there would be no class {you, George W. Bush} for you to be a member of, but there would still be a species, Homo sapiens, for you to be a member of. Your membership in a species is more nearly intrinsic to you, part of your nature. If the class {you, George W. Bush} had never existed—say, because Bush didn’t—you would still exist. But if the species of which you are a member had never existed, then it is extremely probable that you would not have existed either—many philosophers would say it is biologically impossible that you would have existed. (I am not one of them, since I
think biological classifications are infected with more contingency than philosophers usually allow. And for similar reasons, I stop short of saying that one’s specieshood is essential in the sense of ‘essential’ that entails it is necessary.8) But certainly what species one is a member of is a scientifically important fact about one; inferences with a high degree of probability may be drawn on the basis of one’s specieshood. (Perhaps this is all we should mean by ‘essential’.) Since any bunch of things makes up a class, classes are scientifically uninteresting (outside of mathematics). Thus classes are ill suited to serve as species. And species were the nominalist’s strongest case, because at least they are said to have members. The proposal looks far worse when we consider other types. Here is just one example. If a sentence is just the class of its tokens, then there are only finitely many sentences of English. This would have absurd consequences for linguistics. It would identify with one another, for example, all sentences that have no tokens. For instance, one hundred occurrences of ‘the clam’ followed by one hundred occurrences of ‘split’ is supposed to be a sentence of English. Assuming, as seems plausible, that it has no tokens, it would just be the null set—like every other very long untokened sentence. Even what counts as a sentence would turn out to be a radically contingent matter—contingent on which ones happen to get uttered or inscribed. I conclude that types are not classes.

Conclusion

In chapter 2 I argued that the data in chapter 1 support the conclusion that types exist. The nominalist’s objection to this argument was that each claim that appears to refer to or quantify over types is merely a façon de parler for a claim that refers to or quantifies over only tokens; since such apparent references to and quantifications over types can be “paraphrased away,” we need not conclude that types exist. In this chapter we’ve seen that in too many cases they cannot be paraphrased away, especially quantifications, and these occur frequently in biology, linguistics, physics, and most other disciplines. The nominalist may retreat to class nominalism, but we’ve also seen in this chapter that even if this is a form of nominalism, it is not an adequate one. Nominalism, therefore, does not have an adequate account of the data in chapter 1. Thus the argument that types exist stands. In the next chapter, chapter 5, we drop the defensive posture toward nominalism and take an offensive one. By scrutinizing Goodman’s nominalism very carefully we are able to expose the trouble with nominalism.
5
The Trouble with Nominalism
There is a dark and ugly side to nominalism, one that deprives it of the epistemological advantage it is often thought to have over realism. Since Quine and Goodman took nominalism with respect to linguistic signs very seriously, the epistemological problems of nominalism can be brought out by examining their strategy for eliminating abstract objects. We will concentrate on Goodman 1977b and Goodman and Quine 1947. (No doubt Field takes it very seriously too in Field 1980, 1989, but he has focused his efforts therein on eliminating mathematical objects rather than the linguistic objects with which we are primarily concerned.)

1 What Goodman and Quine Say
In Goodman 1977b, Goodman is trying to motivate eliminating linguistic types. First he urges that we ought to identify statements with the utterances themselves, rather than with something they have in common, because “each of [two] utterances of ‘now’ ” may differ in truth value although they “may be exactly alike in sound pattern” (p. 261). “We find different tokens of the same type naming and affirming different things” (p. 262). Notice that Goodman has referred to types here in the phrases I italicized to make his point (which is understandable given how hard it is to say anything general about words without referring to types). He has given no reason so far for doing without types altogether. However, he adds: it is the types that we can do without. Actual discourse, after all, is made up of tokens that differ from and resemble each other in various important ways. Some are “now” ’s and others “very” ’s just as some articles of furniture are desks and others chairs; but the application of a common predicate to several tokens—or to several articles of furniture—does not imply that there is a universal designated by that
predicate. And we shall find no case where a word or statement needs to be construed as a type rather than as a token. (p. 262)
The preceding paragraph, with its reference to “a common predicate” and to “ ‘now’ ”s and “ ‘very’ ”s, may put one in mind of what Armstrong (1978a, chap. 2) calls “predicate nominalism” and its shortcomings. According to predicate nominalism, a is F iff a falls under the predicate ‘F’. Thus, for example, there is nothing all red things have in common except that the word ‘red’ applies to them. (Similarly, a is a ‘now’ iff a falls under the predicate ‘is a “now” ’.) Quine (1961a, p. 10) clearly appears to be endorsing predicate nominalism to solve the problem of universals in this passage (which we have already seen) because he refers to word types:

The words ‘houses’, ‘roses’, and ‘sunsets’ are true of sundry individual entities which are houses and roses and sunsets; but there is not, in addition, any entity whatever, individual or otherwise, which is named by the word ‘redness’. . . .
The obvious difficulty for predicate nominalism is that the analysis appeals to a type, the predicate ‘red’, which is just the sort of thing (viz., a universal) that needs to be analyzed away. But using the analysis to do so only generates more types: “a falls under the predicate ‘red’ iff a falls under the predicate ‘falls under the predicate “red” ’ ”. An infinite vicious regress ensues, in which we never get rid of ‘the predicate “red” ’. But Goodman (1977b) need not be taken to be referring to predicate types, and probably should not be so taken, for he goes on to say:

To emphasize the fact that words and statements are utterances or inscriptions—i.e., events of shorter or longer duration—I shall sometimes use such terms as ‘word-events’, ‘noun-events’, ‘ “here”-events’, ‘ “Paris”-events’ and so on, even though the suffix is really redundant in all these cases. . . . A word-event surrounded by quote-events is a predicate applicable to utterances and inscriptions; and any ‘ “Paris” consists of five letters’ is short for any ‘Every “Paris”-inscription consists of five letter-inscriptions’. (p. 262, my emphasis)
Goodman through a Realist Filter

There is much that is puzzling about the above passage from Goodman, and it reveals something of nominalism’s counterintuitiveness to see what it is. Take the underlined claim. On first reading, we are liable to process it through a realist filter (one that relies on types) and interpret it as: the sentence (or, if you like, any token of)

‘Paris’ consists of five letters

is short for the sentence (or any token of)

Every ‘Paris’-inscription consists of five letter-inscriptions.
(This is realist because either way it refers to two sentence types.) I shall assume that ‘consists of five letters’ means here ‘consists of exactly five letters’ and that if one thing ‘is short for’ another, it at least entails the other (since they ought to be synonymous). So if ‘Paris’ consists of five letters is true, Every ‘Paris’-inscription consists of five letter-inscriptions ought to be true too. But the second doesn’t follow from the first. The first is a semitheoretical claim. It says, correctly (and speaking realistically), that the word type ‘Paris’ consists of five letters; so (any token of) the sentence ‘Paris’ consists of five letters will be true. As we saw in chapter 3, this can be true even though not every token inscription of ‘Paris’ consists of exactly five letter tokens (just as Ursus horribilis can be characterized as ferocious even if some members of the species are timid). That is, Every ‘Paris’-inscription consists of five letter-inscriptions is a universal empirical claim, one that is false (caveats to follow), as are tokens of it. False, because some token inscriptions of the word ‘Paris’ consist of more than five letter tokens; examples we’ve seen from the OED include ‘Parrys’ and ‘Pareiss’. He can try to get off the hook by thumbing his nose at the lexicographers and counting ‘Paris’, ‘Parrys’, and ‘Pareiss’ as (tokens of) different “words.” That is, he can claim that he was speaking of modern orthographic forms of ‘Paris’. In that case it is much more plausible that all tokens of it will be spelled the same. More plausible, but still false. Some tokens of it are bound to be misspelled. It is not plausible to claim that misspelled tokens are never tokens, especially where they clearly function as such for important (legal or survival) purposes. A much more formidable problem for Goodman (which we already saw in chapter 3) is that his strategy leads to completely unacceptable results for linguistic theory. Consider utterances instead of inscriptions. Suppose someone says ‘Extraordinary’ consists of six syllables. By Goodman’s lights, what he has said is short for Every ‘extraordinary’-utterance consists of six syllable utterances.
Intuitively, the former is true but the latter is false, for recall that although ‘extraordinary’ consists of six syllables, not every utterance of it does. As we saw, the word is variously pronounced with six, five, four, three, or even two syllables by speakers of British English, ranging “for most British English speakers from the hyper-careful ['ekstrə'ʔɔ:dinəri] through the fairly careful [ik'strɔ:dn̩ri] to the very colloquial ['strɔ:nri].” It is once again open to Goodman to make the same move we considered above in connection with ‘Paris’, ‘Pareiss’, and ‘Parrys’, namely, to classify all these different pronunciations of ‘extraordinary’ as different “words.” But not only is this much less plausible than in the case of ‘Paris’, etc., it also opens a can of worms. For “it is very rare for two repetitions of an utterance to be exactly identical, even when spoken by the same person.” If we are to be consistent, then, we would inexorably be driven toward viewing each word token as a different “word.” To put it in Goodman’s terms, there would be no, or hardly any, “replicas” (tokens of the same type)—a consequence that no one, including, I assume, Goodman, wants. The upshot is that unless we take the extreme step outlined above, which runs so against the grain of linguistic theory,

‘Extraordinary’ consists of six syllables

is not short for

Every ‘extraordinary’-utterance consists of six syllable utterances

because it does not entail it. Thus if we view what Goodman said through a realist filter, it is false. So far we have been relying on realist semantics—that is, semantics that appeal to types. Let us leave types behind, and take Goodman at his word when he says (above) that there is “no case where a word or statement needs to be construed as a type rather than as a token.” Let us drop the realist filter.

Goodman at His Word

So instead of construing the sentence ‘Paris’ consists of five letters as being about a word, namely the word ‘Paris’, either a type or a token, let us try to understand Goodman’s underlined passage as a suggestion for revamping the language—a suggestion as to how the sentence is to be understood henceforth. But what is a ‘ “Paris”-inscription’? Presumably, a token inscription of the word ‘Paris’ (although more on this below). Does
the underlined phrase refer to the word type ‘Paris’, or some particular token? If we are not to appeal to types—and now we are not—then it must be referring to a word token. Which token? We don’t know. So let’s arbitrarily pick one. Just for definiteness, let it self-refer. (That is, it refers to the bold token of ‘Paris’ above in the reader’s copy of this book.) So Every ‘Paris’-inscription consists of five letter-inscriptions says that every inscription of that particular token has five letter inscriptions. But such a generalization is a sham; it is about only one object. To get a genuine generalization, we might try quantifying over Goodman’s “replicas,” which he explains as follows: Let us speak of words (or letters or statements, etc.) that are catalogued under a single label as replicas of each other, so that any ‘Paris’ (or any ‘I say’) is a replica of itself and of any other ‘Paris’ (or ‘I say’). (p. 263)
In Goodman 1972b (p. 438), he calls replicas “tokens of the same type.” Then we might understand Goodman 1977b as making the following proposal: any replica of this token:

‘Paris’ consists of five letters

is to be understood as short for any replica of this token:

Every inscription replica of ‘Paris’ consists of five letter-inscriptions

We now have a genuine generalization (even if we no longer have a good grip on what we’re saying). Whether it does the job Goodman wants it to do will depend on what counts as a replica. Are ‘Parrys’ and ‘Pareiss’ replicas of ‘Paris’? The choice of the word ‘replica’ suggests that all replicas look alike, or sound alike, or are similar in some other apparent way. But Goodman 1972b (p. 437) is very clear that this is not the case, for he says

Similarity, ever ready to solve philosophical problems and overcome obstacles, is a pretender, an impostor, a quack.
Similarly, he says (p. 438): Similarity does not pick out inscriptions that are ‘tokens of a common type’, or replicas of each other. Only our addiction to similarity deludes us into accepting similarity as the basis for grouping inscriptions into the several letters, words, and so forth.
Since he wants to count tokens of the same type as replicas, ‘Parrys’ and ‘Pareiss’ should count as replicas of ‘Paris’, as should misspellings of it. But
even if they are not to be counted as replicas of each other, surely the various utterances ['ekstrə'ʔɔ:dinəri], [ik'strɔ:dn̩ri], and ['strɔ:nri] are replicas of each other. But then we are back to the earlier difficulty that the second sentence does not follow from the first. (Of course Goodman can, if he wants, insist that he hasn’t been refuted—as one reviewer of this book commented—since he has stipulated that the first sentence be understood as the second—that is, that he has, in effect, stipulated that any replica of this token:

‘Extraordinary’ consists of six syllables [said aloud]

is to be understood as short for any replica of this token:

Every ‘extraordinary’-utterance consists of six syllable utterances [said aloud].

And what can one say if indeed he were to so insist, except to point out that his semantics are not those of the rest of us, and he owes it to us to explain what his deviant semantics are.) One might feel I am not meeting Goodman halfway. But there is no halfway between realism and nominalism, unless one smuggles abstract objects in the back door, usually by relying somehow on realist semantics. I shall continue, therefore, to chronicle the difficulties and perplexities that we run into when we take Goodman in a 100 percent nominalist fashion. He says that “words and statements are utterances or inscriptions—i.e., events of shorter or longer duration.” It is very odd to call a bit of ink (an inscription) that sits in a book on a shelf for forty years “an event.” What particular would not qualify as an event? Although it is more plausible for utterances, it still seems to mangle language. Some, for example, Richard Cartwright (1987, p. 37), have argued that even spoken word tokens cannot be events:

Even though words occur, they do not happen or take place; in this respect they are like numbers, diseases, species, and metaphors. Of course, a word cannot occur unless something happens. . . . But although his uttering the word is something that not only occurs but also takes place, the word he utters is not. Thus a word is not to be thought of as an event—like the Kentucky Derby . . . —which takes place via its ‘instances’. . . . [I]f a word cannot take place, then its tokens are not individual events or happenings. And so we are required to distinguish the word-token, not only from its type and from uttering that type, but also from someone’s uttering that type on some given occasion.
Moreover, suppose Goodman utters “ ‘Paris’ consists of five letters.” Does this event, this uttering, contain any “quote-events,” as Goodman would like it to? It is hard to see how.
Recall that for Goodman “a word-event surrounded by quote-events is a predicate applicable to utterances and inscriptions” so that “ ‘Paris’ ” is supposed to be a predicate. But there are no “quote-events surrounding ‘Paris’ ” in an utterance. The way we figure out that it is not Paris but ‘Paris’ that is under discussion is by contextual clues. Another problem: suppose someone says “ ‘Irkutsk’ consists of seven letters” and no one ever utters the corresponding Goodman replacement sentence quantifying over only token inscriptions. That is, no Goodman replacement sentence token exists. (Had there been one, it would have begun with a replica of ‘Every “Irkutsk”-inscription . . .’) Without types (and without countenancing meanings), there is nothing for “ ‘Irkutsk’ consists of seven letters” to be short for. Let’s face it; this will be the usual situation. Hardly anyone talks about letter-inscriptions except philosophers, although everyone talks about how many letters a word consists of. Goodman himself has supplied the necessary “event” in the case of ‘Paris’, but nearly all other such claims (like “ ‘Irkutsk’ consists of seven letters”) will be short for nothing at all. That is, all those type sentences in chapter 1? Short for nothing at all! They are no good as they stand, but what they are short for doesn’t exist. So either all those sentences lack meaning or they are all equivalent to each other. Both options are clearly absurd. For recall from chapter 4 that, according to Goodman and Quine, we don’t even understand the realist sentences until we actually produce the nominalist paraphrase. As such paraphrases for the most part don’t exist, we don’t understand them. This is truly bizarre. Now there is a way around this without appealing to types, and that’s to say that these things don’t exist, but they could exist. One can see from Goodman’s examples how to go about constructing a nominalist paraphrase. Such paraphrases are possible, although for the most part nonactual. But I doubt nominalists such as Goodman and Quine would be anxious to avail themselves of such possibilities, because of their antagonism to possibilia. The upshot is that we are deprived of the uniformity of explanation provided by realist semantics, with its copious supply of types to meet such demands.

2 The Trouble with Nominalism
We can think of Goodman’s proposal to purge ourselves of our reliance on types as taking place by a sort of three-step conceptual process. First, we deny there are abstract objects like words (‘Paris’) and sentences (‘I say’). The only words and sentences that remain are what I would call “tokens,”
and what Goodman calls “events”: utterances and inscriptions. Second, we worry about how to characterize these things, or even talk about them generally, since we ordinarily do so by means of referring to the types they are tokens of and we can no longer do so because such types do not exist. Seizing upon the technique of Quine (1961a), whereby talk of individuals (“Socrates”) is to be replaced by talk using only predicates (things that “socratize”), so too we invent/recognize a slew of new predicates that apply to events (“is a ‘Paris’-event,” “is an ‘I say’-event”). So instead of speaking of “the word ‘Paris’ ” we are to speak of “is a ‘Paris’-event,” and similarly for other word-events and sentence-events. According to this (what Armstrong calls) “ostrich nominalism,” the resulting predicates do not refer. So although “the word ‘Paris’ ” might carry a commitment to the word type ‘Paris’, the predicate “is a ‘Paris’-event” supposedly does not. Third, we worry about all the predicates that result—aren’t they abstract objects? No, Goodman assures us, they are just particular “events” too—or would be if they existed. And now for the costs of nominalism. At the end of the first step, we are left with only finitely many English sentences, since all sentences are particular (token) sentences, and there are only finitely many of them (unless we are to assume we never run out of time or English-speakers). There being only finitely many sentences entails, among other things, that although A and B might be sentences, their conjunction (disjunction, equivalence, negation) need not be. Actually, the situation is much worse. If A is a sentence—a whole sentence token, one move in the language game—and so is B, then there is no such thing as the sentence that is “their conjunction.” Because if there is a token ‘and’ between tokens A and B, then A and B were not sentences to begin with, but only parts of a sentence. At best, one can say that if A is a sentence token and B is a sentence token, then sometimes there will be a sentence token C that contains a replica of A followed by a replica of this token: ‘and’, followed by a replica of B. But this will be the exception rather than the rule. Similarly for the other rules of sentence formation. In first-order logic, for example, if A and B are sentences so are ∼A, A ∨ B, A → B, A ↔ B, and so on. All such ordinary rules of syntax go out the window. Matters are even worse in linguistics, since the rules of sentence formation are so much more complicated. Basically, the whole enterprise cannot even get off the ground. Feeling the pinch, Goodman and Quine (1947, p. 175) try to increase the supply of inscriptions for proof theory by including “not only those that have colors or sounds contrasting with the surroundings, but all appropriately shaped spatio-temporal regions even though they be indis-
tinguishable from their surroundings in color, sound, texture, etc.” This is truly a desperate measure. If they hijack the unmarked surface of a frisbee to serve as a concrete inscription of a proof of Goldbach’s conjecture (no one has proved Goldbach’s conjecture yet, but as it seems to be true, let’s assume it is provable), it will be completely unperceptible to anyone. Admittedly, the surface of the frisbee itself is perceptible. But nobody put the proof on the frisbee; nobody can see a proof of Goldbach’s conjecture on the frisbee; nobody knows any proof of Goldbach’s conjecture; and nobody can learn what the proof is by looking at the frisbee. It is so epistemically unavailable by the only means (perceptual) that it ought to be available, that it borders on mysticism to maintain that it might exist right in front of us nonetheless. Surely some sort of perceptibility requirement on linguistic tokens is in order. Recall that the main problem with realism was supposed to be epistemological. It was supposed to derive from the acausality and imperceptibility of abstract objects. Nominalism has no sort of epistemological advantage, by its own lights, if it posits unperceptible tokens. It is even worse off than realism, since the realist has some story to tell about our knowledge of propositions about abstract objects (involving reason and theory construction), which won’t help the nominalist here. And it seems likely that positing such unperceptible objects would violate whatever causal requirement on knowledge the nominalist would care to impose on the realist—for example, that there be a “suitable causal relation to a knowing subject” that such an object stands in. So what does nominalism get by relinquishing its epistemological superiority over realism? More inscriptions, but still not enough for proof theory or linguistics. Goodman and Quine (1947, p. 175) admit that “we cannot say in general, given any two inscriptions, there is an inscription long enough to be the concatenation of the two.” So it will remain the case that there might be a proof of A and a proof of B but no proof of A & B, ∼∼A, or A ∨ B. The second step (in the conceptual conversion to nominalism) should give us even greater pause. The singular term ‘Paris’ seems to peer out from its confines within the predicate ‘is a “Paris”-inscription’. Earlier, I treated it as a singular term, because if ‘is a “Paris”-inscription’ is not simply short for ‘is an inscription of “Paris” ’, where ‘Paris’ clearly functions as a singular term, then I don’t know what it means. Similarly, for ‘I say’ in ‘is an “I say”-event’. Putting single quotes around an expression turns it into a name of that expression, and not only that expression, but that expression type—or at least that is the convention as I understand it. If I am right, then Goodman is covertly relying on a theory, realism, that he maintains
is false. There are no abstract objects, according to nominalism, and so any theory that is committed to them, as a good deal of linguistic theory is, is false; ‘the word “Paris” ’ is a nonreferring expression. There is supposed to be something terribly epistemically wrong with realism: we cannot have knowledge of abstract objects owing to their acausal nature, and since that is ostensibly what (some of) linguistic theory is about, we really cannot understand the theory, or cannot know it to be true. Yet the comprehension of such predicates as ‘is a “Paris”-inscription’ and ‘is an “Every ‘Paris’inscription consists of five letter-inscriptions”-inscription’ clearly and probably essentially relies on a prior grasp of the offending realist theory, wherein ‘Paris’ functions as a singular term, and ‘Every “Paris”-inscription consists of five letter-inscriptions’ functions as a sentence. For another thing, it is not even clear that Goodman has eliminated the offending singular terms. It is as though the nominalist is preaching the need to avoid sin, but at the same time demonstrating that the only route to being a nonsinner is by first being a wicked sinner. But suppose we somehow agree to treat ‘is a “Paris”-inscription’ and the like as predicates that do not contain singular terms, and we find ourselves at the end of the third step, wherein we try to convince ourselves that we don’t need predicate types because predicate tokens will do. Intuitively, for this ontological replacement (of word types by predicate tokens) to work, there ought to be an actual predicate token for every word, phrase, and sentence that gets instantiated. That is, for every word type W and sentence type S (at least for every one that gets instantiated), there ought to be a predicate ‘is a W’ or ‘is an S’ which is true of it and all its “replicas.” The OED is said to contain one million words of English, and of course each of them is an “event” (is instantiated), in each and every copy of the OED. Each word of English ought to be either an ‘aa’-event, an ‘aam’-event, an ‘aardvark’-event, an ‘aardwolf’-event, or . . . or a ‘zymotic’-event or a ‘zymurgy’-event. But no. For there are no such predicates, or very few of them (such as the ones I just supplied), because only actual particular utterances and inscriptions count as predicates. Since there are only a dozen or so in my copy of Goodman’s article (“now”s, “very”s, “Paris-events,” “Wordevents,” “noun-events,” “here-events”) and probably darn few anywhere else even in the philosophical literature, it is safe to say that no such predicates exist for over 999,000 words of English. Yes, there could be such predicates, but I doubt nominalists such as Goodman and Quine would be anxious to avail themselves of such possibilities, given their antagonism to possibilia.
Conclusion

The upshot is that we started with a nominalist program that looked as though it might eliminate the need for type talk, but when we followed it through to the bitter end, we found, first, that replacing (infinitely many) word types by (infinitely many) predicate types and then predicate types by predicate tokens results in a paltry and woefully inadequate crop of linguistic items. Maybe a few dozen of the original words will have been replaced. The resultant ontology will not include (replacements for) all the words in the OED, much less whole sentences. Second, we also found that expanding the ontology by positing unperceptible “tokens” produces epistemological problems of at least equal magnitude to anything faced by realism, so that the motivation for nominalism dries up. It should be clear that a few dozen perceptible predicate tokens and lots of unperceptible ones do not make a suitable ontology for linguistics. Nor does the falsification of most of the linguistic rules for forming compound sentences. We turn, then, in the next chapter, to a more realistic (in every sense) account of words, one that is based on and suitable for use in linguistics.
6
Remarks on a Theory of Word Types
Enough was said in chapter 3 to suggest some of the so-called identity conditions that are relevant to words, in terms of same senses, pronunciations, and so forth.1 We may now ask the following questions: (1) What is a word type (henceforth a word)? (2) How is a word to be individuated? (3) Is there anything all and only tokens of a particular word have in common other than being tokens of that word? (4) How do we know about words (i.e., word-types)? (5) What is the relation between words and their tokens? (6) What makes a token a token of one word rather than another? (7) How are word tokens to be individuated? (8) What makes us think something, produced by another, is a token of one word rather than another? These questions are apt to run together, but for the moment, at least, we should notice that they are distinct. The reason they run together is that a certain answer to one question may determine, or appear to determine, answers to other questions. For example, if we say, in answer to (3), that all tokens of a certain word (say, ‘cat’) are spelled the same (‘c’-‘a’-‘t’), then we may be inclined to say to (6) that spelling makes a word token a token of ‘cat’ rather than some other type; and to (7) that word tokens of ‘cat’ are to be individuated on the basis of their being spelled ‘c’-‘a’-‘t’; and to (8) that we think something is a token of ‘cat’ when we see that it is spelled ‘c’-‘a’-‘t’; and to (2) that the word ‘cat’ itself is to be individuated by its spelling; and to (1) that a word type is a sequence of letters—for example, the word ‘cat’ just is the sequence of letters <‘c’,‘a’,‘t’>; and to (4) that we know about a particular word, about what properties it has, by perceiving its tokens: it has all the properties that every one of its tokens has (except for such properties types cannot have, e.g., being concrete).
Of course, for the reasons given in chapter 3, it is not spelling—nor is it phonetic signal, nor sequence of phonemes. As we saw in chapter 3, there is no natural, linguistically nontrivial projectible property that all and only tokens of a word have in common besides being tokens of that word. I will argue that a similar claim applies to members of a species. Nonetheless, I will argue that just as members of a species form a kind, so do the tokens of a word. It will be my contention that the word is what glues its tokens together, that words are important nodes in linguistic taxonomy just as species are in zoological taxonomy. Section 1 will explore this fruitful analogy between words and species, and defend the claim that not all members of a species have a theoretically interesting property in common either (other than being conspecific). Doing so will involve exploring four different characterizations of what a species is, and how similar characterizations apply to words. Section 2 will be concerned with questions (1) through (5) above. Although I will not answer the questions—it is my contention that this ought to be largely the job of the linguist—I will offer some remarks that bear on whatever the full answers might be. Questions (6) through (8), although intimately related to (1) through (5), would take us too far afield, into issues in pragmatics and how we go about ascertaining people’s intentions.

1 Kinds
The key idea was originally claimed some years ago by Bromberger (1981),2 although developed quite differently by him, namely, that uttered tokens of a word make up a kind, a real kind, just as members of a biological species make up a real kind. (It seems appropriate to follow linguists here, worrying about the spoken word first and its orthographic representation later.) I’d say: a natural kind but, first, there are obvious differences between words and species, not the least of which is that word tokens are produced by humans (directly or indirectly) and organisms only occasionally are, so that in an important sense words are not “natural” objects, but artifacts. And second, I would like to avoid taking a stand here on what a natural kind is and whether there are any natural kinds, because such a stand would almost certainly be more controversial than what I intend to say. I prefer to say that word tokens make up a real kind, a theoretically interesting kind, just as members of species do, in part because ‘real’ is the term biologists use to characterize species.3 But of course, there is no avoiding controversy altogether. I have said that there is nothing interesting all and only uttered tokens of a particular
word have in common other than being tokens of the word, and also that tokens of a word make up a real kind, just as members of a species do. Presumably, then, I am committed to the claim that there is nothing interesting (known or unknown) that all and only members of a living species have in common other than being members of that species (i.e., no nontrivial, interesting, “natural,” projectible property). Since this claim contradicts certain popularly held views, among them Putnam and Kripke’s resurrection of essentialism, it is necessary to spend some time defending it. I will try to do that now, relying on the work of writers on biological taxa such as Ruse (1987) and, especially, Dupre (1981) (see also DeSousa 1984; Hull 1965), while trying to show how the situation for words is quite similar to that for species. There is no way to do this without getting into the issue of what a species is. The problem is that there are four major ways one might characterize a species, which we might dub the morphological, the genetic, the population, and the lineage approaches.

The Morphological Approach

The morphological approach is Darwin’s, who said in Darwin 1858 (p. 52) that a species is “a set of individuals closely resembling each other.” On this view, the physical nature or overt morphology of the individual determines which species it is a member of. Grizzlies resemble each other in being large, brown, hairy, ferocious, and so forth. Similarly, utterances of the noun ‘cat’ resemble each other in being one syllable, pronounced [′kæt], and meaning either Felis catus (or any of the family Felidae or a pelt from a cat), or someone who resembles such a cat, and so on.4 The problem for this approach is the same as that for the lexicographer, namely, that invariably there is diversity among members of the kind, that none of the characteristic properties is had by all members of the kind, and that each of the properties may be had by members of other kinds. Therefore biologists who continue to use a morphological approach—certainly the right approach in the field—employ a set of properties to characterize a given species, no one of which is necessary but some cluster of which is thought to be sufficient. And of course, the lexicographer can do this too.

The Genetic Approach

It is well known that in general there are no overt morphological properties that all and only members of a species share. When Putnam claims that lemons, tigers, and elms make up natural kinds all of whose members are “the same,” he does not mean that they all share some one morphological
property; he claims (Putnam 1975, p. 239) that they all share the same “genetic code.” Let’s consider his example of elms. Elms make up a genus, not a species. Since there is reason to believe that species among themselves differ not only morphologically but also genetically—even on Putnam’s view—there is little reason to say that all elm trees share “the same genetic code.” This point could be made more strikingly had Putnam used beetles as an example: beetles make up a whole order; there were over 161 families of beetles, many more genera, and over 350,000 species of beetles as of 1987—there are more species of beetles than there are species of plants. Even if we restrict ourselves to his example of tigers, which at least do form a species, there is considerable genetic diversity; in fact, there are distinct subspecies of tigers. A number of authors have argued that there is every reason to expect that there is more diversity at the genetic level among members of a species than at the overt morphological level. Dupre (1981), for example, cites three reasons why evolution should have this consequence. First, there is a better chance of the species surviving in a changing environment if its gene pool contains variety. Second, it appears that individuals with pairs of different genes at various loci are often better adapted than individuals with the same genes. And third, “there are homeostatic developmental mechanisms whereby differing gene combinations approximate the production of the same phenotype” (pp. 84–85). So the same overt morphology in two members of a species might be caused by different genes. Whether members of species differ as much in their genetic features as in their morphological features is not something that has to be decided now. The point is that biologists who favor individuating species along genetic lines find that no particular gene type is necessary for an individual to be a member of a species, nor is any particular set of genes. At best species correspond to clusters of genes. So it appears that the same situation obtains at the genetic level as obtained at the morphological level. That all members of a species have the “same genetic code” is as much a fiction as that they all have the same morphological structure. If we all had the “same genetic code,” any one person’s DNA would have sufficed for the human genome project; instead Celera Genomics took an amalgam of five people’s, including that of J. Craig Venter—the scientist who led the effort at Celera—and it was a Big Deal for those involved. To see how big a deal, see Caplan 2002 and Hayden 2001. This is not to attribute to Putnam the claim that all humans are genetically identical, but to point to problems with the notion of “same genetic code.” Until enough humans have been “decoded” we don’t know what
is typical and what isn’t. Unpacking what “same genetic code” is supposed to mean is too big a project to embark upon here. I have tried to tackle it in part in Wetzel 2000b. Consider, for example, that by certain criteria, humans and chimpanzees have the same genetic code. By other criteria, male humans have a different genetic code from female humans. (Among other species, females are sometimes genetically much more similar to females of sister species than to males of their own species.) Some humans even have a different genetic code from themselves (as, for example, when they are hermaphroditic at the cellular level, some cells being chromosomally XX and others being XY). Since it is not even the case that all apparently normal humans have the same number of chromosomes—some people are XYY, for example, while others are X—it is hard to see how we all share “the same genetic code” (except of course in the type sense, which won’t help here). Once again, family resemblance is all there is.5 Still, the fact that members of species have a microscopic structure, one in which there are clusters of genes, might suggest a disanalogy with word tokens, which in turn might suggest that although members of species comprise real kinds, word tokens do not. But no. Every uttered word token has a phonetic structure, one that involves clusters of phones. Similarly, the fact that a word can be mispronounced, even badly mispronounced, and the result still be counted an instance of the word might be thought to spell doom for the theory that tokens of a word form a “real kind.” It doesn’t, because nature produces something analogous: so-called freaks of nature. Like mispronounced words, their abnormality and their viability are matters of degree.

The Population Approach

The third characterization of species is Mayr’s (1970, p. 12): species are “groups of interbreeding natural populations that are reproductively isolated from other such groups.”6 This characterization suggests that what every member of a given species has in common is: being able to breed with every member of the opposite sex of one’s species (at least, potentially) and with no other individuals. But sterile worker bees cannot breed with anyone. They are the offspring of successful breeding pairs, however, so we might try a more complicated property, for example, being able to breed with every other fertile member of the opposite sex of one’s species (at least, potentially) and with no other individuals, or being the offspring of two such breeders. But such a strategy won’t work. First of all, as Dupre (1981, p. 86) notes, there are many hybrids. That is, many individuals can breed with members of other species and produce fertile offspring—for example, certain primroses and
cowslips. These are distinct species, but the suggested criterion would not preserve this distinctness. Second, some species are geographically situated so that nearby members can successfully interbreed, but faraway members cannot. The phenomenon is called Rassenkreis. The lesser black-backed gull and the herring gull are separate species in Western Europe. But there are fourteen subspecies distributed around the globe at that latitude, and neighboring ones can interbreed. (Other examples are to be found among butterflies and salamanders.) And third, the classification ignores species whose mode of reproduction is asexual. Still, it might appear that problematic though the characterization is, there is nothing remotely comparable with respect to words. Surprisingly, this is not so. Word tokens could very often substitute for each other without listener comprehension being affected. (The verbal equivalent of a kidnapper’s patched-together ransom note might sound strange, but its content would probably be comprehensible; at least, rock and roll radio stations assume so.) Thus (a first approximation of) a good analogue is the property of being intersubstitutable (at least potentially) so far as listener comprehension is concerned with every other uttered token of that word type, and with no tokens of other word types. Again, this property does not hold for every token, any more than its biological counterpart did, and for analogous reasons.

The Lineage Approach

The fourth and last characterization of species that we will consider is that of G. G. Simpson (1961, p. 153): “An evolutionary species is a lineage (an ancestral-descendant sequence of populations) evolving separately from others and with its own unitary evolutionary role and tendencies.” The picture is that of a tree. This phylogenetic criterion is a hard nut to crack, and there isn’t space to crack it here. However, there is time to note two problems with it. One is that it would have the consequence that something genetically identical to a member of the bacterium Escherichia coli, say, cooked up in the lab from purely chemical components, would not be a member of E. coli. This seems counterintuitive. The other problem is one described by Dupre (1981, pp. 88–89), who claims that “any sorting procedure that is based on ancestry presupposes that at some time in the past the ancestral organisms could have been subjected to some kind of sorting. . . . In short, the phylogenetic criterion must be parasitic on some other, synchronic, principle of taxonomy.” The tree would have no trunk, would be nothing but twigs, without something to explain the clumps. And in fact, biological “cladists,” as they are called, rely on more standard characterizations of species to explain the clumps.
Notice that there is a very natural counterpart to the cladistic characterization of species for words. To parrot Simpson’s characterization of species: “a word is a lineage evolving separately from others and with its own unitary linguistic role.” Species descend from other species by a gradual process, and words also descend from other words. Species come into existence, and sometimes become extinct; so do words. This view has much in common with Kaplan’s (1990) theory. Of course, the process whereby one or more word tokens spawn another word token of the same type is quite different from sexual or asexual reproduction; part of the process may occur in people’s heads, when they learn a word by means of various tokens, and then go about producing tokens of their own. The point is that there exists a process that provides words with a lineage, on analogy with species. The long and short of it is that biologists do pretty well without the supposition that there is some biologically interesting property that all and only members of a species share, even while maintaining that a species’ members form a real kind. Therefore the fact that there is generally no linguistically interesting property shared by all and only tokens of a given word—other than being tokens of the word—does not count against the theory that such tokens form a real kind, one that linguists, especially lexicographers, can study.7 This is not to say, by the way, that there are no words all of whose tokens share some natural property, nor that there are no species all of whose members share some natural property. I assume that all tokens of the phrase ‘nattering nabobs of negativism’ originate with William Safire, when he was a speechwriter for Spiro Agnew. No doubt there are some words whose tokens share a lineage back to some one occasion. Some biologists think that all living humans are the descendants of one woman, dubbed ‘Eve’, because of the uniqueness of our mitochondria. Perhaps all tokens of the letter ‘A’ are descendants of some Phoenician scholar’s token of aleph. However, it would be an act of faith to believe that all species are similarly situated, or all words. This is also not to deny that the following token is a token of the word ‘color’ because of its spelling and my intention to produce a token of ‘color’:

color

Of course it is. The fact that not all tokens of ‘color’ are spelled ‘c’-‘o’-‘l’-‘o’-‘r’ or spelled at all does not impugn the fact that many are. Nor does it impugn the fact that spelling is an important property of an inscribed
word. It is just not the only important property—and its importance is overrated as the following bit of fiction makes clear: Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Amzanig huh? yaeh and I awlyas thought slpeling was ipmorantt!
Moreover, as the above paragraph also illustrates, context is very important. As Firth (1957) said, “you shall know a word by the company it keeps.”

2 Questions (1) through (5)
Thus I have answered ‘no’ to question (3) in chapter 3 and have argued that (spoken) words are “real kinds” just as species are. But what of the rest of questions (1) through (5); what might be said in response to them? It is one of the theses of this book that spelling out comprehensive answers to these questions is really a job for the lexicographer, not the philosopher. It is the lexicographer who must flesh out answers to such questions as “how does a word come to have precisely the properties it does on the basis of the tokens it has?” or “what makes a token a token of one word rather than another?” Nonetheless, I hope I will be permitted to make certain general remarks that are relevant to philosophy. The most important conclusion we should draw from the fact that there is no theoretically interesting property that all tokens of a word have in common (other than being tokens of that word) is that what is important about a word token is what type it is a token of. Word types are pigeonholes by means of which we classify tokens; a word type alone unifies all its tokens in the absence of any observable natural physical “similarity” that all the tokens have. That undeniable feeling that the tokens are all somehow “similar” rests on two facts: first and foremost, that they are all tokens of the same linguistic type, and second, that many tokens are similar to many other tokens, for example in spelling, pronunciation, or sense. Goodman was right to reject similarity as “a fake, an imposter”; it cannot do the work asked of it by many nominalists. The tokens of a type do not all sound the same, although many of them do; they do not all share the same sense, or have the same number of syllables, and so on, although many of them do. But they are all instances of the same word.8 This is such an important
property that it dwarfs every other similarity that may be observed among some of the tokens. Consider, for example, that for most practical purposes what type it is a token of is generally the most important thing to ascertain about a linguistic token, be it a word, phoneme, or sentence. If a person is listening to another’s spoken words, or reading a book, she is encountering word tokens; but most of the time the only thing she needs or wants to know about them is the words they token; their other physical characteristics are of no importance beyond enabling the token to function as an adequate vehicle for conveying the desired message. Whether someone whispers “your spouse is unfaithful” or writes it down for you is of little moment. The medium is not the message; the message is the message. The tokens are instances of the same type, and this alone is what unites all of them. So the word (type) itself is very important—a very important entity. This is evident from the fact that we have names for words (e.g., ‘eleemosynary’)9 and definite descriptions for them (e.g., ‘the noun “color” ’); words are values of variables in linguistic theory and whatever theory of language underlies common sense (“Shakespeare had a vocabulary of 30,000 words”). It might be objected, along nominalist lines, that just as the notion of being the same shape as can be “cashed out” in terms that do not refer to shapes as entities but only to entities that “have shapes” being similarly shaped, so the notion of “the same word” can be cashed out without referring to words (types) as entities but only to tokens. But the usual means of doing this is in terms of similarity—similarity to other tokens, to an exemplar token, similarity in this or that respect, and so on—and, as we saw earlier, similarity is precisely what cannot be found among all tokens of a word. But we also saw that much the same holds of species: a blond grizzly bear cub appears morphologically more similar to a blond black bear cub than it does to an adult of its own species. Females of sister species are sometimes both morphologically and genetically more similar to each other than to males of their species. A severely congenitally deformed human may be less similar to most other humans than chimps are to us. And so on. Family resemblance is the best we can do. We need the concept of the species to unify its instances (whatever, precisely, a species may be), and cannot get by with the concept, say, of being merely morphologically or genetically similar, as this suggestion would have it. Does this mean that the concept of a species is empty, and the concept of a word is empty too? Not at all. In the case of species, there are rival accounts of what a species is and the properties that unite members of a species together (same morphology, genetics, interbreeding population, or
lineage). One might find here reason to adopt a “realistic pluralism” approach, like that of Dupre (1999). The point is that far from being an empty concept, the concept of a species is too rich. It is in that regard rather like that of a person. There are rival accounts of what a person is, and lack of agreement on when x is the same person as y, but that does not show the concept of a person to be empty. Similarly for words. As mentioned earlier, in addition to the one that I am focusing on, the lexicographical one, there are different sorts of words—including orthographic, phonological, morphological, lexical, grammatical, onomastic, and statistical. If we think of meaning in terms of inferential semantics, then claims about persons, species, or words have the content they do in terms of their inferential connections, and in all three cases, such connections are plentiful.

Question (1): What Is a Word?
That is, what is a word type (henceforth a word)? A word is an abstract theoretical linguistic entity, one that has at least one meaning, at least one pronunciation (defined in terms of phonemes), may have a spelling, and has instances some of which are physical objects/events. They are theoretical entities in that they are postulated by, and their existence is justified by the success of, an important scientific theory, just as species are. In the case of words it is linguistic theory. The linguist also quantifies over word tokens; but the half million words with entries in the OED are not tokens. Since they are objects, but lack a unique spatiotemporal location, they are abstract objects. So, too, for similar reasons, are the indefinitely many sentences of a natural language, whose rules of generation the syntactician attempts to formulate explicitly. Phonemes are also theoretical entities; McArthur (1992, p. 770) puts the phoneme inventory of RP English, for example, at forty-four: twenty-four consonants and twenty vowels. Linguistics, as we’ve seen, abounds in other types as well; to name but a few mentioned in the one article on phonology we discussed in chapter 1: stems, affixes, morphemes, derivations, consonants, letters, complexes of phonetic features, symbols, markers, morphemes, its functional structure, its thematic role, and its syntactic and/or morphological context. Words (and other linguistic types), although not categorized as such, also figure prominently in our commonsense theory of language, absorbed not only at our mother’s knee but apparently to some extent in the womb (in the behavioral sense, at least, if not in the full-fledged cognitive sense). In one study, two days after they were born, infants preferred listening to a paragraph they had often heard while in the womb to other paragraphs.
Month-old infants have been shown to be partial to the phonemes of their caretakers’ language. No doubt the infant comes to recognize the phoneme inventory of his caretakers’ accent (without necessarily being able to reproduce all of them), by means of observed similarity among some percentage of the phoneme tokens (and nothing said above contradicts this). These sound patterns are his first linguistic types. Later he pairs up sound patterns with meanings and learns to understand his first words/sentences. But even at this early stage phonemes and words are treated as entities in his intentional linguistic space,10 associated with but no longer identical to specific sound patterns. Even though he may say ‘tub’ for ‘cub’, he rejects an adult’s interpretation of it as ‘tub’ since he knows what word he intended to say, and he doesn’t hear an adult’s ‘tub’ as “the same word.” The preschooler is still limited as to how much variation in sound he can tolerate. (An American thought his new kindergarten classmates in Scotland were speaking French, for example.) But gradually the pigeonhole that is a word gets broadened to include the quite different sound patterns presented by other accents, and other meanings. Meanwhile, if lucky, the child has been learning his letters (or logograms). At first, he may think of them merely as archetypal shapes. Yet eventually he comes to think of ‘A’, for example, as the first letter of the English alphabet, something that, although still representable by an archetypal shape, can also be represented by any of the dissimilar shapes exhibited in chapter 3, or by a character of six dots in Braille, or by a dot followed by a dash in Morse code, and so forth. He perceives that the critical thing isn’t the having of a particular shape for a particular letter, but that there be twenty-six distinct patterns of shapes, or dots, or smoke signals, and a comprehensible one–one correspondence between them and the twenty-six letters. By the time he can decipher other people’s handwriting, the initial concept he had of a word as a specific sound-pattern-cum-meaning has been superseded by something far more abstract and encompassing (even though his initial conception may persist as an association). However, all of this is possible only if he operates with a linguistic space in which words are among the “points.” A word type, then, is an abstract theoretical linguistic entity, one that has at least one meaning, at least one pronunciation (defined in terms of phonemes), may have a spelling, and has instances some of which are physical objects/events. But how are words to be distinguished from other meaningful, pronounceable, abstract linguistic entities that have instances, like morphemes, phrases, and sentences? According to the OED (Murray 1971, p. 3816), a word is “a combination of vocal sounds, or one such sound, used in a language to express an idea (e.g., to denote a thing,
attribute, or relation), and constituting an ultimate minimal element of speech having a meaning as such.” But this helps little. Morphemes are even more minimal meaningful elements of speech than many words; and Quine (1960), among others, has made a case for sentences being the minimal meaningful units. Stating a criterion for distinguishing words from nonwords is a surprisingly difficult task, and will not be attempted here, as it is the linguist’s job to do it. Its difficulty is obscured by the fact that nowadays words are separated by spaces in print (in languages with phonological systems of writing). A moment of thought, however, shows that there are no spaces between words in speech; speech is continuous. And speech came first. McArthur (1992, pp. 1119–1120) in The Oxford Companion to the English Language claims that in early alphabetic writing, there were no spaces—letters were written one after another; and so “in a real sense, the first orthographers of a language make the decisions about how words are to be perceived in that language;” and “in oral communities, there appears generally to be no great interest in separating out ‘units’ of language.” This is not to say that the decisions of the early orthographers were completely arbitrary; rough guidelines for cutting up speech into words are available11 and may have been employed. The point is that even what constitutes a word—the ultimate minimal meaningful element of speech—is not a simple matter of looking and seeing, as current spacing would suggest, but is a theoretical matter, requiring a linguistic theory, however rudimentary.

Question (2): How Is a Word to Be Individuated?
Question (1) had to do with how words differ from nonwords—that is, with the so-called criterion of application of the word ‘word’. Question (2) has to do with how words differ from each other. Here again theory does the work. Since linguists are the word experts, (2) amounts to: how do linguists individuate a word (type)? What do they regard as the identity conditions for words? As we saw, among the factors lexicographers consider are etymology, phonemic analysis, part of speech, sense, spelling, past usage, and so on. This suggests as a first rough approximation: x is the same word as y if and only if x and y are the same part of speech, have the same etymology, same senses, same pronunciations, and same forms (spelling) if any. But there can be disagreement as to how these factors are to be weighed, in general and in particular. It’s similar to the problem of individuating species in biology. Just as there are different theories as to what makes up a species (as we have seen), there can be different theories as to what makes up a word. There can also be tough calls—as, for example, between saying there is one species or two sister
species, one word or two. As for differentiating the all-important senses, Hanks (2003, p. 51) sums up the current uncertain state of affairs in lexicography as follows: No generally agreed criteria exist for what counts as a sense, or for how to distinguish one sense from another. In most large dictionaries, it might be said that minor contextual variations are erected into major sense distinctions. In an influential paper, Fillmore (1975) argued against “checklist theories of meaning,” and proposed that words have meaning by virtue of resemblance to a prototype. The same paper also proposed the existence of “frames” as systems of linguistic choices, drawing on the work of Marvin Minsky (1975) among others. These two proposals have been enormously influential. Wierzbicka (1993) argues that lexicographers should “seek the invariant,” of which (she asserts) there is rarely more than one per word. This, so far, they have failed to do; nor is it certain that it could be done with useful practical results.
Moreover, theory can change. To use Quine’s (1969b, p. 128) example (even though it involves a higher taxonomic class than species): in the old days, one would have classified marsupial mice with mice, rather than with kangaroos; but no longer. Recent evidence from molecular biology suggests that guinea pigs aren’t rodents. Words too can be reclassified; for example, new facts about their etymology can become known. ‘Goon’, for example, was thought to date to a 1930s Popeye comic strip character, hulking, hairy Alice the Goon, but when earlier uses of the term were discovered, it was traced to a much older English dialectal word ‘gooney’, meaning ‘simpleton’, a term that sailors successfully applied to the albatross. The long and short of it is that the identity conditions for words are highly theoretical, depending on lexicographical theory. Only a lexicographer ought to try to formulate those conditions; I will not. Even the matter of which words belong to a language is a difficult one, as indicated by Murray’s (Murray et al. 1971, p. x) illuminating discussion of the matter in the introduction to the OED.

Question (4): How Do We Know about Words?
All of the above assumes that we know a lot about words. But how do we? How could we know which word types there are and what properties they have? Part of the answer is that we causally interact directly with linguistic tokens, some of them word tokens. This is how we learn language as children. We perceive tokens and what properties they have—a certain pronunciation, a certain meaning—and infer that the type has these properties too—maybe even that the token has it because the type does. But also, a type can be seen to have certain properties because some of its tokens
do—evident, for example, in the way English words acquired their spellings from early publishers of English. But a linguist’s knowledge of words is also based on more indirect causal connections with the tokens that ground the properties of the types. For example, part of our evidence that ‘ornery’ means common, mean, or low consists in a passage from Huckleberry Finn in which Huck refers to himself as “ignorant, and so low-down and ornery.” Yet this passage is not a token. Hardly anyone has seen the token of ‘ornery’ (assuming there was one) that Twain actually put on paper. Similarly, we know that until the year 1000, the word ‘man’ was not ambiguous as it is now; it meant only human being in Old English while the word ‘were’ meant male person (surviving nowadays only in ‘werewolf’). The word tokens in virtue of which this is true have all, or nearly all, perished, having been produced before the year 1000. The linguist is relying here on indirect evidence (like passages from Beowulf), evidence that can be just as remote from the source as, for example, the anthropologist’s, when she posits a new species of hominid on the basis of some skull fragments. Although this is only part of the answer to how we know about words, and a relatively innocuous part, it faces two challenges. First, if, as I claim above, there is no natural, projectible property that all tokens of a word have in common other than being tokens of that word, how does a type come to have a particular property (a certain pronunciation, e.g.) on the basis of its tokens, since not all the tokens have that pronunciation? Second, even if all the tokens of a particular word have a certain property, how do we know the type has it? We can perceive (or at least be indirectly causally related to) some of the tokens; but how do spatiotemporal creatures like ourselves causally interact in any way with types, which are abstract objects? That is, how do we address Benacerraf’s epistemological problem (other than by disputing the causal requirement on knowledge that it suggests, as we did in chapter 2)? This brings us to the rest of the answer, which is, roughly, theory. We approach the world with the assumption that there are types of things, kinds of things, and a desire to pigeonhole everything into its kind. This is rational because kinds/types are the key to (certain) laws and statistically significant nomological generalizations; properties cluster around kinds and so permit probable inferences. So although not every token of the letter ‘A’ is physically similar in a Euclidean way to this token
a
many are. Not every printed token of ‘color’ is spelled ‘c’-‘o’-‘l’-‘o’-‘r’, but many are. Not every spoken token of ‘extraordinary’ has six syllable tokens,
but many do. We can characterize the kind/type as having a certain property (being six-syllabled, e.g.) even though not every member of the kind has the property. Here is where the analogy to zoology is helpful, because clearly that is done in zoology. Not every so-called black bear is black; not every grizzly is four-legged, brown, or has a hump; not every mother grizzly has a litter of fewer than four cubs. It may be permissible to characterize the species in terms of such properties anyway. Just which properties commonly found among the specimens are to be predicated of the type is a function of the principles of numerical taxonomy and of overall biological theory. On this complex topic, with some of its roots in value theory (as to what is “normal,” e.g.), nothing will be said here, since I do not pretend to be a zoologist or a lexicographer. However, there are some standard ways this is accomplished that are distinguished by Krifka et al. (1995, pp. 78–85). One way, they claim, is the collective property interpretation: “if the predicate applies collectively to all existing objects belonging to the kind, the property can be projected from the objects to the kind” (p. 79), as is done in the following sentences we have already come across:

The U.S. range of the grizzly bear is now Montana, Wyoming, and Idaho.

Under the flight path of A-10 warplanes, the Florida gopher tortoise thrives in a habitat of longleaf pine.

Here the predicate applies derivatively to the kind from the fact that the predicate applies to the mereological sum, as it were, of the members. Second is the average property interpretation, as for example in the following example of theirs:

The American family has 2.3 children (p. 78).

Third is the characterizing property interpretation, which we have already discussed, in, for example,

The potato contains vitamin C (p. 78).

Fourth is the distinguishing property interpretation, in, for example,

The Dutchman is a good sailor (p. 78).

Krifka et al. (1995) point out that although ‘The potato contains vitamin C’ has the same truth conditions as the characterizing sentence ‘Potatoes contain vitamin C’, ‘The Dutchman is a good sailor’ does not have the same truth conditions as ‘Dutchmen are good sailors’—since few are—but rather those of ‘The Dutch distinguish themselves from other nations by having good sailors’.
Fifth is what Krifka et al. (1995) call the representative object interpretation, wherein “if the object in the situation described is only relevant as a representative of the whole kind, then a property can be projected from the object to the kind” (p. 83). They illustrate by means of the sentences:

In Alaska, we filmed the grizzly (p. 78).

Be quiet—the lion is roaming about! (p. 78).

Sixth is what they call the avant-garde interpretation, wherein “if some object belonging to a kind has a property which is exceptional for objects of that kind, the kind can be assigned the same property” (p. 83). Their example is

Man set foot on the moon in 1969 (p. 78).

Last is the internal comparison interpretation, which involves “a comparison of the specimens of a kind along a certain dimension of their occurrence” as, for example, in

The wolves are getting bigger as we travel north (p. 78).

Thus in many cases, one extrapolates from properties of the tokens, individually or collectively, to properties of the type. However, it is important to note that even if the overwhelming majority of the tokens have a property, it does not entail that the type has it. Color, for example, might be too variable across most mammalian species—even when a majority have a certain color—to be used to characterize a species; but perhaps number of legs, means of reproduction, or having a hump is not. Moreover, there are properties had by each and every token (e.g., having a unique spatiotemporal location) that are not had by the type. On the other hand, the type might have a property that only a minority of the tokens have. (E.g., the European starling is native to Europe whether or not most members of the species will have been born there. The word ‘color’ is spelled ‘c’-‘o’-‘l’-‘o’-‘r’ and is also spelled ‘c’-‘o’-‘l’-‘o’-‘u’-‘r’, even though a majority of written tokens, to say nothing of spoken tokens, conform to at most one of these spellings.) There are also properties of the type that are not had by any members individually and that apply only to kinds, for example,

The passenger pigeon is extinct.

The grizzly bear is endangered.

It is standard scientific procedure to predicate properties of a type/kind on the basis of their applying to members of the kind, even if not all members have the properties, and which properties get predicated is a matter of the
relevant scientific theory. The linguist performs the same feat with words that the biologist does with species, using both data and linguistic theory. Yet on the basis of what has been said so far, one might conclude that our knowledge of words is one-directional: always from tokens to types—that we infer the type has certain properties because the tokens do. If that were the case, types might appear to be superfluous. (Sentences about them would just be equivalent to ones about some, most, or all tokens, or all normal tokens, considered individually or collectively.) Moreover, our second question would kick in with a vengeance: what justifies these inferences from tokens to types—that is, how do we know the abstract types have certain properties merely because the concrete tokens do? Types are far from superfluous. Suppose one points to a printed word token of ‘color’ and says truthfully “this is pronounced [kɒ'lər].” Now either the reference of ‘this’ is to the printed token of ‘color’ or to the word type ‘color’. If the latter, then the type is clearly not superfluous. If the former, then, given that all one has done is to point to one token and produce another, different, spoken token of ‘color’, how is this supposed to show that the original bit of ink has a pronunciation? The answer must be that insofar as a printed word token can be said to have a pronunciation, it is in the sense that it is a token of a type that has that pronunciation (even though the type may have it because some of its uttered tokens do). The type affords commonality for the spoken and written tokens together. So once again the type is not superfluous. Similarly, a spoken word token, which is an utterance, has a spelling only insofar as it is a token of a type that has that spelling. If an utterance has a spelling and a printed word a pronunciation, then tokens borrow some of their properties from their respective types. And in that case the inferential relation between type and token is bidirectional: not only does the type have some properties because the tokens do, but the tokens have some properties because the type does. That is, the type mediates certain properties, getting them from some tokens, and conferring them on other tokens. This seems to be what is going on with meaning. In order to communicate what she wants via a token, characteristically a speaker must assume that her hearer associates a certain meaning with the type. She may then exploit that association in various ways—literally, metaphorically, sarcastically, and so on. In the most straightforward association, that of identity, the token will borrow the meaning of the type; in sarcasm, the token will mean the opposite of the type. Theory gives meaning to the physical objects/events by means of the types. The tokens themselves are meaningless in the absence of a theory, at least a rudimentary one, about what
these things are. A monolingual speaker of English perceives every physical characteristic of a token of Farsi as well as a native of Iran; but, without knowledge of the types being tokened and what they mean, she understands nothing. (Although there has been much discussion about whether there are signs that “naturally” mean something on their face, surely most word tokens nowadays do not.) The knowledge of the linguistic types of a language and how they may be tokened is part of the theory we bring to the physical particulars to make sense of them. A monolingual speaker of English processes the physical sounds of her own language so smoothly and automatically that it is nearly impossible for her to listen to English speech in a familiar dialect and not understand what is being said through the types being tokened. By and large, the meaning of a token sentence is borrowed from, or is partly a function of, the meaning of the sentence type. Here is another example, a very common sort of example, of a type’s properties being transmitted down to a token. Speech signals are often imperfect, with mispronounced words, coughs, ‘uh’s and so on. Moreover, normal speech is very fast and pronunciation so informal that over half the words cannot be recognized in isolation (Crystal 1987, p. 147). How then do the words get recognized? In one study, when presented with the utterance it was found that the *eel was on the axle, where the phoneme at * had been replaced by a cough, people said the *eel word was ‘wheel’. When presented with ‘it was found that the *eel was on the shoe’, they said ‘heel’; with ‘it was found that the *eel was on the orange’ they said ‘peel’; and with ‘it was found that the *eel was on the table’ they said ‘meal’ (ibid.). The grammatical and semantic context of the tokens they were able to decipher pointed to a certain sentence type, and this led them to infer the existence of, or even to “hear,” word tokens that were missing key phonetic features. Here the very identity of what the word tokens are thought to be depends on the perceiver’s categorization of the sentence type, whose properties then are attributed to the token. Similarly, if one knows that Samuel Adams signed the Declaration of Independence, and finds the surname ‘Adams’ (preceded by something other than ‘John’) on the document, one can infer that the barely recognizable marks in front of ‘Adams’ are ‘Samuel’ or an abbreviation thereof. Hence types are far from superfluous. Not only do they mediate properties of their tokens (when they have tokens—some types, some very long sentences, e.g., never will have tokens), but tokens are not tokens without
types. A word token is just a meaningless blob of ink, or a noise, unless there is a word type of which it is a token, just as someone can be a sister only if there is a person of whom she is a sister. Being a token is a relational property, involving at least a type and a language, and probably various linguistic conventions governing tokening as well. This points to an important difference from biology. Compared to large mammals, or even bacteria, token word utterances are ephemeral creatures, not of much interest in and of themselves. Their types are of much more interest to linguists, especially as types can be combined in indefinitely many ways to produce other linguistic objects, unlike species. Moreover, words are artifacts, tools for communication. They are based on conventions—between word and meaning, for example—that are largely arbitrary; so there is an arbitrary element in linguistics that has no correlate in biology. Linguistic tokens are artifacts, our own inventions, and so in a sense are the types and the theory we have of them. This is not to say that linguistic types are not eternal abstract objects. But they are the abstract objects we specify them to be—or, rather, that the theory underlying our use of language takes them to be. They may be specified one at a time, as when we refer to the word ‘eleemosynary’, or all at once, as when a linguist specifies the sentences of English by means of a set of recursive rules. So there really is no mystery about how spatiotemporal creatures like ourselves come to know about these abstract linguistic objects, what properties they have, and so on. The causal theories of reference that rule out the possibility of knowledge of abstract objects are false. The 1960 Chevy Impala has tailfins because it was designed to. Beethoven’s Fifth Symphony is in the key of C minor because that’s the key he chose to write it in. The Fifth Symphony is the x such that x is a symphony in C minor, among other things. It answers to certain properties specified by Beethoven. That’s its job. It is senseless to ask whether it might “really” be in B-flat major, or in no key at all. Similarly, although the word ‘color’ was probably not consciously coined by some one person who got to prescribe its pronunciation, its job is to be the thing that is pronounced [kɒ'lər] and not [ik'strɔ:dn̩ri]. There is more to the story of how we know what linguistic types there are and what their properties are than has been gone into here, of course, especially concerning linguistic types that have no tokens, but it is time to tackle question (5).

Question (5): What Is the Relation between Words and Their Tokens?
The relation is instantiation; words have instances. The instantiation relation is too fundamental to be analyzed further. But we can demarcate word
tokens as those instances (of types) that have a unique spatiotemporal location and form a kind. Moreover, we can say that a token instantiates a type if and only if (speaking generically) it instantiates the relevant constituent properties of the type. So, for example, tokens pronounced [kɒ'lər] are tokens of the word ‘color’, ceteris paribus. (N.B. the previous is a generic sentence that admits exceptions.) Furthermore, having instances means that types are universals. Yet they differ from many other universals (e.g., properties) in that, as we have seen, words are objects according to the commonsense and scientific theories we have about them—values of the first-order variables and referents of singular terms—rather than properties. (This is not to say that there are no properties involving words, e.g., being a token of ‘eleemosynary’. Nor is it to say that there are no worthwhile theories in which a classic property like being red, for example, is one of the values of the first-order variables. It is just that these latter theories are less indispensable than our linguistic theory.) And since they are objects according to the theory, they have properties according to the theory. And the remarkable thing about these universals, these types, is that unlike universals that are not types, they share many of the properties of their instances, as we have seen; they model their tokens (if they have tokens). That words are abstract entities (in commonsense and important scientific theories) and also universals suggests that this is what all types worthy of the name have in common. Certainly it is true of other linguistic types, but a bit of thought shows it to be true of at least species and works of art, also.

Conclusion
In chapter 3 we saw that the only linguistically interesting, projectible property that absolutely all the tokens of a word have in common is being tokens of that word. This makes the word a very important object, unifying as it does all its tokens. In the current chapter I have argued that the tokens of a word nevertheless form a kind with a family resemblance among them, just as members of a species do. And just as what a species is, what species there are, and how one species differs from another are theoretical and not observational matters, so it is with words. Words are very important theoretical objects; we can’t make sense of the linguistic world without them. Thus concludes my case for realism, for the existence of types as abstract objects. In the next chapter, chapter 7, we will discuss a prima facie problem for realism.
7 A Serious Problem for Realism?
It’s all well and good to argue, as I have in the first five chapters, that the case for realism is better than the case for nominalism, and that we’re better off countenancing types, as I’ve argued in chapters 5 and 6. But if a philosophical position is internally inconsistent or incoherent, it is a failure. Several substantial arguments have been offered (e.g., by Simons [1982] and Lewis [1986a]) to the effect that realism about types is incoherent. In section 1 below, our attention is restricted to the case of expressions. An alleged problem about occurrences of expression types is presented and then solved. In section 2, the problem and discussion are extended to that of occurrences of types generally. Section 3 examines David Lewis’s arguments against types and defends my conception of them against his objections. Section 4 consists of a few concluding remarks.

1 What Are Occurrences of Expressions?
The alleged incoherence of realism about expression types is best brought out by means of a puzzle. Consider the term ‘Macavity’. It occurs three times in the line

(*) Macavity, Macavity, there’s no one like Macavity,

(“He’s broken every human law, he breaks the law of gravity”). The line itself occurs three times in T. S. Eliot’s (1952, p. 163) poem “Macavity: The Mystery Cat.” Equivalently, we might say that there are three occurrences of ‘Macavity’ in (*) and three occurrences of (*) in “Macavity: The Mystery Cat.” So far, so good. The trouble arises when we inquire into just what an occurrence is. In On Universals, Wolterstorff (1970, p. 17) says that “occurrences of sentences [words or sequences of words] are Peirce’s tokens.” Peirce (1931–58, 4.423) says that tokens are “one time happenings or spatio-temporal objects”; they are to be contrasted with types. Nowadays
types are usually construed as I am so construing them: abstract and unique. There is only one type ‘Macavity’, one type (*), and one poem by Eliot “Macavity: The Mystery Cat,” although there are many concrete tokens of each of them. (We assume that the term, the line, and the poem are types, not tokens.) Since (*) contains three occurrences of ‘Macavity’ it contains three tokens of ‘Macavity’. But this is impossible, since (*) is abstract. It might appear that we can circumvent the difficulty by postulating a higher, second-order type ‘Macavity’, but this violates the proviso that ‘Macavity’ is unique. Something like the above argument was given by Simons (1982), who claims, in addition, that second-order types would not be sufficient anyway, because in Eliot’s poem “Macavity: The Mystery Cat” the type (*) itself “would have to occur [thrice], so we need third-order types, and so on. The regress thus started is both uneconomical and vicious, because there is no point at which we reach unique types which account for the multiplicity of like tokens” (p. 196). An even simpler way to pose the problem is as follows. Assume that every word is a word type or a word token, and that (*) is a line type. It consists of seven words. Seven word types, or seven word tokens? Not seven word tokens, since tokens are concrete and (*) is abstract. Then it must consist of seven word types. But this too is impossible because there are only five word types of which it might consist. The thing to do (and it is not a new idea) is to jettison the belief that occurrences are tokens. For then we can say that (*) consists of, not seven distinct word types or seven distinct word tokens, but seven distinct word occurrences. But then the problem is to state just what an occurrence is, if it is not a type or a token. A related (or perhaps the same) problem is to explain how it is possible for something to occur more than once—the same numerically identical something, and not two tokens or instances of it. Quine, of course, was aware of the need for an account of what an occurrence of expression x in expression y is, and in Quine 1940 (p. 297) proposed what he called an “artificial but convenient and adequate” definition: “an occurrence of x in y is an initial segment of y ending in x.” Unfortunately, as Simons (1982, pp. 196–197) points out, this is inadequate, because, for instance, it incorrectly identifies the second occurrence of ‘Macavity’ in (*) with the first occurrence of ‘Macavity, Macavity’ in (*); nor is identifying an occurrence of x in y with
a terminal segment of y beginning with x any better, because it would incorrectly identify the second occurrence of ‘Macavity’ in ‘Macavity Macavity Macavity’, for instance, with the first occurrence of ‘Macavity
Macavity’ in ‘Macavity Macavity’. Simons (1982) also considers amending Quine’s proposal so that an occurrence of x in y is the ordered triple <y, the initial segment of y ending in x, the terminal segment of y beginning with x>—so that the first occurrence of ‘Macavity’ in (*) would be <(*), ‘Macavity’, (*)>—but rejects it on the grounds that “these objects are themselves no longer expressions” and “the qualitative equality of different occurrences of the same expression remains unrespected” (p. 197). This first ground for rejecting Quine’s amended proposal assumes that occurrences of expressions are themselves expressions. However, it is not clear that the occurrence of a word, for example, should count as a word. It is true that if it were to count as a word, then the puzzle mentioned three paragraphs above would easily be solved: (*) consists of seven words; the words themselves are not types or tokens but occurrences, that is, not every word is a type or token. But this solution is not forced on us (as we shall see shortly), and it might make good philosophical sense to distinguish words from occurrences of words—just as it might be a good idea to distinguish frigatebirds from their occurrences in England. The OED (Murray et al. 1971, p. 1971) lists two senses of ‘occurrence’: that which occurs and the fact of occurring. Perhaps both senses get employed in discussions of words and other expressions. But it might be impossible to have a reasonable philosophical account of what an expression occurrence is that applies to both senses. Thus although occurrences of expressions are not expressions in Quine’s amended proposal, this is not reason enough to reject the proposal. The objection that Quine’s amended proposal doesn’t respect the fact that different occurrences of the same expression are “qualitatively equal” has more bite. It is hard to say what the “qualitative equality” of the first occurrence of ‘Macavity’ in (*) with the third amounts to; but presumably if pressed one would say something like: “their only differences are positional: one is first, the other is last; one comes before the other; one comes immediately after the only occurrence of ‘like’, the other doesn’t; and so on.” It does not appear that the same can be said for <(*), ‘Macavity’, (*)> and <(*), (*), ‘Macavity’>, which agree only in their first components. Of course, any definition that breaks new ground is likely to have some unnatural consequences that ought, perhaps, to be tolerated if the definition proves convenient. But we ought to reject Quine’s amended account of occurrences since there is another account, equally convenient and much more natural. It is this. Any occurrence of expression x in expression y will be the nth occurrence of x in y, for some n ≥ 1. Every occurrence of x in y has these three parameters, and they uniquely individuate the occurrence. Thus
The nth occurrence of expression x in expression y = the mth occurrence of expression x′ in expression y′ iff n = m and x = x′ and y = y′ and x occurs in y at least n times. Obviously we need to spell out what ‘x occurs in y at least n times’ means. In order to do so, we need an account of what an expression is. Here is one—not the only one, maybe not even the best one, but a usual one for a logic book. We could start with an initial stock of words, but to make our task a bit easier, let us start instead with the English alphabet as our initial stock of symbols. One thing that should be clear by now is that an expression type can be neither an aggregate of letter types (if there could be such a thing) nor a set of letter types. But we know this anyway by observing that ‘dog’ is a different expression from ‘god’. Clearly also the order of the letters occurring in an expression determines the expression’s identity. Following the usual practice for a first-order language, we let an expression be any sequence of alphabet letters. (The well-formed formulas will be a proper subset of the expressions.) The usual definition of a sequence is that it is a function from an initial segment of the natural numbers (0, 1, 2, . . . , n) to a set (here, of letters and an empty space); its length is n + 1. Expression types would then be sequences of letter types, and expression tokens sequences of letter tokens. In a sequence it is of course possible for the very same individual, abstract or concrete, to occur more than once. Functions need not be one–one. In the sequence of New Jersey million-dollar lottery winners, the same person occurs twice, remarkably enough. Similarly, the same letter type or word type can occur twice in a sequence that is an expression type; (*) consists of seven words in that it is a sequence of length seven, each member of which is a word type. Moreover, letter tokens can occur more than once in a sequence (for example, the sequence composed of the first letter token on this page, followed by the second, followed by the first). Such sequences of tokens that are expression tokens are unusual but not impossible. Consider crossword puzzles, certain logos, or the following:
[A crossword-style arrangement of letter tokens appears here, in which a single letter token occurs in more than one word.]
With these definitions in hand, we can proceed to define expression x occurs in expression y at least n times. Let x be the sequence of symbols <x0, . . . , xm> and y be the sequence of symbols <y0, . . . , yk> (for 0 ≤ m, 0 ≤ k). The length of x is m + 1; the length of y is k + 1. First we define ‘x starts in y at i’ as: for every j less than the length of x, xj = yi+j. Then we define ‘x occurs in y at least n times’ as: the set {i: x starts in y at i} has at least n members. The above account solves the puzzle with which we began, because it explains how the same identical thing can occur more than once, and it provides relatively clear identity conditions for occurrences, that is, for when the nth occurrence of x in y = the mth occurrence of x′ in y′. Moreover, it does so in terms that do not include ‘occurrences’ or ‘occurs’; these terms get “analyzed away.” In view of the fact (mentioned above) that every occurrence of an expression x in an expression y is the nth such occurrence (for some n), this should be sufficient. Occurrences have been given identity conditions; they are ontologically respectable. For those who, like Frege, are critical of contextual definitions (see Frege 1980, sec. 56) on the grounds, perhaps, that there are other ways of referring to occurrences of x in y—for example, as ‘the last occurrence of x in y’, ‘that occurrence of x in y’, or even, perhaps, ‘Julius Caesar’—and because it is sometimes useful to have something more in hand than a contextual definition, we may adopt as a model of the nth occurrence of x in y the ordered triple <x, y, n>. This is not quite what an occurrence of an expression is, metaphysically, any more than a number is a set. But it is close enough, and for certain purposes it can be useful. Although an arbitrary choice (since <y, x, n> would do as well), it is simple and adequate. And unlike Quine’s account, it has the desirable consequence that the first occurrence of ‘Macavity’ in (*), <‘Macavity’, (*), 1>, is just like the third occurrence of ‘Macavity’ in (*), <‘Macavity’, (*), 3>, except for the positional indicator. (I am being fastidious here. There is no harm in thinking of the nth occurrence of x in y as <x, y, n>. It’s just that the fact that <y, x, n>, for example, would do as well shows that the nth occurrence of x in y is <x, y, n> only relative to our agreeing what order to put the variables in, and that order is arbitrary. It’s rather like picking {{x}, {x, y}} as our stand-in for the ordered pair <x, y>, which, as Kitcher [1978] noted, is the ordered pair
only relative to a correlation function. Just as we already know what an ordered pair is [a pair with a first and a second member], so we know what an occurrence of an expression is by means of our elucidation of the concept, our criterion of identity, and logical usage. The lesson to be learned from Benacerraf’s [1965] attempt to show that there are no numbers is not that there are no numbers, but rather, as I showed in Wetzel 1989a, that numbers are not sets, nor are they something “else”—something that prima facie is not a number. That there are several isomorphic models of a notion does not entail that no things correspond to the notion, only that those things might be modeled in different ways. The lesson should be extended to other sorts of abstract objects too—such as ordered pairs and occurrences, symphonies, poems, and what have you—although as I said there is no harm in making the sort of identification mentioned for certain purposes. There is a more detailed discussion of this point below in the section entitled “The Linguistic Conception.”)
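The contextual definitions just given are definite enough to be carried out mechanically. Here is a minimal illustrative sketch—in Python, with function names that are mine rather than anything in the text—which treats an expression as a sequence of words, implements ‘x starts in y at i’ and ‘x occurs in y at least n times’ exactly as defined above, and models the nth occurrence of x in y by the ordered triple <x, y, n>, taking the line (*) as the test case.

def starts_in(x, y, i):
    # 'x starts in y at i': for every j less than the length of x,
    # the jth member of x is the (i+j)th member of y.
    return i + len(x) <= len(y) and all(x[j] == y[i + j] for j in range(len(x)))

def starting_points(x, y):
    # The set {i : x starts in y at i}.
    return {i for i in range(len(y)) if starts_in(x, y, i)}

def occurs_at_least(x, y, n):
    # 'x occurs in y at least n times' iff {i : x starts in y at i} has at least n members.
    return len(starting_points(x, y)) >= n

def nth_occurrence(x, y, n):
    # Model of the nth occurrence of x in y: the ordered triple <x, y, n>,
    # defined only when x occurs in y at least n times.
    if not occurs_at_least(x, y, n):
        raise ValueError("x does not occur in y at least n times")
    return (x, y, n)

line = ("Macavity", "Macavity", "there's", "no", "one", "like", "Macavity")  # the line (*)
word = ("Macavity",)

assert len(starting_points(word, line)) == 3   # 'Macavity' occurs three times in (*)
assert occurs_at_least(word, line, 3)
first = nth_occurrence(word, line, 1)          # <'Macavity', (*), 1>
third = nth_occurrence(word, line, 3)          # <'Macavity', (*), 3>

On this model the first and third occurrences of ‘Macavity’ in (*) come out as triples that differ only in the positional indicator, as the account requires.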
2 What Are Occurrences Generally?
We’ve seen that many expressions have other expressions as constituents. Expression tokens have tokens occurring in them, and expression types have types occurring in them. Such tokens and types have structure. They fall under the category of what have been called structural universals, which was the topic of a debate between D. M. Armstrong (1986), David Lewis (1986a), and others.1 (More on that shortly.) Types are, of course, universals, and many types are structural. Not only do many expressions have structure, but so too do Beethoven’s Sonate Pathétique, Old Glory as it is today with fifty stars, and the methane molecule. Lewis (1986a, p. 27) depicts the ontology of the methane molecule as follows:

Suppose we have monadic universals carbon and hydrogen, instantiated by atoms of those elements; and a dyadic universal bonded, instantiated by pairs of atoms between which there is a covalent bond. . . . Then we have, for instance, a structural universal methane, which is instantiated by methane molecules. It involves the three previously mentioned universals as follows: necessarily, something instantiates methane iff it is divisible into five spatial parts, c, h1, h2, h3, h4, such that c instantiates carbon, each of the h’s instantiates hydrogen, and each of the c-h pairs instantiates bonded.
It may be helpful to see how Armstrong and Lewis characterize structural universals. Armstrong (1978b, p. 69) characterizes a property, S, as structural if and only if
proper parts of particulars having S have some property or properties T . . . not identical with S, and this state of affairs is, in part at least, constitutive of S.
David Lewis (1986a, p. 27) characterizes the more general notion of a structural universal in similar terms:

Anything that instantiates it must have proper parts; and there is a necessary connection between the instantiation of the structural universal by the whole and the instantiating of other universals by the parts. Let us say that the structural universal involves these other universals. . . .
Most structural universals run into the same sort of problem we encountered in section 1 above when trying to explain how it is possible for the term ‘Macavity’ to occur three times in the line “Macavity, Macavity, there’s no one like Macavity”—namely, that the line is seven words (types) long, but there are only five words (types) of which it might consist. Similarly, there are 10,000 (or so) notes in Beethoven’s Sonate Pathétique, but there are only eighty-eight notes the piano can produce. There are supposed to be fifty stars (types) in the current Old Glory (type), but the five-pointed star (type) is unique. And what could it mean to say that the very same atom (type), hydrogen, “occurs four times” in the methane molecule?

The Occurrence Conception
We solved the puzzle about ‘Macavity’ in section 1 using the notion of an occurrence of one expression within another. We said that if we construe expression y as a sequence, every occurrence of an expression x in an expression y is the nth such occurrence (for some n). The problem is how to generalize this notion of an occurrence. Even if we could consider Beethoven’s Sonate Pathétique as a sequence of chords (in time), and a chord as a sequence of notes (arranged by pitch), we cannot readily construe Old Glory as a sequence of stars and stripes, or methane as a sequence of atoms. (In a methane molecule token, the four hydrogen atoms are arranged about the carbon atom not in a line, but three-dimensionally.) Let us consider the parameters that sufficed to individuate occurrences of expressions. The individuating parameters were three: what expression is occurring, in what other expression it is occurring, and where it is occurring in the latter—that is, <what is occurring, in what, where>, if you will. But these same factors will suffice to individuate an occurrence of one thing within another generally. They can be used to individuate chords or notes in a piano sonata, or stars or stripes in a flag. So although there are many C-minor chords in the Pathétique, there is only one that occurs first,
and its top note is middle C. Although there are fifty occurrences of the five-pointed star in the current Old Glory, there is only one in the corner of the flag. (The four occurrences of hydrogen in methane are a bit more complicated; more on that shortly.) To deal with these different sorts of occurrences, we need to modify the identity conditions we gave for occurrences of expressions, namely,

the nth occurrence of expression x in expression y = the mth occurrence of expression x′ in expression y′ iff n = m and x = x′ and y = y′ and x occurs in y at least n times.

Basically, we want to say that an occurrence of x in y = an occurrence of x′ in y′ iff x = x′ and y = y′ and x occurs in y where x′ occurs in y′. If x occurs in y, it has a “position” in y relative to whatever else occurs in y. So we can say that

the occurrence of x in y at position p = the occurrence of x′ in y′ at position p′ iff p = p′ and x = x′ and y = y′ and x occurs in y at p.

Unlike the case of expressions x and y, there may be no general definition of ‘x occurs in y at position p’ that applies to all types. There are too many different sorts of things that can occur within one another, and what constitutes a position may differ for different sorts of things. But the basic idea is this. We construed (some) expressions as sequences, that is, as having the structure of the natural numbers. Although flags don’t have a linear structure, they have spatial structure at a time, describable in a two-dimensional Euclidean space. Most modern national flags need only a rectangle of a certain proportion within which their design at a time can be described in terms of colors and geometrical shapes (e.g., white stars, red stripes). The five-pointed star occurs in the current Old Glory in fifty different places. Piano sonatas are notes in a space whose dimensions are time, pitch, and volume. Middle C occurs in the Pathétique in hundreds of different places. The methane molecule occupies a three-dimensional Euclidean space. Carbon occurs at the origin, and hydrogen occurs at four places one unit away from carbon (at the corners of a tetrahedron around it). Hydrogen occurs at <0, 0, 1>, <0, y, –z>, <x, y–√(2x–x²), –z>, and <–x, –y, –z> (for some real values of x, y, and z). There are all sorts of so-called abstract spaces2 whose structures can be borrowed that are describable in mathematics: two-dimensional Euclidean, three-dimensional Euclidean, n-dimensional Euclidean, Riemannian, metric, finite and bounded, finite and unbounded, infinite, topological spaces, vector spaces (for football),
infinite-dimensional Hilbert spaces. The basic idea is that an occurrence of one type within a structural type gets its position from an abstract space appropriate to the structural type. If <x, y, n> was our model for the nth occurrence of expression x in expression y, then <x, y, p> is our model for the occurrence of x in y at position p. Under this approach, just as the very same abstract word, ‘Macavity’, can occur three times in the line “Macavity, Macavity, there’s no one like Macavity” because the line is a sequence of words, and sequences permit repetition of their members, so the very same shape, the five-pointed star, can occur fifty times in the current Old Glory, because the current Old Glory occupies a two-dimensional rectangular space with at least fifty different positions available for the star to occur in. Suppose an expression is a finite sequence of letters, and (since we are in a reducing mood) a sequence is a function from (a subset of) the natural numbers to the letters. There is no conceptual difficulty in the idea of the same thing occurring twice in a sequence, say, a in <a, a>, because there is no difficulty in the idea of a function that assigns the same thing, a, to both 0 and 1. So if a flag is just a grid of colors—red here, white there, blue there—then there is no more difficulty in the idea of the same color or shape occurring twice in the space, than there is in the idea of a function from regions of space to colors that assigns the same color to more than one region. When we gave identity conditions for occurrences of expressions, that is, for when the nth occurrence of x in y = the mth occurrence of x′ in y′, we did it in terms that did not include ‘occurrences’ or ‘occurs’; these terms got “analyzed away” (insofar as it is fair to consider an expression a sequence of letters). More generally, we said: the occurrence of x in y at position p = the occurrence of x′ in y′ at position p′ iff p = p′ and x = x′ and y = y′ and x occurs in y at p, and we have tried to explain what it means. Although we may not have “analyzed away” ‘x occurs in y at p’, we have provided a plausible and reasonably natural elucidation of the phrase in question, but the elucidation may not be unique. If, for example, expressions are sequences of phonemes, rather than letters, then a somewhat different “analysis” will be in order. Among other things, the elucidation of the occurrence very much depends on what we take the objects at hand to be. Are sonatas composed of notes? Or sounds? Or are notes sounds? The precise elucidation of an occurrence in a sonata will depend on how we answer these questions. But these are issues in aesthetics that need not be resolved here.
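Though nothing in the argument turns on it, the <x, y, p> model just sketched can be made concrete. What follows is a minimal illustrative sketch in Python; the names Occurrence and occurrences_of, and the decision to model the line as a tuple of word types, are my own expository stand-ins, not part of the account itself.

```python
from dataclasses import dataclass
from typing import Any

# An occurrence, on the <x, y, p> model: what occurs, what it occurs in,
# and the position at which it occurs. Two occurrences are identical iff
# all three coordinates are identical.
@dataclass(frozen=True)
class Occurrence:
    what: Any        # the type that occurs, e.g. the word 'Macavity'
    whole: tuple     # the structured type it occurs in, modeled here as a sequence
    position: int    # where in that sequence it occurs

def occurrences_of(what, whole):
    """All occurrences of `what` in the sequence `whole`."""
    return [Occurrence(what, whole, p) for p, item in enumerate(whole) if item == what]

line = ('Macavity', 'Macavity', "there's", 'no', 'one', 'like', 'Macavity')
occs = occurrences_of('Macavity', line)
print(len(occs))           # 3: one word type, three occurrences
print(occs[0] == occs[2])  # False: same word, same line, different positions
```

The only point the sketch makes is that a single value can be paired with many positions, so there is nothing incoherent in one word type occurring three times in a seven-word line.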
We can elucidate the notion of an occurrence in a sonata in terms of positions without settling these questions. Similarly, it seems reasonable to characterize a national flag as a rectangular pattern of colors, as I did above (and below), since that is how we distinguish one national flag (type) from another. But if we focus instead on how we distinguish flags from nonflags, a case might be made for starting with something different. Any analysis of ‘x occurs in y at p’ is bound to differ from flags to sonatas to chess games. And many will be bearishly complicated to provide, like that of Old Glory, although they can be done. Here is an example of a simpler flag. The current Peruvian flag has three equal-sized vertical stripes: red, white, red. That is, the same red vertical stripe occurs in the Peruvian flag twice—in two positions, or places. What places? Consider a 3 × 2 Cartesian grid {<x,y>: 0≤x≤3 & 0≤y≤2}. Let a “place” be any area of the grid that is 1 × 2. (One could define a place as a point, but it will be more convenient to let it be a region.) There are infinitely many (overlapping) places on the grid. There are even infinitely many vertical places, depending on the value of x, from {<x,y>: 0≤x≤1 & 0≤y≤2} to {<x,y>: 2≤x≤3 & 0≤y≤2}. The Peruvian flag may not unjustly be regarded as a certain partial function, f, where f(x,y) = white if (1≤x<2 & 0≤y≤2) and red otherwise. The vertical red stripe itself is just a certain function, g, defined on a 1 × 2 Cartesian grid, where g(x,y) = red if (0≤x<1 & 0≤y≤2). Now the vertical red stripe occurs in the flag at place p iff p is a vertical place and there is a real number r, 0≤r≤2, such that for every <x,y> ∈ p, f(x,y) = g(x−r,y). (So there are two places at which the vertical red stripe occurs, namely, {<x,y>: 0≤x<1 & 0≤y≤2} and {<x,y>: 2≤x≤3 & 0≤y≤2}.) ‘The vertical red stripe occurs in the flag at place p’ has been analyzed in fairly natural terms. And although a bit idealized, the analysis works well to elucidate the occurrence of the vertical red stripe in the French flag too. (The French flag has three equal-sized vertical stripes: blue, white, red.)
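For readers who want to see the analysis run, here is a rough numerical rendering of it. This is a sketch under my own simplifying assumptions rather than part of the analysis: it samples finitely many grid points and finitely many candidate shifts r, whereas the definition above quantifies over all real values.

```python
# The Peruvian flag f colors the 3 x 2 grid; the stripe g colors a 1 x 2 grid.
# The stripe occurs at a vertical place iff some shift r lines g up with f there.
def f(x, y):
    return 'white' if 1 <= x < 2 else 'red'   # intended domain: 0 <= x <= 3, 0 <= y <= 2

def g(x, y):
    return 'red'                              # intended domain: 0 <= x < 1, 0 <= y <= 2

def stripe_occurs_at(left, n=20):
    """Does the stripe occur at the vertical place left <= x <= left + 1?"""
    xs = [left + i / n for i in range(n)]        # sample x just short of left + 1
    ys = [2 * j / n for j in range(n + 1)]
    shifts = [k / n for k in range(2 * n + 1)]   # candidate shifts 0 <= r <= 2
    return any(
        all(0 <= x - r < 1 for x in xs)                        # g is defined there
        and all(f(x, y) == g(x - r, y) for x in xs for y in ys)
        for r in shifts
    )

print([left for left in (0, 1, 2) if stripe_occurs_at(left)])  # [0, 2]
```

Refining the sampling does not change the verdict: the stripe occurs only at the two outer vertical places, which is just what the parenthetical remark above says.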
A number of challenges can be made to the occurrence conception. The first one we’ll consider is due to Heath White (in conversation). We know there are four occurrences of hydrogen at acute angles to each other around the single occurrence of carbon, but beyond the fact that there are four of them, they are indistinguishable. Tokens of hydrogen in tokens of methane are of course distinguishable—for example, by the fact that one is closer to the north pole than the others—but the occurrences of hydrogen in the type methane are not. Now it is true that we tried to distinguish the four occurrences of hydrogen by means of their four different positions in the molecular space: <0, 0, 1>, <0, y, –z>, <x, y–√2x–x2, –z> and <–x, –y, –z>. But have we really succeeded? If we look at a token methane molecule, which (let us assume) has the same structure as the type, it seems as though we ought to be able to say which token hydrogen atom is the <0, 0, 1> one. (We can say which token star in Old Glory is the top corner one, for example.) But any of the hydrogen atoms can be <0, 0, 1>. This seems to suggest, White says, either that methane is instantiated four different times by the same token molecule, or that there are four different universals that the token instantiates. I don’t think it does. We can (and I think should) understand White’s point to show that there are (at least) four different isomorphisms between occurrences of hydrogen in our methane molecule type and hydrogen atoms in any token methane molecule. For that matter, there are at least four isomorphisms between methane and itself. This is because the molecule has a symmetrical structure, unlike the structure of the current Old Glory or the Pathétique. Asymmetries may suggest that one isomorphism is more natural than others. Where there are no asymmetries things may be otherwise. The shape Euclidean equilateral triangle, to take another example, has three sides and three angles, but they are all alike. No one would expect a unique isomorphism between a particular token equilateral triangle and the type. The choice of any side as “the base” is arbitrary. The sides are indistinguishable. This point is even more obvious if one thinks of a token circle. It instantiates a certain shape, the circle. I assume there is no temptation to say that it instantiates the circle infinitely many times—although there are infinitely many isomorphisms between the points on a circle and itself. The methane molecule is symmetrical, but there are at least some relations among its constituents. More problematic than the methane molecule is Armstrong’s example of a structural universal: the property of being (just) two electrons. He claims it is “a property possessed by all two-member collections of electrons.” The problem, as he states it, is that “we cannot say that this property involves the same universal being an electron taken twice over, because a universal is one, not many. We can only say that the more complex universal involves the notion of two particulars of a certain sort, two instances of the same universal state of affairs” (1978b, pp. 69–70). I disagree with Armstrong’s characterization of structural universals in terms of what structure their tokens must have, rather than in terms of what structure they themselves have. But his reasons for so characterizing
them are plausible: he thinks no sense can be made of “taking the same universal twice over.” I think sense can be made of this by means of the notion of an occurrence. But until now all my examples have involved more structure than we appear to have here—they have involved occurrences that stand to each other in certain relations. But two electrons that have this property needn’t stand to each other in any relation. Well, almost any. Since this is a structural universal for two electron tokens, these two electron tokens must be some distance apart. Thus the occurrences of the electron in the type must be too. (And just as the number 1 can occur as value for more than one argument in a function, so the type the electron can occur as value for more than one argument.) Imagine a space with no particular metric—that is, with an unspecified unit—and two occurrences of the electron in it, some positive distance apart. This is the type pair of electrons. (This works equally well with abstract objects, like numbers, for in any pair of natural numbers we know that in “number space” there is a positive distance between any two distinct numbers.) Now it is true that we cannot distinguish these two occurrences of the electron from each other by nonarbitrary means, and I think that is what White was getting at. Some of the examples used earlier may have suggested that it is always possible for us to uniquely individuate an occurrence of a type within another type—for example, by specifying that it is the third occurrence of ‘Macavity’ in the line in question, or the top leftmost star in Old Glory. But sometimes we cannot, as with two occurrences of the electron (type) in the type pair of electrons. This is not a problem, however, since the electron is occurring at distinct positions in a space (whatever those positions are). So we are guaranteed that they are distinct occurrences by the identity conditions we gave governing occurrences, namely, the occurrence of x in y at position p = the occurrence of x′ in y′ at position p′ iff p = p′ and x = x′ and y = y′ and x occurs in y at p. It is the same with physical tokens. A universe with just two electrons (tokens) in it has two electrons in it, even if we can’t distinguish them. (Or, if you think electrons are always qualitatively discernible because of the Pauli exclusion principle, according to which no two electrons can be in the same states, imagine a universe containing nothing but two qualitatively identical orbs, postulated by Black [1952] as a counterexample to the principle of the Identity of Indiscernibles—that if x has the same properties as y then x = y.) The electron occurs twice, in two separate positions, in the structural property being (just) two electrons. The property itself is had by any two
electrons (tokens). An objection to this is that it is not the electrons that have the property, but what Armstrong calls “two-member collections of electrons,” and that these are sets.3 And then it might seem that the proposed solution via occurrences doesn’t work after all. For although a universal, e, can occur twice in the sequence <e, e>, it cannot occur twice in the set {e, e}, because the latter is just {e}. However, if we let {e1, e2} be one such set of electron tokens, we see that {e1, e2} doesn’t have the property of being (just) two electrons (except in a manner of speaking). It is just one set; it is not one electron and it is not two electrons. True, {e1, e2} has the property of having two electrons as members. But this seems to be (in Armstrong’s terms) a conjunctive property, constituted by simpler properties, for example, being two-membered and having only electrons as members. Hence there is no need to say the universal having two electrons as members is constituted by two occurrences of being an electron. That is, the universal does not “take the same universal being an electron two times over,” which was what Armstrong objected to. And of course neither does the property has only electrons as members, which x has if and only if ∀y(y ∈ x → y is an electron). So the property being two electrons poses no problems for us. However, what about the property being two-membered? Is it a structural property that takes the relation being a member of “two times over”? Well, a set, x, is (exactly) two-membered if and only if ∃y∃z(y ∈ x & z ∈ x & y ≠ z & ∀w(w ∈ x → w = y ∨ w = z)). Let {y, z} be an instance of the property being two-membered. Intuitively, it has a bit of structure, since a pair set cannot be put into a one–one correspondence with, for example, a unit set or a three-membered set. And there is a sense in which it “involves” two occurrences of the membership relation. Yet about the only relation the two members bear to one another is that of nonidentity. Still, isn’t this enough? The fact that the members are distinct should suffice to render the occurrences of the membership relation distinct. That is, if y ≠ z, then <y, x> ≠ <z, x>. It does seem a somewhat degenerate case of something’s “occurring” in something else. For one thing, all one can say about the “positions” of the occurrences is that they are distinct. For another, there is something odd about talking of membership occurring, since it occurs everywhere. Everything is a member (of some set or other) (according to some theory or other), just as everything exists, and everything is identical to itself. It is a fundamental relation. Even the so-called class nominalists, who are anxious to do without properties and relations altogether, have not succeeded in doing away with the membership relation, but take it as primitive.
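Since the two-memberedness condition displayed above is purely first-order, it can be transcribed directly. The following sketch is mine, with string labels standing in for the electron tokens.

```python
# x is (exactly) two-membered iff there are y, z in x with y != z
# such that every member of x is y or z.
def two_membered(x):
    return any(
        y != z and all(w == y or w == z for w in x)
        for y in x for z in x
    )

print(two_membered({'e1', 'e2'}))         # True
print(two_membered({'e1'}))               # False
print(two_membered({'e1', 'e2', 'e3'}))   # False (the condition is equivalent to len(x) == 2)
```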
It might be helpful to consider what Armstrong or Lewis would say about whether being two-membered is structural or not. Since Armstrong writes in terms of properties, not relations, being involved in structural properties, it is unclear what he would say. But according to Lewis’s definition of a structural property quoted above, it is a structural property if {y, z} “must have proper parts,” and there is a necessary connection between the instantiation of being two-membered and {y, z}, “and the instantiating of other universals by the parts.” Things get a bit complicated here, owing to the question of what counts as a “part” in connection with a set. Some, like Lewis, want to reserve the term ‘part’ for use in cases of strict mereological composition, although Armstrong does not. However, even if Armstrong thinks that sets have parts, I doubt that he would want to say that their parts are their members. Lewis (1986a) certainly would not; he says that the parts of a set are its subsets (p. 37).4 (I have been trying to use the more neutral word ‘constituent’, to avoid treading on mereologists’ toes.) There are three cases to consider. (1) A set has no parts of the requisite sort; it has only members and subsets. In this case, is two-membered is not a structural property. (2) Its parts are its members, in which case either (i) is a member is a universal or (ii) it is not. If it is not a universal, then there is no universal that the members necessarily instantiate. If it is a universal, then presumably it is the relational property is a member of some set, and we are back to the issue, alluded to two paragraphs above, of how to analyze the membership relation and its many occurrences (especially for someone sympathetic to class nominalism like Lewis). Or (3) its (proper) parts are its (proper) subsets {y} and {z}, as Lewis (1991, chapter 1) maintains. Presumably there is a necessary connection between the instantiation of being two-membered by {y, z}, and the instantiation of being one-membered by {y} and by {z}. But insofar as it is appropriate to speak of occurrences of the property being one-membered, it is clear that these occurrences are distinct in virtue of the distinction between {y} and {z}. I hope that lays to rest any concerns that might seem to be posed by the property being two-membered, the property pair of electrons, and by White’s objection.

3 David Lewis’s “Against Structural Universals”
Lewis (1986a) mounts a careful, complex objection to all structural universals (although from the point of view of Armstrong’s needs), one that merits a lengthy discussion. He distinguishes three basic conceptions of
what a structural universal is, which he dubs linguistic, pictorial, and magical, and produces objections to each of them (and their variants), objections that he thinks are fatal. Which conception is my occurrence conception, one might ask, and how do I defend it against his objections? The occurrence conception has elements in common with Lewis’s linguistic conception and with four variants of the pictorial conception, yet it manages to differ from each of them in telling ways. I will present the conceptions that are similar to the occurrence conception, describe how they differ from it (how they “go wrong”), explain the problems Lewis finds with each of the conceptions he describes, and show how the occurrence conception avoids these problems.

The Linguistic Conception

On the linguistic conception, Lewis (1986a, p. 31) says, a structural universal is a set-theoretic construction out of simple universals, in just the way that a (parsed) linguistic expression can be taken as a set-theoretic construction out of its words. . . .
The similarity to the occurrence conception is obvious, since the latter construes (some) expressions as sequences, and since sequences can be modeled in set theory. Lewis (p. 31) claims that the linguistic conception has many advantages, among them: we think of the structural universal as being a complex predicate, in a language in which the words are the simple universals. . . . The words of the language are interpreted by stipulation, and part of our stipulation is that each simple universal is to be a predicate which is satisfied by just the particulars that instantiate it. . . . Complex expressions, including those that we take as the structural universals, are interpreted in a derivative way. Recursive rules are stipulated whereby the interpretation of a parsed expression depends on the interpretations of its immediate constituents . . . and in one step or several we get down to the stipulated interpretation of the words from which that expression is built up.
There are two problems Lewis claims to find with the linguistic conception of structural universals. The one he finds most egregious is that one must assume that there are simple universals in order to get the recursive process going so as to construct the more complex ones. Lewis thinks that the main reason for positing structural universals in the first place is that there may not be simple universals (or not enough of them). This reason is not decisive. If physicists assume, as a working hypothesis, that there are “fundamental particles” out of which all matter is constructed, why shouldn’t metaphysicians assume, also as a working
hypothesis, that there are fundamental universals? One reason, put forward by Lewis (in correspondence), is that “our fundamental ontology ought to work for all possible worlds, not just the one we live in; so if infinite complexity holds at some obnoxious world remote from ours, that still means we need a theory that can handle it.” For that matter, the actual physical world might be a world of “infinite complexity.” This brings us to the next consideration. If physicists do not assume that there are fundamental particles, but conceive of the possibility of particles within particles within particles within . . . , none of which is basic, then metaphysicians don’t need simples either. If, for example, we can perfectly well talk about middle-sized physical objects and the molecules (tokens) that compose them in spite of the infinite complexity (we are supposing) of those molecules, then we can perfectly well talk about middle-sized sets and the members that compose them, in spite of the infinite complexity of the members. If reality is not well founded, then perhaps (the best) set theory isn’t either.5 Set theory without the axiom of foundation is strange, but not inconsistent.6 True, there wouldn’t be simple universals to start the recursive process, but it might be sufficient to explain how to interpret the more complex expressions (“the structural universals”) in terms of simpler universals. That is, we assume the simpler expressions are interpreted, and define the more complex ones in terms of them. After all, Lewis started with words as his simples—which is a reasonable place to start—but words themselves can be broken up into morphemes and phonemes. So whether there are simples or there are not, we can get by. Double success cannot be a failure. Lewis’s second objection is more serious. “Is it fair to call these [set-theoretic] conceptions universals?” Lewis asks. Yes, he answers, “although it may stretch a point” (p. 32). He claims that “it is an easy matter to believe in structural universals, so understood; the hard thing would be not to believe in them” (p. 32). Yet he goes on to deride them as “make-believe structural universals for those who do not accept the real thing” (p. 33). Precisely; they are not the real thing. Sets are not (or not for the most part) identical to the structural universals we are concerned with. A model of something is not the something. Not even a set-theoretic model of an abstract object is to be confused with the abstract object itself. It is easy to slide from, for example, expressions to sequences to sets, and conclude that sets are all there really is. But there is no hope of saying which set the methane molecule “really is.” Lewis doesn’t discuss which set the methane molecule turns out to be on “the linguistic conception” and it is easy to see why. It would be arbitrary to identify it with any particular set.
Benacerraf (1965) showed that numbers are not sets, basically on the grounds that it would be arbitrary to identify the number progression with any particular progression of sets. For example, it is common practice in set theory to select one of the infinitely many progressions of sets and treat it as the “numbers” (e.g., the Zermelo “numbers” ∅, {∅}, {{∅}}, {{{∅}}}, . . . [where n + 1 = {n}], or the von Neumann “numbers” ∅, {∅}, {∅, {∅}}, {∅, {∅}, {∅, {∅}}}, . . . [where n + 1 = n ∪ {n}]). Benacerraf incorrectly concluded from this that numbers don’t exist. But it doesn’t follow. I showed in Wetzel 1989a that parity of reasoning would show that sets don’t exist either. Nor, for that matter, would expressions, or almost any other sort of abstract object. So unless we are to renounce abstract objects entirely, including sets, the conclusion we should draw is that well-established sorts of abstract objects ought to be considered sui generis. We can indeed use the “von Neumann numbers” as our stand-in for the natural numbers when doing set theory because they are structurally isomorphic to the natural numbers. We can use (Gödel) numbers for proofs when doing arithmetic. But this semantic playfulness is what Quine has called “rocking the boat.” One eminently reasonable way of doing metaphysics is to “acquiesce in our mother tongue,” to borrow another of Quine’s expressions, to “take its words at face value” (1969, p. 49)—and see what doing so commits us to. I have been arguing at length that doing so commits us to types. The fact that they have models in set theory or some other mathematical theory assures us that they are coherent, but we should no more identify the thing to be modeled with the model here than we should identify a tinker toy model with a molecule. It should also be noted that Lewis’s linguistic conception (in which structural universals are set-theoretic constructions) doesn’t even work for linguistic expressions. Sequences—nay, even ordered pairs—are not really sets, owing to the arbitrariness of the identification. There are many set-theoretic accounts of what it is to be an ordered pair, for example. The class of Kuratowski ordered pairs is {{{x}, {x, y}}: x, y ∈ V}, while the class of Wiener ordered pairs is {{{x}, {∅, y}}: x, y ∈ V}—and of course there are infinitely many others. Notice that a class is only a class of “ordered pairs” relative to a correlation of its elements with ordered pairs. In some correlations, {{x}, {x, y}}, for example, is correlated with <x, y>; in others with <y, x>. And, as Kitcher (1978, pp. 125–126) noted, if we choose our correlation function correctly, it is correlated sometimes with one and sometimes with the other, for different values of x and y.
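To make the arbitrariness point vivid, here is a small sketch (mine, not Benacerraf's or the book's) of the two "number" progressions cited above, using Python frozensets for ∅ and its iterates.

```python
def zermelo(n):
    """Zermelo numbers: 0 = {}, n + 1 = {n}."""
    return frozenset() if n == 0 else frozenset({zermelo(n - 1)})

def von_neumann(n):
    """Von Neumann numbers: 0 = {}, n + 1 = n U {n}."""
    if n == 0:
        return frozenset()
    prev = von_neumann(n - 1)
    return prev | frozenset({prev})

# Both progressions are isomorphic to the natural numbers, yet they disagree
# about which set "2" is -- which is the arbitrariness at issue.
print(zermelo(2) == von_neumann(2))           # False
print(len(zermelo(2)), len(von_neumann(2)))   # 1 2
```

Either progression will serve as a stand-in for the numbers, and the same exercise could be run with the Kuratowski and Wiener encodings of ordered pairs; what it cannot settle is which sets the numbers, or the pairs, "really are."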
So a type is not a set (although good models of types can be found in set theory). But why is this so surprising? A token methane molecule isn’t a set either. What we are enmeshed in here is the question of just what a physical object is—does it have structure among its constituents, or is it a heap of parts? Since this is one of the questions that comes up in connection with the next conception of structural universals, let us move on to it.

The Pictorial Conception

Lewis’s pictorial conception has four variants that we will consider. Lewis himself held the pictorial conception “in the days when [he] was unworried about structural universals” (1986a, p. 34). On the pictorial conception, Lewis says, a structural universal is isomorphic to its instances. The methane atom consists of one carbon atom and four hydrogen atoms, with the carbon bonded to each of the four hydrogens; the structural universal methane likewise consists of several parts, one for each of the five atoms, and one for each of the four bonds. Compare a ball-and-spring model: one large central ball, and four smaller balls attached to it by springs. This model is a three-dimensional picture. It represents a methane molecule, any methane molecule, not any one in particular—by isomorphism.
The pictorial conception is more “dimensional” than the linguistic conception. That and the talk of isomorphisms make it sound fairly similar to the occurrence conception. It seems apt for Old Glory, since we can picture the flag as it looks today, and it is isomorphic to (most of) its tokens. Yet the pictorial conception differs from the occurrence conception. For one thing, Lewis’s pictorial conception is 3-D spatial; it doesn’t work for many structural types, for example, Beethoven’s Pathétique, or Microsoft Word 2007, as the more general occurrence conception does. But the chief difference is that it is blatantly inconsistent in a way that the occurrence conception is not. Here is how Lewis describes it: Each methane molecule has not one hydrogen atom but four. So if the structural universal methane is to be an isomorph of the molecules that are its instances, it must have the universal hydrogen as a part not just once, but four times over. Likewise for bonded, since each molecule has four bonded pairs of atoms. But what can it mean for something to have a part four times over? What are there four of? There are not four of the universal hydrogen, or of the universal bonded; there is only one. (p. 34)
There are four occurrences of the universal hydrogen in methane on the occurrence conception. This is the very same conundrum we faced when we asked how the same word (type), ‘Macavity’, can occur three times in the line “Macavity, Macavity, there’s no one like Macavity” when the word
itself is unique, and which the notion of an occurrence was designed to solve. The reason this option is not available on Lewis’s pictorial conception is an assumption he builds into it. He says that on the pictorial conception a structural universal is an individual, not a set. It is mereologically composite. The simpler universals it involves are present in it as proper parts. It is nothing over and above them, in the straightforward sense that it is nothing but their mereological sum. (p. 33)
That is, (i) the type methane is the mereological sum of its parts, and (ii) methane’s parts are the universals carbon, hydrogen, and bonded themselves. Old Glory is the mereological sum of its parts, which are the universals white star, red stripe, and so on. Why does Lewis make this assumption? Because he says that the structural universal “is a universal, capable of repeated occurrence, [so] its parts must be universals too” (p. 33), and the only universals that seem to be handy are carbon, hydrogen, and bonded. The assumption he is making is false, and the cause of all the trouble. If we were to apply it to a seven-word sentence like “Macavity, Macavity, there’s no one like Macavity,” which is a structural universal, we’d have to say that the sentence is just the mereological sum of its parts, and its parts are the five word types ‘Macavity’, ‘there’s’, ‘no’, ‘one’, ‘like’. I think it is rather clear in the case of this sentence that the seven words we are counting are occurrences of words, rather than word types. If it is to be the mereological sum of anything, it ought to be the mereological sum of these word occurrences (which, having instances, qualify as universals). So if methane is the mereological sum of anything it ought to be of occurrences of carbon, hydrogen, and bonded. It does seem a bit peculiar to talk of mereological sums of occurrences of things, but Lewis intends to be generous as to what counts as a mereological “part” (p. 36), so perhaps an occurrence can be one. There is reason, therefore, to deny conjunct (ii). However, if we hold to (ii), there is reason to deny conjunct (i). That is, if we consider methane’s parts to be just the three universals carbon, hydrogen, and bonded, then it is not the mereological sum of these parts. For, as Lewis points out, butane has the same parts. Lewis seems to be assuming that everything is either a set or a heap, where the latter is just a structureless mereological sum of parts. Since the linguistic conception was supposed to cover the set case, the pictorial conception must involve mereological sums. He flatly says that on the pictorial conception methane “is an individual, not a set. . . . it is nothing but [a] mereological sum” (p. 33). Now
it is his conception, of course, so one cannot say he has “got it wrong.” But one can take issue with his assumption that everything is either a set or a mereological sum of parts—as though only sets come with more structure than that provided by the “part–whole” relation. The peculiar thing about Lewis’s assumption is that, intuitively, even a token flag has structure: it is not a set, nor is it a mereological sum of its atoms. It would be a poor joke to show up on the Fourth of July waving a ragged pile of stars and stripes and a spool’s worth of thread (or sewn together in a random fashion). They have to be arranged a certain way to form a flag. Similarly, what makes a molecule a molecule of butane rather than isobutane is not just the atoms, but their arrangement. Lewis discusses an example like this in connection with a variant of the pictorial conception. In it he considers the possibility that methane and butane are both composed of the same parts: carbon, hydrogen, and bonded. But he claims it is “unintelligible” to say that “two different things are composed of exactly the same parts” (p. 36). (One might respond that the heap of stars and stripes is not yet the flag, but is composed of the same atoms as the flag. Not so, Lewis could say; there is just one thing with the same atoms, and it is a heap-at-t1 and a flag-at-a-later-time-t2, or, for fans of temporal parts, there are two different things—a heap and a flag—made out of different temporal parts.) Are two different things ever composed of exactly the same parts? Armstrong gives a case in which two spatiotemporal particulars are composed of the same parts. Suppose a loves b and also that b loves a. These are two distinct states of affairs, yet they are composed of the same parts. Armstrong claims one would have to reject universals altogether—and not just structural universals—to deny it (not that Lewis is not willing to do so). Moreover, even if distinct spatiotemporal particulars cannot be composed of the same parts, this does not prove that two universals cannot be, particularly if the parts are universals. The fact that the whole of a spatiotemporal particular cannot be in more than one place at one time does not show that a universal cannot be. According to some theories of universals, universals can occur in their entirety at more than one place in space-time. If so, it would seem plausible that they could occur in their entirety at more than one place in an abstract space. These considerations do not force but do motivate the idea that there are unmereological forms of composition. Lewis considers an unmereological form of composition as another variant of the pictorial conception: we posit a sui generis, unmereological form of composition, whereby many things can be made out of the very same parts. Suppose that we have several different
combining operations, each of which applies to several universals to yield a new universal. Each operation singly obeys a principle of uniqueness: for any given arguments, in a given order, it yields at most one value. But if we apply the operations repeatedly, starting with the same initial stock of universals, we can produce many different structural universals depending on the order in which the operations are applied (p. 38).
His objection to it is that I do not see by what right the operations are called combining operations. . . . If what goes on is unmereological, in what sense is the new one composed of the old ones? (p. 38) What is the general notion of composition . . . ? I would have thought that mereology already describes composition in full generality. If sets were composed in some unmereological way out of their members, that would do as a precedent to show that there can be unmereological forms of composition. (p. 39) [But] the parts of a set are its (nonempty) subsets, and thus every many-membered set is composed, ultimately, of its unit subsets. This is genuine composition. . . . (p. 37)
Thus there is only one form of composition worthy of the name, for Lewis, and that is mereological composition. He does not give any arguments for this here. As this is not the place to embark upon a discussion of the pros and cons of rival overall ontologies, I will just say that I have different intuitions. It seems equally obvious to me that Beethoven composed more than one piano sonata, but that two of them could be composed of the same notes (fewer than 88) combined in different ways. T. S. Eliot’s composition “Macavity” is composed, unmereologically, of words of English. The fourth and last conception we will consider is a variant of the pictorial conception: Let us concede that when the universal methane involves the universal hydrogen, we don’t just have the one universal hydrogen after all. We do have four of something, and all four are parts of the universal methane. (p. 39)
Might these four things be occurrences? [T]he parts of a universal [have] to be as capable of repeated occurrence as the universal itself. . . . So when we have many of something, instead of the one universal hydrogen, the many are still universals. Or at any rate they’re not particulars. But it’s not clear that they’re universals either, because they’re all alike. (p. 39)
No, they’re not occurrences, for occurrences are not “all alike.” The first occurrence of ‘Macavity’ differs positionally from the third occurrence of ‘Macavity’ in “Macavity, Macavity, there’s no one like Macavity”—it is first, rather than third, it comes just before another occurrence of ‘Macavity’,
and so on. Even in the structural property considered earlier, being (just) two electrons, in which the electron occurs twice, it occurs in two separate places. Lewis calls the hydrogen parts of his conception amphibians. He asks: How about bonded? Do we also need some dyadic amphibians? I think not—not if we are prepared to let the one universal bonded relate amphibians in the same way that it relates particulars. In that case, the fourfold occurrence of bonded in the universal methane can be understood on a par with its fourfold occurrence in a particular molecule of methane; the one universal is instantiated by four different pairs. And we’d better let bonded relate amphibians, else we’re still in trouble over the universals butane and isobutane. (pp. 39–40)
(Butane and isobutane consist of different arrangements of the same number of hydrogen and carbon atoms.) Notice that in the above quote, Lewis himself used the word ‘occurrence’ quite naturally to describe his “amphibians”; this “amphibian” conception is probably the closest conception to the occurrence conception. So what’s the matter with the amphibian conception, according to Lewis? His criticism is indirect. He says that the conception gives rise to three questions that are so “bizarre” that “the theory that asks them just has to be barking up the wrong tree” (p. 40). They are: (1) What becomes of our original monadic universals, such as the one universal hydrogen? Do we have them as well as their amphibians, perhaps instantiated by their amphibians? (2) Does the same amphibian ever occur as part of two different structural universals? (3) If we have two hydrogen atoms in two different methane molecules, is there indeed a distinction between the case in which they instantiate the same amphibian of the structural universal methane and the case in which they instantiate different ones? (p. 40)
The questions look far less bizarre if every occurrence of the disparaging ‘amphibian’ is replaced by an occurrence of ‘occurrence’. I will try to answer them, so interpreted. (1) “What becomes of our original monadic universals, such as the one universal hydrogen?” Our original monadic universal hydrogen is still around, of course. It is what is doing the occurring in methane, so it is an important part of the occurrence, which is, more or less, the state of affairs of hydrogen’s occurring in methane where it does. “Are occurrences of hydrogen instances of hydrogen?” No, although they involve either hydrogen or instances of it. For example, an occurrence of hydrogen in methane in a certain position involves not just hydrogen, but methane too; so it is not an
instance of hydrogen. And the occurrence of a token of hydrogen in a token of methane involves more than just the instance of hydrogen, although it involves the latter. (2) “Does the same occurrence ever occur as part of two different structural universals?” The same universal hydrogen occurs as part of two different structural universals (e.g., methane and butane), but no occurrence of hydrogen within methane is identical to any occurrence of hydrogen within butane, because butane isn’t methane (and these latter are constituents of the occurrence). However, suppose methane occurs in a larger structural universal—say, pair of methane molecules (assuming this is a structural universal). We have already said that this universal is composed of two occurrences of the universal methane. Since methane occurs twice in it, one would think that the same occurrences of hydrogen in methane occur twice in pair of methane molecules too. But we also said that these occurrences are at some positive distance from each other. How can the same occurrence occur at some distance from itself? It can’t. But it doesn’t. To see why, we must notice some distinctions. Hydrogen is not identical to any of its occurrences in something else, as we just said. Also, hydrogen’s occurring in methane is a distinct state of affairs from its occurring in pair of methane molecules (even though hydrogen occurs in the methane and the methane occurs in the pair of methane molecules). Any occurrence of hydrogen in methane is distinct from any occurrence of hydrogen in pair of methane molecules. To see this, let us employ our set-theoretic models. Let <hydrogen, methane, p> “be” a particular occurrence of hydrogen in methane. Nonetheless, <hydrogen, methane, p> occurs twice in pair of methane molecules at, say, p1 and p2. And it is these two occurrences: <<hydrogen, methane, p>, pair of methane molecules, p1> and <<hydrogen, methane, p>, pair of methane molecules, p2>, that are distinct and at some positive distance from each other. Occurrence is transitive: if a occurs in b and b occurs in c, then a occurs in c. Letters occur in words and words occur in sentences. However, a’s occurrence in b is a distinct occurrence from a’s occurrence in c, even if the same item, a, is involved. Moreover, the occurrence of a in b that occurs in c at p1 is distinct from the occurrence of a in b that occurs in c at p2. Which is as it should be.
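In the set-theoretic model just used, the distinctness claim can be checked mechanically. The sketch below is a toy rendering, with strings standing in for the universals and for the positions:

```python
# An occurrence of hydrogen in methane, on the <what, in-what, position> model.
occ_h_in_methane = ('hydrogen', 'methane', 'p')

# That one occurrence itself occurs twice in the larger structural universal,
# at two different positions.
occ1 = (occ_h_in_methane, 'pair of methane molecules', 'p1')
occ2 = (occ_h_in_methane, 'pair of methane molecules', 'p2')

print(occ1 == occ2)        # False: distinct occurrences, because distinct positions
print(occ1[0] == occ2[0])  # True: the very same occurrence of hydrogen in methane
```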
Consider another example: middle C occurs as the top note of the C-minor chord that begins the Pathétique. So it occurs in the sonata in the first measure. The same C-minor chord, top note middle C, ends the first movement, 429 measures later. These are distinct occurrences of the same chord, each containing the same note, middle C, on top. Hence the occurrence of middle C on top in the C-minor chord in the first measure of the Pathétique is distinct from the occurrence of middle C on top in the C-minor chord in the last measure of the Pathétique. (3) “If we have two hydrogen atom tokens in two different methane molecules, is there indeed a distinction between the case in which they instantiate the same occurrence of the structural universal methane and the case in which they instantiate different ones?” Is there a distinction between the case in which the atoms instantiate the same occurrence of hydrogen in the structural universal methane and the case in which they instantiate different occurrences of hydrogen? Well, they don’t instantiate occurrences at all—at least, not by themselves. Although each hydrogen atom token instantiates hydrogen, neither instantiates an occurrence of hydrogen in methane. Only its occurring in methane can instantiate an occurrence of hydrogen in methane. Still, we can ask: is there a distinction between one token occurrence of hydrogen in methane instantiating one type occurrence rather than another? No, there is not. This goes back to the business of multiple isomorphisms discussed earlier. There are four occurrences of hydrogen in methane, and four hydrogen atom tokens in a token methane molecule, but no unique isomorphism between them, since methane is a symmetrical molecule. So much for the objections Lewis raises to structural universals that might apply to the occurrence conception of them. Let me raise one further objection to the occurrence conception that is in the spirit of Lewis’s article (even though he didn’t raise it), and try to answer it. Lewis might say (though he would probably have put it more kindly) “you can call them ‘occurrences’ or call them ‘amphibians’ or call them ‘somethings-I-know-not-what that are needed to explain structural universals’. Either way, you are just putting a name on what is needed and postulating that it exists, but without giving an explanation of it.” The occurrence conception is not so easily dismissed for the following reasons. First, we have a tradition in logic of speaking of occurrences of variables and formulas in well-formed formulas, where it is clear that the variables and formulas in question are not specific physical tokens, but types. And everyone more or less knows how to talk this way—how to identify these occurrences and how to count them. The theory of occurrences of expressions given in section 1 above was an attempt to clarify it. Section 2 is a generalization of the idea. Second, since all talk of occurrences can be replaced by relatively transparent talk of what is occurring in what and where, and even occurrences of
‘occurring’ can be elucidated (although somewhat less transparently) by means of talk of spaces and functions and the like, it seems that the notion of an occurrence is quite respectable after all, relying as it does on entities to which we are already committed. I am not urging fictionalism about occurrences. On the contrary, I view the criterion of identity for occurrences: the occurrence of x in y at position p = the occurrence of x′ in y′ at position p′ iff p = p′ and x = x′ and y = y′ and x occurs in y at p as committing us to occurrences, just as “Hume’s Law”: the number of Fs = the number of Gs iff there is a one–one correlation of the Fs with the Gs in my opinion commits us to numbers. What I am urging is that armed with the criterion of identity and with the other elucidations of the notion of an occurrence—a notion that has now been found to be free of the defects that Lewis considers—unless one is a nominalist, one should have no hesitation in accepting the existence of occurrences. They have a respectable pedigree. Of course, not everyone is committed to universals, or to spaces or functions. Nominalists think they are not. But nominalism is out of the picture already. We are addressing parties who believe in, or are inclined to believe in, universals but are worried about structural universals, or occurrences. And although there isn’t time to pursue the matter here, I assume that those who do not object to universals are unlikely to object to mathematical objects. But the use of mathematical objects like functions and spaces and points may prompt another sort of objection, namely, that this just pushes the problem back a step. It might be said that the same problem reappears with respect to points. That is, two points in a space, especially an abstract space, are qualitatively identical (at least with respect to their intrinsic properties). The only things that distinguish them are their positions in the space. (Perhaps that’s all they are: positions in space.) This is a well-known fact about points—a fact that philosophers and mathematicians usually put up with. Defining fundamental terms like those of geometry, however, is beyond the scope of the present discussion, as are related issues such as whether points are a challenge for the Principle of the Identity of Indiscernibles, whether the latter is true, and whether relationalists about space-time can do without points. It should be noted that the only
nominalist to squarely renounce mathematical entities and make a go of living without them, namely, Field, does not renounce space or points in space.

Concluding Remarks

I have urged that in addition to types and tokens, we should reckon among the values of our variables occurrences of them. I have shown how postulating occurrences solves certain puzzles, and have elucidated the concept of an occurrence by, among other things, providing criteria of identity for occurrences. I then showed that although the occurrence conception introduced here has a bit in common with various conceptions put forth by Lewis, it manages to avoid the charge of incoherence/artificiality that mars his proposals. Thus, I believe, a major objection to types has been laid to rest. To briefly sum up my argument of this book: type talk is ubiquitous (chapter 1). Moreover, it cannot be avoided (chapters 3, 4, and 5). The chief motivation for trying to do without types (a causal requirement on knowledge) either fails to do so or collapses under the weight of its own deficiencies (chapter 2). And the chief objection to types, that they are incoherent, fails, thanks to the account of occurrences I developed in chapter 7. Therefore, types exist. Having said so, we need to develop a theory, or theories, about them. Some sketchy remarks to that end were offered in chapter 6, the main one being that it is chiefly the job of each discipline to tell us the types their discipline requires, and the properties such types satisfy. More could be said of course, but I hope enough has been said to shift the burden of proof squarely onto the naysayer who doubts there are such things as types.
Notes
Introduction

1. I shall not explore here Peirce’s other remarks about types and tokens, because Peirce requires that every type (“Legisign”) have tokens (“Sinsigns”) (Peirce 1931–58, vol. 2, p. 246)—a requirement that must be rejected if there are to be uninstantiated sentences, which linguistics assures us there are. 2. For excellent discussions of the abstract-concrete distinction, see Dummett 1981, chapter 14, and Hale 1987, chapter 3. It should be mentioned that the characterization I have chosen is not beyond criticism. There is, for example, the question of whether and how it categorizes God and other disembodied spirits, should they exist. A more serious concern in my opinion stems from the fact that the Grizzly Bear, for example, might seem to have a spatial location, because (as we saw) its U.S. range is Montana, Wyoming and Idaho. Similarly, Mozart’s Coronation Concerto was completed on February 24, 1788; therefore it might seem to have a temporal location. As Richard Cartwright has pointed out, it is no good replying that only tokens have spatial and temporal properties, that types do not—because Mozart’s Coronation Concerto, the work itself, really was completed on February 24, 1788. One solution is to say that types can have spatio-temporal properties, without thereby having a spatio-temporal location. Many, perhaps all, of a type’s spatio-temporal properties are had in virtue of spatio-temporal properties of its tokens. The species has a range in virtue of where its members are located, but it is not itself “at” that location. Mozart finished conceptualizing his concerto on February 24, 1788, but the work itself did not commence “occurring.” 3. Of course there are striking parallels between the type–token relationship and the relationship between numbers and pluralities—between, e.g., the number 2 and pairs of things—as I argued in Wetzel 1989b. 4. I take this essay to be harmonious with the work done by Asher (1993). But whereas he is concerned with all sorts of abstract objects, including what he terms “propositions, properties, states of affairs and facts,” I am concerned merely with types.
1 The Data
1. Technically, Science classified only two as “articles”; of the rest, four were classified as “research news,” two as “perspectives,” two as “technical comments,” and eighteen as “reports.” 2. It is not clear that any language is a spatiotemporal particular, i.e., a particular with a unique location in space and time. Jerrold Katz (1981) in Language and Other Abstract Objects has argued that all languages are abstract objects. If he is right, then even one’s idiolect is an abstract object, capable in principle of being spoken by another person. Probably, this is what most linguists mean by ‘idiolect’ anyway—a type. If there are no language tokens, then it might be necessary to extend the idea of a type. We already admit types with no tokens, e.g., sentences that are too long. Now we have to admit types of types with no tokens. (Why call them types? They are abstract objects of some sort, and they have instances.) However, a case can be made that a language is comparable to, although more complex than, a belief. Just as a belief can be had by more than one person, it can also be tokened in one person (perhaps as a state of his physical system). Maybe we can say the same for a language such as English: it can be understood by many, but one person’s understanding of it might be viewed as a token of it. 3. Page references are to the January 2, 1996, edition of the New York Times; they will be given in parentheses in the text. 4. Interestingly, in 2005 evidence came to light that there still exist ivory-billed woodpeckers—i.e., that the species is not extinct. But I will ignore it in what follows and continue to speak of the ivory-billed woodpecker as being extinct in order to be consistent with my linguistic data. 5. Although Strawson 1963, p. 239, makes a case for paintings and sculptures being types. 6. From Feynman 1995: [I]n 1947 or 1948, another particle was found, the π-meson, or pion. . . . Besides the proton and the neutron, then, in order to get nuclear forces we must add the pion. . . . [E]xperimentalists . . . had already discovered this μ-meson or muon, and we do not yet know where it fits. Also, In cosmic rays, a large number of other “extra” particles were found. It turns out that today we have approximately thirty particles, and it is very difficult to understand the relationships of all these particles, and what nature wants them for, or what the connections are from one to another. (p. 39) In Table 2–2 are listed all the particles. . . . Underneath each particle its mass is given in a certain unit, called the Mev. One Mev is equal to 1.782 × 10⁻²⁷ gram. . . . Several particles have been omitted from the table. These include the important zero-mass, zero-charge particles, the photon and the graviton, which do not fall into the baryon-meson-lepton classification scheme, and also some of the newer resonances (K*, φ, η). The antiparticles of the mesons are listed in the table, but the antiparticles of the leptons and baryons would have to be listed in another table. . . .
[A]ll of the particles except the electron, neutrino, photon, graviton, and proton are unstable. . . . (p. 40) [T]he following [baryons] exist: There is a “lambda,” with a mass of 1115 Mev, and three others, called sigmas, minus, neutral, and plus, with several masses almost the same. There are groups or multiplets with almost the same mass, within one or two percent. Each particle in a multiplet has the same strangeness. The first multiplet is the proton-neutron doublet, and then there is a singlet (the lambda), then the sigma triplet, and finally the xi doublet. (pp. 40–42) In addition to the baryons the other particles which are involved in the nuclear interaction are called mesons. There are first the pions, which come in three varieties, positive, negative, and neutral; they form another multiplet. . . . Also, every particle has its antiparticle, unless a particle is its own antiparticle. For example, the π− and the π+ are antiparticles, but the π° is its own antiparticle. The K− and K+ are antiparticles, and the K° and K̄°. . . . A thing called ω which goes into three pions has a mass 780 on this scale, and somewhat less certain is an object which disintegrates into two pions. (p. 42) [L]eptons [include] . . . the following: there is the electron, which has a very small mass on this scale, only 0.510 Mev. Then there is that other, the μ-meson, the muon, which has a mass much higher, 206 times as heavy as an electron. . . . the difference between the electron and the muon is nothing but the mass. Everything works exactly the same for the muon as for the electron, except that one is heavier than the other. . . . In addition, there is a lepton which is neutral, called a neutrino, and this particle has zero mass. In fact, it is now known that there are two different kinds of neutrinos. . . . (pp. 42–43) Finally, we have two other particles which do not interact strongly with the nuclear ones: one is a photon, and perhaps, if the field of gravity also has a quantum-mechanical analog . . . then there will be a particle, a graviton, which will have zero mass. (p. 43)
2 Types Exist
1. I can’t take on here the question of whether quantifiers ought to be interpreted objectually or substitutionally, other than to say it doesn’t matter, for, as Kripke (1976, p. 379) asks, “Can there be a serious question whether someone who says ‘there are men’. . . thereby commits himself to the view that there are men . . . ?” 2. For more details about Frege’s views on functions and objects, see, e.g., Frege 1977a,b,c. 3. Besides the three slightly different formulations I mentioned in the text, there are others that are also not equivalent. On the same page of Quine 1961a, for example, he claims that a theory is committed to those and only those entities to which the bound variables of the theory must be capable of referring in order that the affirmations made in the theory be true. (pp. 13–14)
These different formulations raise a host of interesting questions. Does Quine mean that the existence of the entities in question is “assumed,” “said,” “presupposed,” or “implied”? Is it we who “countenance entities,” or theories, or particular existential quantifications? Is the commitment involved to particular entities, kinds of things, or whole ontologies (like that of the real numbers)? These questions have been
considered by Chihara (1973, chap. 3) and Jubien (1998). Another question is whether Quine’s criterion is intensional or, if not, then inadequate; see Cartwright 1954 and Jubien 1972. I mention these inequivalent formulations and the questions to which they give rise simply to acknowledge their existence—not to impugn Quine’s criterion of ontological commitment. The fact that there are different versions of it does not mean none is correct. In fact, I am assuming that more than one is correct. But it is not necessary to tarry over which one, because most philosophers and, I hope, the reader have a good grasp of Quine’s basic (and correct) idea, and the basic idea is sufficient for our purposes. (Although it should be mentioned, in all fairness, that even the basic idea has been challenged, by Searle [1969, p. 107], for example, who argues that “there is no substance to the criterion and indeed very little to the entire issue” and also by Alston [1958]. But the gap between their point of view and mine is so wide that there isn’t space in this context to try to bridge it.) 4. Of course, this gets him into the infamous problem of the ‘the concept horse’—is it a concept or an object? We will not pursue that here, as we are not trying to offer perfect formulations of the criteria, but to provide an explanation for the intuitions we have. 5. It appears in Hale’s (1987, p. 11) formulation of what he calls “the Fregean argument”: “if a range of expressions function as singular terms in true statements, then there are objects denoted by expressions belonging to that range. Numerals, and many other numerical expressions besides, do so function in many true statements. Hence there exist objects denoted by those numerical expressions.” 6. This formulation of criteria for singular termhood is due to Wright (1983, pp. 57, 62): ‘a’ is a singular term if and only if the following four conditions are met: (1) For any sentence ‘A(a)’, the inference from ‘A(a)’ to ‘There is something such that A(it)’ is valid. E.g., from ‘Socrates is wise’ we can infer ‘There is something that is wise’. (This condition is designed to rule out ‘nothing’.) (2) For any sentences ‘A(a)’ and ‘B(a)’, the inference from two sentences ‘A(a)’ and ‘B(a)’ to ‘There is something such that A(it) and B(it)’ is valid. E.g., from ‘Socrates is wise’ and ‘Socrates is snub-nosed’ we can infer ‘There is something such that it is wise and it is snub-nosed’. (This condition is designed to rule out, e.g., ‘something’.) (3) For any sentences ‘A(a)’ and ‘B(a)’, the inference from ‘It is true of a that A(it) or B(it)’ to ‘A(a) or B(a)’ is valid. E.g., from ‘It is true of Socrates that he is wise or he is an alien’ we can infer ‘Socrates is wise or Socrates is an alien’. (This is designed to rule out, e.g., ‘everything’.) (4) The conclusion of an inference of the sort described in (1) or (2) is never such that requesting specification can lead to a point at which the demand for a further specification, although grammatically well formulated, would be rejected as evincing a misunderstanding of the conclusion. (This is designed to rule out, e.g., ‘an alien’, because from ‘Someone is an alien’ one can ask ‘Who was an alien?’ but from ‘There is something Socrates was’ one cannot ask ‘what was it?’, and being told ‘an alien’, go on to ask ‘what alien?’.)
As I said, these criteria comprise only a first-level approximation. For one thing, they have to be relativized from expressions to uses of them, which introduces complications.
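Read purely as inference schemata, and setting aside the relativization to uses just mentioned, conditions (1) through (3) can be compressed as follows. This is a schematic gloss for orientation only, not Wright’s own formulation:
\[
\begin{aligned}
(1)\quad & A(a)\ \vdash\ \exists x\, A(x)\\
(2)\quad & A(a),\ B(a)\ \vdash\ \exists x\,\bigl(A(x)\wedge B(x)\bigr)\\
(3)\quad & \text{It is true of } a \text{ that } A(\text{it}) \text{ or } B(\text{it})\ \vdash\ A(a)\vee B(a)
\end{aligned}
\]
Condition (4) resists any such one-line schema, since it concerns what may legitimately be asked, and answered, about the existential conclusions licensed by (1) and (2).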
7. It can be argued, as Quine (1961a) does, that in some sense Frege’s criterion reduces in the end to Quine’s because truths containing singular terms can be “analyzed away” in favor of truths that contain only quantifiers and predicates. Using Russell’s theory of descriptions, Quine would urge that we replace ‘Socrates was wise’ with ‘There is someone that Socratizes, and anyone who Socratizes is identical with him, and he was wise’. This approach may be fine for, say, a formal mathematical language, but it has been found to be inaccurate as an analysis of how singular terms work in English. Therefore I have treated the two criteria as not reducing to each other. My point is that they dovetail to support the conclusion that types exist. 8. This claim should be distinguished from the claim that some parts of the theory discuss fictions, e.g., in physics we study the interactions of forces on objects on a frictionless surface, knowing that there are no frictionless surfaces. But this is not to say that no objects of physics exist. In particular, it is because the study of frictionless surfaces (fictions) tells us something about forces that do exist that the study is of interest. 9. Since I wrote these pages, fictionalism with respect to things other than mathematical entities and possible worlds has become more respectable. For those who would like to see a lengthy general discussion of fictionalism, see Kalderon 2005. 10. Speaking of historical digressions, Mark Steiner was one of the first to attack a modern causal requirement for knowledge of mathematics. In Steiner 1975, he formulated and disproved various versions of it, e.g., that “one cannot know anything about F’s unless this knowledge is caused by some of the F’s.” The problem here, Steiner argues (pp. 114–115), is that Fs, if they are objects, do not cause anything; rather it is events involving Fs that do the causing (as Davidson pointed out). The weaker requirement that “one cannot know anything about F’s unless this knowledge is caused by at least one event in which at least one F participates” rules out any knowledge of (for example) long-extinct creatures on the basis of their footprints (since the events they participated in do not overlap with the events we participate in). Steiner argues that this formulation is best seen as involving an appeal to a causal explanation—but such explanations need to invoke an appropriate theory, and that theory will contain the axioms of number theory and analysis—and hence refer to abstract objects (pp. 115–116). Steiner’s criticisms are by no means decisive, yet they are an early indicator that it may be difficult or impossible to formulate a causal requirement that is both plausible and at the same time incompatible with platonism. Wright, himself a platonist, shows they are not decisive in Wright 1983, (p. 92), on the grounds that, as he puts it, “it is far from clear . . . that every constituent of a theory furnishing a causal explanation of a particular state of affairs should be viewed as being used in that explanation. . . . ” It should be noted that Wright mentions a more plausible causal requirement than the ones Steiner rejects, viz., “None of one’s beliefs constitute knowledge about F’s unless any complete causal explanation of these beliefs must advert to at least one event in which at least one F participates” (p. 92). But
this causal requirement is sufficiently similar to the one formulated by John Burgess and discussed below that it will not be considered here.

11. We shall, with all due respect to Donald Davidson, proceed as though talk of this and that fact is fine. Davidson (1969) argued that if there are any facts, there is only one.

12. His case involved twins. You know Judy; you can recognize her easily; you think you see her coming out of her house; you do see Judy coming out of her house; but unbeknownst to you she has a twin sister, Trudy, who is visiting for the weekend, so you don’t know that you are seeing Judy.

13. It should be noted that Loeb (1976, p. 334) defends Goldman from Skyrms’s counterexample by employing the principle that “if Q is logically related to P, and if the fact that P is causally connected with X’s belief that Q, then the fact that Q is causally connected with X’s belief that Q.” But we need not concern ourselves with his defense since Loeb himself claims there are serious problems for the principle; and, as we will see two paragraphs below, Hale shows the principle to be false.

14. See Field 1989b for such criticism.

15. Perhaps he is relying on infinitely many possible objects, rather than types. However, if Benacerraf’s problem is a problem for abstract objects because they do not causally interact with us, it is equally a problem for “objects” that don’t even exist, because surely they do not causally interact with us either.

16. I don’t know whether Hart himself would classify linguistic types with mathematical objects. But we are doing so for the duration of this section.

17. To show this here would take us too far afield. The details are best appreciated by first reading Field’s Science without Numbers (1980), then his Realism, Mathematics, and Modality (1989), and Burgess’s (1990) article.

3
Paraphrasing, Part One: Words
1. Something should be said about what is to be preserved in a paraphrase, but it should be said by the nominalist. I am not the nominalist claiming that all type talk is a façon de parler for something else, so I can only guess what they have in mind. It can’t be merely logical equivalence, because ‘7 + 5 = 12’ is, on the usual construal, logically equivalent to ‘7 × 5 = 35’, but no one would take the latter to be a mere façon de parler for the former. Goodman and Quine (1947) suggest a much stronger relation when they claim that until we have given the nominalist paraphrase we don’t really understand the platonistic sentence, for they say “if it cannot be translated into nominalistic language, it will in one sense be meaningless for us” (p. 197). This talk of “translation” suggests the relation is to be one of synonymy. It is tempting to say that the paraphrase must have the same truth conditions as
what is to be paraphrased, but since the one sentence refers to a type and the other doesn’t, it is unclear how they could have the same truth conditions. Perhaps the best thing we might say is that the nominalist is providing a nominalist analysis of the type sentence, one that the realist—who countenances types—can see is in some very strong sense equivalent to the type sentence. 2. As far as I can see, the only alternative to this is to assume the existence of infinitely, or indefinitely many, possible expression tokens. But, first, the existence of possible objects is not apt to appeal to someone who rejects abstract objects on the grounds that the latter are not “concrete” enough. And second, until an acceptable notion of possibility for tokens has been spelled out, there seems little point in pursuing this alternative; and I shall not do so here. 3. They are: orthographic, phonological, morphological, lexical, grammatical, onomastic, lexicographical, and statistical (pp. 1120–1121). 4. Economist, December 23, 1995. 5. This is not to say that the result of misspelling a word is always a token of the word intended. 6. Not that Goodman would; he is acutely aware that tokens of a letter need not be similar in shape. In Goodman 1972b (pp. 437–438) he says: Similarity, ever ready to solve philosophical problems and overcome obstacles, is a pretender, an impostor, a quack. . . . Similarity does not pick out inscriptions that are “tokens of a common type.” . . . Only our addiction to similarity deludes us into accepting similarity as the basis for grouping inscriptions into the several letters, words, and so forth.
7. This is the strategy of Goodman and Quine (1947). But they were concerned with describing a nominalistic artificial written language that would be adequate for proof theory, not with natural languages, nor with what a letter of the English alphabet “really is.” Although they would not be upset by the untoward consequences of shape theory mentioned here (e.g., that most things usually regarded as ‘A’s turn out not to be ‘A’s, although many things not usually regarded as ‘A’s—such as a part of a frisbee indistinguishable from the rest of the frisbee—turn out to be ‘A’s), their strategy cannot help us here. 8. This is not to say, by the way, that shape is irrelevant to an alphabet letter, or that it does not have a (number of) standard shapes at a given time and place. 9. I am grateful to Ned Markosian for making this suggestion to me. 10. Thanks to Ned Block for this point, made in conversation. 11. It goes without saying that there are countless trivial, or uninteresting, or unnatural, or unprojectible properties that all and only tokens of a word share. To name but a few: let c1, c2, . . . , cn be all the (actual) tokens of the word ‘cat’. Then being c1, or c2,or . . . or cn will be a property shared; as will being a member of {c1, c2, . . . , cn}. Another possibility suggested to me by Wayne Davis is being sufficiently similar
for certain linguistic purposes to an exemplar, but if the exemplar is written, it eliminates all spoken tokens; and if spoken, all written tokens. We can include several exemplars, but it would have to be quite a few, and then we get into the same sorts of problems we faced with the letter ‘A’, viz., under- and overclassification. Another suggestion is that all and only tokens of ‘cat’ have [some specific percentage] of [some large set of “natural” properties, e.g., {sounding just like c13, or sounding just like c57, or . . . , or looking like ‘cat’, or ‘CAT’, or ‘cat’ or [Morse code exemplars, etc.] or being produced with the intention of producing a token of ‘cat’}]. Perhaps this works—for ‘cat’; perhaps it doesn’t. The dangers of under- and overclassification are less obvious—mostly because the proposal is not spelled out—but they are still present. (The alphabet could be encoded some fancy new way and someone may accidentally produce a token of a word that way.) To assume otherwise is an act of faith. 12. Thanks again to Ned Markosian, for alerting me to the need for these disclaimers so as to avoid misinterpretation. 4
Paraphrasing, Part Two
1. By, of all people, Frege. He says that ‘the horse is a four-legged animal’ “is probably best regarded as expressing a universal judgment, say ‘all horses are four-legged animals’ or ‘all properly constituted horses are four-legged animals’. . .”(Frege 1977a, p. 45). But he had other fish to fry here, viz., defending his criterion that “the singular definite article always indicates an object.” It would be ironic if Frege, whose criterion for objecthood is being relied on here to argue that types are objects, were to deny that his criterion supports the conclusion that they are. Having argued for the existence of numbers, senses, thoughts, etc. it is hard to imagine that Frege would gainsay such abstract objects as words and sentences (types, that is), and species. Dummett (1981) argues that “colours, chemical substances and animal species” are objects for Frege, because they “can be identified” (p. 76); “we can form complex singular terms by means of which we can refer to objects in these categories” such as “the color of ξ” (p. 77). The referents (colors, chemical substances, and animal species) therefore qualify as objects for Frege. 2. Originally found at http://web.lexis/nexis.com/; no longer online. 3. A pair of cats counts as an individual for Goodman, as does a threesome, or the “sum” of eighteen cats and twelve dogs, or any bunch of individuals. Goodman’s notion of an individual is partially spelled out in Goodman 1972c. Nominalism, for Goodman, “consists specifically in the refusal to recognize classes” (1972c, p. 156). An individual may be widely scattered in space and time (Abraham Lincoln’s first hat + the Andromeda galaxy, e.g.) but the hallmark of an individual for Goodman is that it is not set-like. This means that “two different entities cannot be made up of the same entities” (p. 158). If we have only two atoms, a and b, Goodman’s theory of individuals will recognize in addition one and only one other entity: the “indi-
vidual” that is a conjoined with b. But set theory will recognize, besides a and b, {a}, {b}, {{a}}, {{b}}, {a, {b}}, {b, {a}}, {{a}, {b}}, and so on. David Lewis has shown in Lewis 1991 that the essential difference between set theory and “the calculus of individuals,” is that set theory distinguishes a from its unit set {a}, whereas Goodman’s calculus of individuals does not. 4. It was this Goodman-Quine paraphrase of ‘there are more cats than dogs’ that converted the nominalists into realists in a graduate seminar I taught on universals at Georgetown. 5. I am not claiming that Quine would find it decisive. 6. This is not to say that the species Homo sapiens has all the same properties in both worlds, any more than Bill Clinton would. But he would presumably be the same organism. 7. In Quine 1961a (p. 4), he says Wyman’s slum of possibles is a breeding ground for disorderly elements. Take, for instance, the possible fat man in that doorway; and, again, the possible bald man in that doorway. Are they the same possible man, or two possible men? How do we decide? How many possible men are there in that doorway? Are there more possible thin ones than fat ones? How many of them are alike? Or would their being alike make them one? Are no two possible things alike? Is this the same as saying that it is impossible for two things to be alike? Or, finally, is the concept of identity simply inapplicable to unactualized possibles? But what sense can be found in talking of entities which cannot meaningfully be said to be identical with themselves and distinct from one another? These elements are well-nigh incorrigible. . . . I feel we’d do better simply to clear Wyman’s slum and be done with it.
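The contrast drawn in note 3 above, between Goodman’s calculus of individuals and set theory, can be made vivid with a small count. The following Python sketch is only an illustration of that contrast under one modeling choice: a Goodman individual is identified with the nonempty set of atoms it covers (so the “fusion” of a single atom is just that atom), and set formation is iterated stage by stage over the same atoms. The function names and the stage construction are mine, not Goodman’s or Lewis’s.

from itertools import combinations

def goodman_individuals(atoms):
    # Identify each individual with the nonempty set of atoms it covers.
    # Since fusions ignore grouping and repetition, there are 2**n - 1 of them.
    return [frozenset(combo)
            for r in range(1, len(atoms) + 1)
            for combo in combinations(atoms, r)]

def set_theoretic_stages(atoms, depth):
    # Iterate set formation: at each stage, add every subset (as a frozenset)
    # of everything built so far. Unlike fusions, this never stops producing
    # new entities -- e.g., a and {a} are distinct, as are {a} and {{a}}.
    built = set(atoms)
    for _ in range(depth):
        new_sets = {frozenset(combo)
                    for r in range(len(built) + 1)
                    for combo in combinations(sorted(built, key=repr), r)}
        built |= new_sets
    return built

atoms = ["a", "b"]
print(len(goodman_individuals(atoms)))      # 3: a, b, and the fusion of a with b
print(len(set_theoretic_stages(atoms, 1)))  # 6 entities after one round of set formation
print(len(set_theoretic_stages(atoms, 2)))  # 66 after two rounds, and so on without bound

With two atoms the calculus of individuals is exhausted after a single new entity, exactly as the note says, whereas iterated set formation over the same atoms keeps generating entities at every stage.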
8. I have argued it is not essential in Wetzel 2000b.

6
Remarks on a Theory of Word Types
1. I said “some of” the identity conditions because I have not given a set of necessary and sufficient conditions for when x and y are the same word, or even different temporal stages of the same word. More on this in section 2 below.

2. A revised version of Bromberger 1981 is Bromberger 1992a.

3. Claims about something’s being “real” can mean different things. As Sidney Morgenbesser remarked upon hearing some of this material presented as a paper, when biologists claim that species are “real,” they are apt to mean that we are dividing the world up right—that how individuals are grouped into species is in some significant sense natural, as opposed to the lower and higher taxa (genus, family, order, class, phylum, kingdom and superkingdom), which are generally considered to be artificial in varying degrees. Words are real in that sense, I wish to claim; tokens of a word form a kind that is in some significant sense natural (although of course more conventional). But biologists can also mean other things—e.g., that species are entities, or entities “in nature.” Word types are entities too, I wish to claim
(although not entities “in nature”). However, these two ways of being “real” are related and often reinforce each other—as for example when Ernst Mayr (1970, p. 233) claims that Degree of variability may differ quite strongly in families belonging to the same order. For instance, among the North American wood warblers (Parulidae) only 20 (40.8 percent) of the 49 species are polytypic, while among the buntings (Emberizidae) 31 (72.1 percent) of the 43 North American species are polytypic. The difference is real and not an artifact of different taxonomic standards.
In quantifying over species, he is treating them as entities, but doing so produces legitimate results since the differences between species are “real”—nature has been carved at the joints. There is yet another important sense in which species are “real”; it has been claimed that species are “individuals,” i.e., the primary units of evolution (as opposed to their members). Though I am not willing to endorse every aspect of the analogous claim for words (e.g., that word tokens are physical “parts” of words), there is something analogous that I would wish to claim, although I am not emphasizing it here, and that is that words, as opposed to their tokens, are among the primary linguistic units. To paraphrase a claim made by Eldredge and Cracraft (1980, p. 15) about species: Words are real entities, whose origin, persistence and extinction require explanation. 4. Or, according to Merriam Webster’s, “a malicious woman; a strong tackle used to hoist an anchor to the cathead of a ship; a catboat; a cat-o’-nine-tails; a catfish; a player or devotee of hot jazz; or a guy” (Mish 1993, p. 178). 5. The use of the phrase ‘family resemblance’ in this context may bring to mind the following query, which was raised in conversation by Terry Pinkard: Wittgenstein’s observation that some things—things we call ‘games’, e.g.—share no particular property but exhibit merely a “family resemblance” was supposed to put to rest “the problem of universals.” So how is this consistent with the claim that there are such “universal” things as species and word types? The answer—and I will pass over in silence the fact that Wittgenstein made his point by means of a word type, the word ‘games’—is that at best Wittgenstein’s observations put to rest one sort of argument for such “universal” things. There remain other arguments, for example, that such entities are values of variables in true scientific theories. 6. Mayr’s earlier book, Mayr 1942, defined a species as a “group of actually or potentially interbreeding natural populations that are reproductively isolated from other such groups” (my italics)—a definition that may seem familiar from high school biology—but the ‘potentially’ was dropped owing to its tendency to run together clearly different species. 7. Interestingly, an analogy between species and words can be seen whether or not one believes that we “carve nature at the joints.” For example, in the introduction to the OED we are told:
The vocabulary of a widely-diffused and highly-cultivated living language . . . may be compared to one of those natural groups of the zoologist or botanist, wherein typical species forming the characteristic nucleus of the order are linked on every side to other species, in which the typical character is less and less distinctly apparent. . . . For the convenience of classification, the naturalist may draw the line . . . but Nature has drawn it nowhere. So the English Vocabulary contains a nucleus or central mass of many thousand words whose “Anglicity” is unquestioned. . . . But they are linked on every side with other words which are less and less entitled to this appellation. . . . Yet practical utility has some bounds, and a Dictionary has definite limits: the lexicographer must, like the naturalist, “draw the line somewhere,” in each diverging direction. (p. x)
8. This is not to say that everything called an ‘instance’ of a word is necessarily a token of it. The word ‘eleemosynary’ occurs in the first line of Tom Jones but it is not a token, as the line itself is not a token. For further discussion, see chapter 7.

9. For an amusing account of how the traditional quote-name turns out to be ambiguous when applied to phrases, along with a solution, see Boolos 1995.

10. Nothing hinges on attributing intentionality to the child. If you agree with Davidson that animals and small children don’t think, recast this paragraph in terms of behavior.

11. For a few of them see Crystal 1987, p. 91.

7
A Serious Problem for Realism?
1. One issue of the Australasian Journal of Philosophy included articles by Lewis (1986a,b), Armstrong (1986), Forrest (1986), and Bigelow (1986) on the topic. 2. According to James and James (1968, p. 2), an abstract space is “a formal mathematical system consisting of undefined objects and axioms of a geometric nature. Examples are Euclidean spaces, metric spaces, topological spaces, and vector spaces.” 3. Not that I think we are forced to treat pairs as sets. For more discussion of this, see Wetzel 1989b. 4. Lewis explores this idea in Lewis 1991. 5. Bigelow mentions this possibility in Bigelow 1986, p. 96. 6. Lewis commented (in correspondence) that “I’ve found it hard enough to find an ontologically serious way of understanding standard fundiert set theory; I don’t think the picture I arrived at would carry over; I don’t see what could take its place. (See my Parts of Classes, especially the appendix; or, perhaps better, “Mathematics Is Megethology,” reprinted in my Papers in Philosophical Logic. But my approach is not for you, since it relies utterly on unrestricted mereology.)”
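For readers who want the set-theoretic treatment of pairs that note 3 declines to make mandatory, the standard definition is Kuratowski’s; it is recorded here only as the textbook construction, not as a claim about what pairs or occurrences really are:
\[
\langle a, b\rangle \;=_{\mathrm{df}}\; \bigl\{\{a\},\{a,b\}\bigr\},
\qquad\text{from which it follows that}\qquad
\langle a, b\rangle = \langle c, d\rangle \ \text{iff}\ a=c \ \text{and}\ b=d .
\]
It is precisely because this construction is one arbitrary choice among several equally serviceable ones that one may doubt, with the note, that pairs must be treated as sets.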
References
Allwood, Jens (1998). “Some Frequency based Differences between Spoken and Written Swedish.” In Proceedings from the XVIth Scandinavian Conference of Linguistics. Department of Linguistics, University of Turku. http://www.ling.gu.se/~jens/publications/docs076-100/084.pdf
Alston, William (1958). “Ontological Commitment.” Philosophical Studies 9: 8–17.
Angier, Natalie (1996). “Variant Gene Tied to Love of New Thrills.” New York Times, January 2, pp. A1, B11.
Armstrong, David (1978a). Universals and Scientific Realism, vol. I: Nominalism and Realism. Cambridge: Cambridge University Press.
Armstrong, David (1978b). Universals and Scientific Realism, vol. II: A Theory of Universals. Cambridge: Cambridge University Press.
Armstrong, David (1986). “In Defense of Structural Universals.” Australasian Journal of Philosophy 64: 85–88.
Armstrong, David (1989). Universals: An Opinionated Introduction. Boulder: Westview Press.
Asher, Nicholas (1993). Reference to Abstract Objects in Discourse. Dordrecht: Kluwer Academic Publishers.
Asher, Nicholas, and Francis Jeffry Pelletier (1997). “Generics and Defaults.” In Handbook of Logic and Language, ed. J. Van Benthem and A. Ter Meulen. Cambridge, Mass.: MIT Press.
Benacerraf, Paul (1965). “What Numbers Could Not Be.” Philosophical Review 74: 47–73.
Benacerraf, Paul (1983). “Mathematical Truth.” In Philosophy of Mathematics, 2nd ed., ed. Paul Benacerraf and Hilary Putnam. Cambridge: Cambridge University Press.
Bigelow, J. (1986). “Towards Structural Universals.” Australasian Journal of Philosophy 64: 94–96.
Black, M. (1952). “The Identity of Indiscernibles.” Mind 61: 153–164.
Block, Ned (ed.) (1980). Readings in Philosophy of Psychology, vol. 1. Cambridge, Mass.: Harvard University Press.
Boolos, George (1971). “The Iterative Conception of Set.” Journal of Philosophy 68: 215–232.
Boolos, George (1987). “The Consistency of Frege’s Foundations of Arithmetic.” In On Being and Saying: Essays for Richard Cartwright, ed. Judith Jarvis Thomson. Cambridge, Mass.: MIT Press.
Boolos, George (1995). “Quotational Ambiguity.” In On Quine: New Essays, ed. Paolo Leonardi. Cambridge, New York: Cambridge University Press.
Bromberger, Sylvain (1992). On What We Know We Don’t Know. Chicago: University of Chicago Press.
Bromberger, Sylvain (1992a). “Types and Tokens in Linguistics.” In Bromberger 1992.
Bromberger, Sylvain (1992b). “The Ontology of Phonology.” In Bromberger 1992.
Bromberger, Sylvain, and Morris Halle (1981). “Mind, Language, and Knowledge: On Some Platonistic Relationships From an Erotetic Position.” Paper presented at the 78th meeting of the American Philosophical Association in Philadelphia in December, 1981.
Bromberger, Sylvain, and Morris Halle (1986). “On the Relationship of Phonology and Phonetics.” In Perkell and Klatt 1986.
Burgess, John (1984). “Review of Frege’s Conception of Numbers as Objects.” Philosophical Review 93: 638–640.
Burgess, John (1990). “Epistemology and Nominalism.” In Physicalism in Mathematics, ed. A. D. Irvine. Dordrecht: Kluwer Academic Publishers.
Byrne, Robert (1996). [Chess column.] New York Times, January 2, p. B18.
Caplan, Arthur L. (2002). “His Genes, Our Genome.” New York Times, May 3.
Carlson, Gregory, and Francis Jeffry Pelletier (eds.) (1995). The Generic Book. Chicago: University of Chicago Press.
Carnap, Rudolf (1956). “Empiricism, Semantics, and Ontology.” In Meaning and Necessity. Chicago: University of Chicago Press.
Carnap, Rudolf (1959). The Logical Syntax of Language. Paterson, N.J.: Littlefield, Adams.
Cartwright, Richard (1954). “Ontology and the Theory of Meaning.” Philosophy of Science 21: 316–325.
Cartwright, Richard (1987). “Propositions.” In Philosophical Essays. Cambridge, Mass.: MIT Press.
Castañeda, Hector-Neri (1980). “The Theory of Questions, Epistemic Powers, and the Indexical Theory of Knowledge.” In Midwest Studies in Philosophy V, ed. Peter A. French, Theodore E. Uehling, Jr., and Howard K. Wettstein. Minneapolis: University of Minnesota Press.
Chihara, Charles (1973). Ontology and the Vicious Circle Principle. Ithaca and London: Cornell University Press.
Chomsky, Noam (1957). Syntactic Structures. The Hague: Mouton.
Collinge, N. E. (ed.) (1990). An Encyclopedia of Language. London: Routledge.
Crystal, David (1987). The Cambridge Encyclopedia of Language. Cambridge, New York: Cambridge University Press.
Dancy, Jonathan (2004). Ethics without Principles. Oxford, New York: Clarendon Press.
Darwin, Charles (1859). On the Origin of Species. London: John Murray.
Davidson, Donald (1969). “True to the Facts.” Journal of Philosophy 66: 748–764.
Davidson, Donald (1980). “Mental Events.” In Essays on Actions and Events. Oxford: Clarendon Press.
Davies, Stephen (2001). Musical Works and Performances: A Philosophical Exploration. Oxford: Clarendon Press.
Davis, Wayne (2003). Meaning, Expression and Thought. Cambridge: Cambridge University Press.
DeSousa, Ronald (1984). “The Natural Shiftiness of Natural Kinds.” Canadian Journal of Philosophy 14: 561–580.
Dicke, William (1996). “Numerous U.S. Plant and Freshwater Species Found in Peril.” New York Times, January 2, p. B12.
Dummett, Michael (1981). Frege: Philosophy of Language, 2nd ed. Cambridge, Mass.: Harvard University Press.
Dupre, John (1981). “Natural Kinds and Biological Taxa.” Philosophical Review 90: 66–90.
Dupre, John (1999). “On the Impossibility of a Monistic Account of Species.” In Species: New Interdisciplinary Essays, ed. Robert A. Wilson. Cambridge, Mass.: MIT Press.
Einstein, Albert (1934). Essays in Science. New York: Philosophical Library.
Eldredge, Niles, and Joel Cracraft (1980). Phylogenetic Patterns and the Evolutionary Process: Method and Theory in Comparative Biology. New York: Columbia University Press.
Eliot, T. S. (1952). The Complete Poems and Plays, 1909–1950. New York: Harcourt, Brace and World.
Faraday, Michael (1860). A Course of Six Lectures on the Various Forces of Matter and Their Relation to Each Other. New York: Harper and Brothers.
Feynman, Richard (1995). Six Easy Pieces. Reading, Mass.: Addison-Wesley.
Field, Hartry (1980). Science without Numbers. Princeton: Princeton University Press.
Field, Hartry (1989). Realism, Mathematics, and Modality. Oxford: Basil Blackwell.
Field, Hartry (1989a). “Realism and Anti-Realism about Mathematics.” In Field 1989.
Field, Hartry (1989b). “Platonism for Cheap? Crispin Wright on Frege’s Context Principle.” In Field 1989.
Field, Hartry (1989c). “Can We Dispense With Space-Time?” In Field 1989.
Fillmore, C. J. (1975). “An Alternative to Checklist Theories of Meaning.” Papers from the 1st Annual Meeting of the Berkeley Linguistics Society: 123–132.
Fine, Kit (1992). “Essence and Modality.” Available at http://philosophy.fas.nyu.edu/docs/IO/1160/essence.pdf.
Firth, J. R. (1957). “A Synopsis of Linguistic Theory 1930–55.” In Studies in Linguistic Analysis. Oxford: Philological Society.
Fisher, Lawrence (1996). “Second Gene Is Linked to a Deadly Skin Cancer.” New York Times, January 2, p. B18.
Forrest, P. (1986). “Neither Magic Nor Mereology: A Reply to Lewis.” Australasian Journal of Philosophy 64: 89–91.
Frege, Gottlob (1977a). “On Concept and Object.” In Geach and Black 1977.
Frege, Gottlob (1977b). “What Is a Function?” In Geach and Black 1977.
Frege, Gottlob (1977c). “Function and Concept.” In Geach and Black 1977.
Frege, Gottlob (1980). The Foundations of Arithmetic. Trans. J. L. Austin. Evanston, Ill.: Northwestern University Press.
Fudge, Eric (1990). “Language as Organized Sound: Phonology.” In Collinge 1990.
Geach, Peter, and Max Black (eds.) (1977). Translations from the Philosophical Writings of Gottlob Frege. Oxford: Basil Blackwell. Goldman, Alvin (1967). “A Causal Theory of Knowing.” Journal of Philosophy 64: 357–372. Goldman, Alvin (1976). “Discrimination and Perceptual Knowledge.” Journal of Philosophy 73: 771–791. Goldman, Alvin (1986). Epistemology and Cognition. Cambridge, Mass.: Harvard University Press. Goodman, Nelson (1972a). Problems and Projects. Indianapolis: Bobbs-Merrill. Goodman, Nelson (1972b). “Seven Strictures on Similarity.” In Goodman 1972a. Goodman, Nelson (1972c). “A World of Individuals.” In Goodman 1972a. Goodman, Nelson (1977a). Structure of Appearance, 3rd ed. Dordrecht: Reidel. Goodman, Nelson (1977b). “Of Time and Eternity.” In Goodman 1977a. Goodman, Nelson, and W. V. Quine (1947). “Steps Toward a Constructive Nominalism.” Journal of Symbolic Logic 12: 105–122. Reprinted in Goodman 1972a. Page references are to the 1972a reprint. Grady, Denise (1997). “New Tactic of Invasion by AIDS Virus Is Found.” New York Times, July 17. Greenlee, D. (1973). Peirce’s Concept of Sign. The Hague: Mouton. Grice, Paul (1969). “Utterer’s Meaning and Intentions.” Philosophical Review 78: 147–177. Hale, Bob (1987). Abstract Objects. Oxford and New York: Basil Blackwell. Halle, Morris, and G. N. Clement (1983). Problem Book in Phonology. Cambridge, Mass.: MIT Press. Hanks, Patrick (2003). “Lexicography.” In Handbook of Computational Linguistics, ed. Ruslan Mitkov. New York: Oxford University Press. Hardie, C. D. (1936). “The Formal Mode of Speech.” Analysis 4: 46–48. Hart, W. D. (1977). “Review of Steiner’s Mathematical Knowledge.” Journal of Philosophy 74: 118–129. Hayden, Thomas (2001). “Lives: Quantifiably Normal.” New York Times, March 4. Heisenberg, Werner (1979). Philosophical Problems of Quantum Physics. Woodbridge, Conn.: Ox Bow Press.
Hilbert, David (1967). “The Foundations of Mathematics.” In From Frege to Gödel, ed. Jean van Heijenoort. Cambridge, Mass.: Harvard University Press.
Hilbert, David (1983). “On the Infinite.” In Philosophy of Mathematics, 2nd ed., ed. Paul Benacerraf and Hilary Putnam. Cambridge: Cambridge University Press.
Hodes, Harold (1984). “Logicism and the Ontological Commitments of Arithmetic.” Journal of Philosophy 81: 123–149.
Hugly, Philip, and Charles Sayward (1981). “Expressions and Tokens.” Analysis 41: 181–187.
Hull, David (1965). “The Effect of Essentialism on Taxonomy—Two Thousand Years of Stasis.” British Journal for the Philosophy of Science 15: 314–326.
Hull, David (1989). The Metaphysics of Evolution. Albany: SUNY Press.
Hume, David (1973). A Treatise of Human Nature. Ed. L. A. Selby-Bigge. Oxford: Oxford University Press.
Hutton, Christopher (1990). Abstraction and Instance: The Type–Token Relation in Linguistic Theory. Oxford: Pergamon Press.
James and James (1968). Mathematics Dictionary, 3rd ed. Princeton, N.J.: D. Van Nostrand.
Johnson, George (1996). “New Family Tree Is Constructed for Indo-European Languages.” New York Times, January 2, pp. B9, B15.
Jubien, Michael (1972). “The Intensionality of Ontological Commitment.” Noûs 6: 378–387.
Jubien, Michael (1988). “On Properties and Property Theory.” In Properties, Types, and Meaning, vol. I: Foundational Issues, ed. Gennaro Chierchia. Dordrecht: Kluwer Academic Publishers.
Jubien, Michael (1998). “Ontological Commitment.” Routledge Encyclopedia of Philosophy, vol. 7, pp. 112–117.
Kalderon, Mark Eli (2005). Fictionalism in Metaphysics. Oxford, New York: Oxford University Press.
Kaplan, David (1990). “Words.” Proceedings of the Aristotelian Society, supp. vol. 64: 93–120.
Katz, Jerrold J. (1981). Language and Other Abstract Objects. Totowa, N.J.: Rowman and Littlefield.
Kim, Jaegwon (1966). “On the Psycho-Physical Identity Theory.” American Philosophical Quarterly 3: 227–235.
Kitcher, Philip (1978). “The Plight of the Platonist.” Noûs 12: 119–136.
Krifka, Manfred, Francis Jeffry Pelletier, Gregory Carlson, Alice ter Meulen, Godehard Link, and Gennaro Chierchia (1995). “Genericity: An Introduction.” In Carlson and Pelletier 1995.
Kripke, Saul (1972). “Naming and Necessity.” In Semantics of Natural Language, ed. D. Davidson and G. Harman. Dordrecht: D. Reidel.
Kripke, Saul (1976). “Is There a Problem about Substitutional Quantification?” In Truth and Meaning, ed. Gareth Evans and John McDowell. Oxford: Clarendon Press.
Leary, Warren (1996). “Physicists See Long Pass as Triumph of Torques.” New York Times, January 2, pp. B9, B16.
Lewis, David (1986a). “Against Structural Universals.” Australasian Journal of Philosophy 64: 25–46.
Lewis, David (1986b). “Comment on Armstrong and Forrest.” Australasian Journal of Philosophy 64: 92–93.
Lewis, David (1991). Parts of Classes. Oxford: Basil Blackwell.
Lewis, Peter (1996). “About Freedom of the Virtual Press.” New York Times, January 2, p. B14.
Lindblom, Björn (1986). “On the Origin and Purpose of Discreteness and Invariance in Sound Patterns.” In Perkell and Klatt 1986.
Locke, John (1975). An Essay Concerning Human Understanding. Ed. Peter H. Nidditch. Oxford: Oxford University Press.
Loeb, Louis (1976). “On a Heady Attempt to Befriend Causal Theories of Knowledge.” Philosophical Studies 29: 331–336.
Loux, Michael (1978). Substance and Attribute. Dordrecht: D. Reidel.
Loux, Michael (1998). Metaphysics: A Contemporary Introduction, 2nd ed. London: Routledge.
Ludlow, Peter (1982). “Substitutional Quantification and the Problem of Expression Types.” Logique et Analyse 25: 413–424.
Lyons, John (1977). Semantics, vol. 1. Cambridge: Cambridge University Press.
Maddy, Penelope (1980). “Perception and Mathematical Intuition.” Philosophical Review 89: 163–196.
Maddy, Penelope (1990). Realism in Mathematics. Oxford: Clarendon Press.
Malament, David (1982). “Review of Field’s Science without Numbers.” Journal of Philosophy 79: 523–534. Manes, Stephen (1996). “Sometimes Achieving Simplicity Isn’t Cheap and Isn’t So Easy.” New York Times, January 2, p. B11. Mayr, Ernst (1942). Systematics and the Origin of Species. New York: Columbia University Press. Mayr, Ernst (1970). Populations, Species, and Evolution. Cambridge, Mass.: Harvard University Press. McArthur, Tom (1992). The Oxford Companion to the English Language. Oxford: Oxford University Press. McCrum, Robert, William Cran, and Robert MacNeil (1986). The Story of English. New York: Viking Penguin. Mill, John Stuart (1979). Utilitarianism. Ed. George Sher. Indianapolis: Hackett. Minsky, Marvin (1975). “A Framework for Representing Knowledge.” In The Psychology of Computer Vision, ed. P.H. Winston. New York: McGraw-Hill. Mish, Frederick C., et al. (eds.) (1993). Merriam Webster’s Collegiate Dictionary, 10th ed. Springfield, Mass.: Merriam Webster. Murdoch, Iris (1970). The Sovereignty of Good. New York: Schocken Books. Murray, J. A. H., et al. (eds.) (1971). The Oxford English Dictionary. Oxford: Oxford University Press. “New Element: Zinc’s Heavy Kin” (1996). New York Times, February 22. Peirce, Charles S. (1931–58). Collected Papers of Charles Sanders Peirce. Ed. Charles Hartshorne and Paul Weiss. Cambridge, Mass.: Harvard University Press. Perkell, Joseph S., and Dennis H. Klatt (eds.) (1986). Invariance and Variability in Speech Processes. Hillsdale, N.J.: Lawrence Erlbaum. Place, U. T. (1956). “Is Consciousness a Brain Process?” British Journal of Psychology 47: 44–50. Pratt, Timothy (1998). “From the Andes to Epcot, the Adventures of an 8,000-YearOld Bean.” New York Times, May 19. Putnam, Hilary (1975). “The Meaning of ‘Meaning’.” In Mind, Language, and Reality. Cambridge: Cambridge University Press. Putnam, Hilary (1981). Reason, Truth, and History. Cambridge: Cambridge University Press. Quine, Willard Van (1940). Mathematical Logic. Cambridge, Mass.: Harvard University Press.
Quine, Willard Van (1960). Word and Object. Cambridge, Mass.: MIT Press.
Quine, Willard Van (1961a). “On What There Is.” In From a Logical Point of View, 2nd ed. New York: Harper and Row.
Quine, Willard Van (1961b). “Logic and the Reification of Universals.” In From a Logical Point of View, 2nd ed. New York: Harper and Row.
Quine, Willard Van (1961c). “Identity, Ostension and Hypostasis.” In From a Logical Point of View, 2nd ed. New York: Harper and Row.
Quine, Willard Van (1969a). “Epistemology Naturalized.” In Ontological Relativity and Other Essays. New York: Columbia University Press.
Quine, Willard Van (1969b). “Natural Kinds.” In Ontological Relativity and Other Essays. New York: Columbia University Press.
Quine, Willard Van (1969c). “Ontological Relativity.” In Ontological Relativity and Other Essays. New York: Columbia University Press.
Quine, Willard Van (1977). “Natural Kinds.” In Naming, Necessity, and Natural Kinds, ed. Stephen P. Schwartz. Ithaca, N.Y.: Cornell University Press.
Quine, Willard Van (1987). Quiddities: An Intermittently Philosophical Dictionary. Cambridge, Mass.: Harvard University Press.
Rosen, Charles (1972). The Classical Style. New York: W. W. Norton.
Ross, W. D. (1988). The Right and the Good. Indianapolis: Hackett.
Ruse, Michael (1987). “Biological Species: Natural Kinds, Individuals, or What?” British Journal for the Philosophy of Science 38: 225–242.
Searle, John (1969). Speech Acts. Cambridge: Cambridge University Press.
Sellars, Wilfrid (1963). “Abstract Entities.” Review of Metaphysics 16: 627–671.
Simons, Peter (1982). “Token Resistance.” Analysis 42: 195–203.
Simpson, G. G. (1961). Principles of Animal Taxonomy. New York: Columbia University Press.
Skyrms, Brian (1967). “The Explication of ‘X knows that p’.” Journal of Philosophy 64: 373–389.
Smart, J. J. C. (1959). “Sensations and Brain Processes.” Philosophical Review 68: 141–156.
“Stars and Stripes.” Britannica Online. http://www.eb.com:180/cgi-bin/g?DocF=micro/564/35.html.
Stebbing, Susan (1935). “Sounds, Shapes, and Words.” Proceedings of the Aristotelian Society, supp. vol. 14: 1–21.
Steiner, Mark (1975). Mathematical Knowledge. London: Cornell University Press.
Stevens, William (1996). “Wildlife Finds Odd Sanctuary on Military Bases.” New York Times, January 2, p. B9.
Strawson, Peter (1963). Individuals. Garden City, N.Y.: Doubleday.
Szabó, Zoltán (1999). “Expressions and Their Representations.” Philosophical Quarterly 49: 145–163.
Teller, Edward (1991). Conversations on the Dark Secrets of Physics. New York: Plenum Press.
“Tiniest Nuclear Building Block May Not Be the Quark.” (1996). New York Times, February 8.
Wetzel, Linda (1984). “On Numbers.” Doctoral dissertation, Massachusetts Institute of Technology.
Wetzel, Linda (1989a). “That Numbers Could Be Objects.” Philosophical Studies 56: 273–292.
Wetzel, Linda (1989b). “Expressions vs. Numbers.” Philosophical Topics 17: 173–195.
Wetzel, Linda (1990). “Dummett’s Criteria for Singular Terms.” Mind 99: 239–254.
Wetzel, Linda (1993). “What Are Occurrences of Expressions?” Journal of Philosophical Logic 22: 215–220.
Wetzel, Linda (2000a). “The Trouble with Nominalism.” Philosophical Studies 98: 361–370.
Wetzel, Linda (2000b). “Is Socrates Essentially a Man?” Philosophical Studies 98: 203–220.
Wetzel, Linda (2002). “On Types and Words.” Journal of Philosophical Research 27: 237–263.
Wierzbicka, A. (1993). “What Are the Uses of Theoretical Lexicography?” Dictionaries: The Journal of the Dictionary Society of North America 14: 44–78.
Wollheim, Richard (1968). Art and Its Objects. New York: Harper and Row.
Wolterstorff, Nicholas (1970). On Universals: An Essay in Ontology. Chicago: University of Chicago Press.
Wolterstorff, Nicholas (1975). “Toward an Ontology of Art Works.” Noûs 9: 115–142.
Wolterstorff, Nicholas (1980). Works and Worlds of Art. Oxford: Clarendon Press.
Wright, Crispin (1983). Frege’s Conception of Numbers as Objects. Aberdeen: Aberdeen University Press. Yablo, Stephen (2002). “Abstract Objects: A Case Study.” Noûs 36, supp. vol. 1: 220–240. Zalta, Edward (1983). Abstract Objects: An Introduction to Axiomatic Metaphysics. Dordrecht: D. Reidel. Zemach, Eddy (1992). Types: Essays in Metaphysics. Leiden: E. J. Brill. Ziff, Paul (1972). “What Is Said.” In Semantics of Natural Language, ed. D. Davidson and G. Harman. Dordrecht: D. Reidel.
Index
Abstract entities, 30, 47. See also Abstract objects Abstract objects, xi–xii, 3, 5–6, 11–12, 16, 29–34, 40–41, 49, 51, 53–56, 70, 73, 80, 85, 93, 99, 123–124, 130, 136, 140–141, 151nn2,4, 152n2, 156n15, 157n2, 158n1 alleged trouble with, 23, 30–32, 101 causal relations to us, 23, 34, 43–45, 101–102, 155n15 causal requirements on knowledge and, 32–40 characterized, ix, 114 criteria of identity for, xi–xii knowledge involving, xii, 23–24, 34, 40, 44, 101, 123 spatiotemporal relations to us, 47–50 Abstract spaces, 132, 140, 149, 161n2 Aesthetics, 2 Allophones, 9, 23, 26 Alston, William, 154n3 Alternations, 9 “Amphibians,” 146–148 “Analyze away,” xii, 4, 14, 53–54, 129, 133, 155n7 A priori knowledge, 35–36, 39 Armstrong, David, xi, 94, 100, 130, 135, 137–138, 144, 161n1 Artifacts, 15–18 Asher, Nicholas, 77–81, 151n4 Avant-garde interpretation, 120
Average, 76–77, 82 Average property interpretation, 119 Axiom of Foundation, 140 “Being at” relation, 45–46 Benacerraf, Paul, xii, 31–32, 37–40, 43, 118, 130, 141, 156n15 Bigelow, J., 161nn1,5 Biology, biologists, 4, 10–15, 29, 32, 74, 76, 83, 92, 107–111, 116–117, 123, 159n3 Black, Max, 136 Block, Ned, 70, 157n10 Boolos, George, 31, 36, 161n9 Bromberger, Sylvain, 38, 67, 106, 159n2 Burgess, John, 36, 39, 41–43, 156nn10,17 Cantor, Georg, 43 Carlson, Gregory, 83 Carnap, Rudolf, 30 Cartwright, Richard, 98, 151n2, 154n3 Castañeda, Hector-Neri, 35 Causal relations. See Abstract objects, causal relations to us Causal requirement on knowledge, xii, 23–24, 32–43, 101, 118, 150, 155n10 no. 1, 32–35 no. 2, 35–39 no. 3, 39–40
Causal theories of reference, 123 Causal theory of knowledge, 24 Characterizing property interpretation, 119 Characterizing sentence or statement, xii, 54, 71, 73, 77–81, 86, 119 Chess, 16–17, 26, 134 Chihara, Charles, 154n3 Claddistic approach to species. See Lineage approach to species Classes, 5, 88–92. See also Sets Collective property interpretation, 119 Composition, compose, 128, 133, 138, 140, 143–145, 147 Computers, 15, 23 Concrete signs, 56 Criteria of identity, xi–xii Criterion of application for ‘word’, 116 Criterion of identity for occurrences, 149–150 Criterion of object commitment, 25–27, 155n7, 158n1 Criterion of ontological commitment, xi, 25–26, 154n3 Dancy, Jonathan, 2 Darwin, Charles, 107 Davidson, Donald, 155n10, 156n11, 161n10 Davies, Stephen, 2, 16 Davis, Wayne, 157n11 Default reasoning analysis, 80 Definite descriptions, 26, 50–51 DeSousa, Ronald, 107 Distinguishing property interpretation, 119 Dummett, Michael, 27, 151n2, 158n1 Dupre, John, 107–110, 114, 151n2 Einstein, Albert, 19 Eliot, T. S., xiii, 125–126 Embryology, 74 Empiricists, 30
Environmental biology, 4, 11–12 Epistemological motivation, xii, 23 Epistemological problem for nominalism, xi–xiii, 51, 93, 96–103 for realism, 23, 30–32, 36–37, 44–51, 53, 93, 101, 118–123 Essentialism, 107 Ethics, 2 Etymology, 116–117 Evolution, 75, 108 Expressions, 1, 32, 37, 56, 125–133, 141, 154n5 Extension, 91 Extensional, 79–80, 86, 89 Façon de parler, xii, 2, 23, 28–30, 53, 86–87, 92, 156n1 Family resemblance, 109, 124, 160n5 Faraday, Michael, 18–19, 24 Feynman, Richard, 20–21, 152n6 Fictionalism, fiction, 29, 30, 149, 155n9 Field, Hartry, 24, 29–32, 37–51, 56, 93, 150, 156nn14,17 Fields (electromagnetic), 19 Firth, J. R., 112 Flags, 82, 85–86, 131–134, 142, 144 Football, 17–18, 23 Forces, 18, 26 Formalism, 56 Forrest, P., 161n1 Frege, Gottlob, xi, 25, 26–27, 129, 153n2, 158n1 Frege’s criterion, 26–27 Fudge, Erik, 8–10 Function, 56, 91, 128, 130, 149 Generic operator, 80–81 Generic sentence. See Characterizing sentence or statement Genes, 4, 12–13, 23, 26 Genetic approach to species, 107–109
Genetic code, 108–109 Genetics, 4, 10, 12–15 Goldbach’s conjecture, 101 Goldman, Alvin, 32–35, 156n13 Goodman, Nelson, ix, xiii, 31, 37, 53, 54–57, 60–61, 71, 83–87, 92–103, 112, 156n1, 157n6, 157n7, 158n3, 159n4 Grice, Paul, 1 Hale, Bob, 26–27, 34–36, 151n2, 154n5, 156n13 Halle, Morris, 67 Hanks, Patrick, 117 Hart, W. D., 24, 32, 36–37, 40, 156n16 Heisenberg, Werner, 19 Higher-order types, 14, 126 Hilbert, David, 56 Historical linguistics, 4, 5–6 Hodes, Harold, 37–40, 56 Hull, David, 74–76, 107 Hume, David, 89–90 Hume’s law. See Hume’s principle Hume’s principle, 36, 149 Identity, 89–90 Identity conditions, xi–xii for expressions, 128 for occurrences, 129–133, 136, 149–150 for possibilia, 90 for words, 58, 60, 66–69, 105–106, 116–117, 122, 159n1 Identity of indiscernibles, 136, 149 Identity theory of mind, 1, 2 Indexicals, 50–51 Inferential semantics, 114 Inscriptions, 57, 61, 94–100 Instantiation, 123–124, 135, 138, 146, 148 Intensional, 79, 86, 154n3 Intention hypothesis, 68–71 Intentions, 68–71, 106, 111
Internal comparison interpretation, 120 Isomorphisms, 135, 142, 148 Jubien, Michael, 154n3 Kalderon, Mark Eli, 155n9 Kaplan, David, ix, 68, 71, 111 Katz, Jerrold, 152n2 Kinds, 83, 88–89, 106–112, 118–120, 159n3. See also Natural kinds; Real kinds Kind statements, 81 Kitcher, Philip, 129–130, 141 Krifka, Manfred, 81, 119–120 Kripke, Saul, 1, 107, 153n1 Languages, 5–6, 23, 26, 55, 61, 63, 65, 67, 83, 114, 116, 152n2, 157n7 Leibniz, Gottfried Wilhelm, 46 Letters, 7, 61–65, 99, 114, 157nn7,8 Lewis, David, xiii, 125, 130–131, 138– 146, 159n3, 161nn1,4,6 Lexicography, lexicographer, 107, 111–112, 115–117, 119 Lindblom, Björn, 67 Lineage approach to species, 107, 110–111 Linguistic objects (linguistic entities), 37–40, 44, 55, 93–103, 114–115, 123, 156n16 Linguistic role, 70 Linguistics, 1, 4, 5–10, 29, 32, 41, 73, 81, 83, 92, 100–101, 103, 118 Linguistic theory, 95–96, 102, 113–114, 116, 121, 124 Locke, John, 55 Loeb, Louis, 156n13 Logic, 1, 42, 100 Loux, Michael, 70 Ludlow, Peter, 64 MacMahon, M. K. C., 6–8 Malament, David, 46
Markosian, Ned, 157n9, 158n12 Mathematical objects (mathematical entities), 37–41, 50, 55–56, 149–150, 155n9, 156n16 Mathematics, 29–32, 35, 41, 56, 155n10 Mayr, Ernst, 13–15, 41, 87, 109, 160nn3,6 McArthur, Tom, 114, 116 Meaning, 114–115, 121. See also Senses Membership relation, 137–138 Mereological composition, 138, 143– 145, 161n6 Metaphysics, xiii, 90 Mill, John Stuart, 2 Modal conditionals, 80 Morgenbesser, Sidney, 159n3 Morpheme, 114–116 Morphological approach to species, 107 Murdoch, Iris, 2 Murray, J. A. H., 117 Naturalized epistemology, 24, 32, 40–43 Natural kinds, 11, 106–107 Newton, Isaac, 46 Nominalism, nominalist, xii–xiii, 30– 31, 37, 41, 43, 53–56, 66, 80–81, 86– 88, 112–113, 125, 149–150, 159n4 and characterizing statements, 77–81 class, 73, 88–92, 137–138 definitions of, 30–31, 37, 55, 158n3 epistemological difficulties for, xi–xiii, 51, 93, 96–103 Field’s, 31, 37, 42, 56 Goodman and Quine’s, xiii, 31, 37, 56–57, 84–87, 93, 99–102, 157n7 motivation for, xii, 30–32 ostrich, xi, 100 and paraphrasing, 1, 41, 53, 83–88, 156n1 (see also Paraphrase) predicate, 94 trouble with, 93–103
Normal, 74–77, 82 Numbers, 31, 37, 39–40, 47, 49–51, 56, 132, 136, 141, 149, 151n3, 153n3, 158n1 Number theory, 36, 155n10 Occurrences, xiii, 125–150 Old Glory, x, 73, 82, 85, 86, 132, 134–136 Ordered pairs, 129–130, 141, 161n3 Paraphrase, xii, 1, 4–5, 23, 28–30, 41, 51, 53–54, 57–58, 71, 73, 83–85, 87–88, 93, 99, 156n1 Particulars, 144–145, 152n2 Parts, 138, 143–145 Peirce, Charles S., ix, 58, 125, 151n1 Pelletier, Francis Jeffry, 77–81, 83 Perceptual knowledge, 36 Person, concept of, 114 Philosophy of language, 1 of mathematics, 23, 50, 55–56 of mind, 1 Phonemes, 8–10, 24, 65–67, 106, 114–115, 133 Phonetics, 6–8, 66 Phonological hypothesis, 65–68 Phonology, 8–10, 65–68, 114 Phylogenetic criterion. See Lineage approach to species Physicalism, physicalist, 45, 48 Physical relations, 44–48 Physics, 17–21, 29, 32, 92 Pinkard, Terry, 160n5 Place, U. T., 1 Platonic heaven. See Platonic realm Platonic objects, 46–47 Platonic realm, 31–32, 43, 87 Platonic relationship principle, 38 Platonism, 31, 36, 40–41, 155n10. See also Realism, realist Platonistic sentence, 84, 156n1
Pluralities, 39–40, 151n3 Points (in space), 46–48, 149–150 Population approach to species, 107, 109–110 Positions, 129, 132–136, 145–146, 149 Possibilia, 90, 99, 102, 156n15, 157n2 Possible worlds, 91, 140, 155n9 Possible world semantics, 80 Pragmatics, 106 Predicates, xi, 49, 82, 94, 100–103, 119, 155n7 Problem of reference, 49–50 Problem of universals, 94 Pronunciation, 38, 58–60, 66, 68, 105, 112, 114–116, 118, 121–123 Proof theory, 56, 100–101, 157n7 Properties, x, 5, 31, 57, 81–82, 105, 107, 111–112, 117–124, 135–138, 151n2, 157n11 Prototype, 80 Putnam, Hilary, 107–108 Quantification, 12, 54–55, 73, 80, 84, 87–88, 92 Quantifiers, 26, 28, 32, 78–80, 153n1, 155n7 Quine, Willard Van, xi, xiii, 30, 37, 51, 61, 116, 129, 141, 157n7, 159n5 criterion of ontological commitment, xi, 11, 25–26, 28, 54, 153n3, 155n7 kinds, 88–89, 117 mistrust of the intensional, 79, 86, 90, 102, 154n3, 159n7 need for paraphrasing, 28, 54, 84–88, 99, 156n1, 159n4 as a nominalist, 31, 37, 55–57, 93–94, 99–102 occurrences, 126–127, 129 universals, xi, 54, 94 Realism, realist, 31, 51, 93, 95, 98–99, 101–102, 124–125, 157n1 Realistic pluralism, 114
Realist semantics, 55, 96, 98–99 Real kinds, 106–109, 111–112, 159n3 Received pronunciation (RP), 59 Referring, 49–50 Relations, x, 30, 45. See also Membership relation; “Being at” relation; Spatiotemporal relations; Physical relations; Tokening relation; Instantiation; Parts; Causal relations Replicas, 97–102 Representative object interpretation, 120 Ross, W. D., 2 Ruse, Michael, 107 Russell’s paradox, 31 Russell’s theory of definite descriptions, 155n7 Rutherford, Ernest, 20, 24 Searle, John, 154n3 Second-order logic, 36 Sellars, Wilfrid, 53–55, 57, 70, 83 Senses, 58, 59, 105, 112, 116, 127, 158n1 Sentence formation rules, 41 Sentences, 24, 92, 99–100, 115–116, 122, 125, 143, 152n2, 158n1 Sets, 56, 88, 92, 138, 140–141, 144, 158n3, 161n3 Set theory, 43, 139–141, 159n3, 161n6 Shapes, 61, 113, 157n8 Shape theory of letters, 61–65 Similarity, similar, 53, 61–64, 97, 109, 112–113, 118, 157n6 Simons, Peter, 125–127 Simple universals, 139–140 Simplicity, 42 Simpson, G. G., 110–111 Singular terms, xi, 4, 11, 26–27, 101– 102, 124, 154nn5,6, 155n7, 158n1 Situation semantics, 80 Skyrms, Brian, 33–34, 156n13 Smart, J. J. C., 1
Spaces, 56, 132, 149–150 Space-time regions, 46–50 Spatiotemporal relations, 45–50 Species, 11–14, 23, 24, 26, 29, 74, 78, 88–92, 105–114, 116–117, 119–121, 124, 152n4, 158n1, 159n3, 159n3, 160nn5,6,7 Spelling, 38, 58–61, 74, 105–106, 111–112, 114–118, 120–121 Spelling theory, 60–61, 65 States of affairs, 144, 146, 151n4 Steiner, Mark, 34, 37, 155n10 Stereotypes, 80 Strawson, Peter, 152n5 Structural properties, 136–138, 146 Structural types, xiii, 133. See also Structural universals Structural universals, xiii, 130–131, 135–149 linguistic conception of, 139–142, 143 magical conception of, 139 occurrence conception of, 131, 134, 139, 142, 146, 148–150 pictorial conception of, 139, 142–145 Surface structure, 4–5 Syllables, 7, 23, 26 Teller, Edward, 20 Tokening relation, 45–46 Types of actions, 2 Type talk, xii, 3, 14, 15, 20–22, 29–30, 71, 73, 84, 103, 150 Type–token distinction, ix, 1, 3 Universals, ix–xi, 53–55, 57, 94, 124, 130, 137–140, 144–150, 160n5 Utterances, 8, 10, 57, 61, 93, 95–100, 121–122 Variables, 25, 124, 148, 160n5 Vowels, 7–8, 23, 26
Wetzel, Linda, 27, 39, 56, 109, 130, 141, 151n3, 159n8, 161n3 White, Heath, 134–136 Wittgenstein, Ludwig, 160n5 Wollheim, Richard, x, 2, 16 Wolterstorff, Nicholas, 2, 16, 125 Words, xii–xiii, 5–7, 38, 49, 53, 55, 57–61, 65–71, 82–83, 95–103, 105– 127, 143, 158n1, 159n3, 160nn5,7, 161n8 Works of art, 2, 15–16 Wright, Crispin, 27, 36, 154n6, 155n10 Ziff, Paul, 58