agrees on whether p. There may be more or less variation in the strength of the intuition, but either everyone who intuits either way intuits that p or else everyone who intuits either way intuits that not-p. If so, there may remain considerable variation in the number of those who intuit either way or in the strength with which they do so, but whatever such variation may remain does not obviously pose a problem for analytic epistemology. Presumably, the main problem would derive not just from any such variation, but rather from conflict. There must be enough people from one side with a strong enough positive intuition in conflict with enough people from the other side with a strong enough negative intuition. What is more, they must have conflicting intuitions on the same contents. Given all this, I have some doubts about the study and its ostensible results. In each instance the study presented an example to two groups different culturally or socioeconomically, who were asked to say whether a protagonist in the example knew a certain fact or only believed it. Striking statistical variation was found in several instances. Consider first a few of the examples:

1 (From p. 443.) Bob has a friend, Jill, who has driven a Buick for many years. Bob therefore thinks that Jill drives an American car. He is not aware, however, that her Buick has recently been stolen, and he is also not aware that Jill has replaced it with a Pontiac, which is a different kind of American car. Does Bob really know that Jill drives an American car, or does he only believe it?
the use of intuitions in philosophy

REALLY KNOWS  ONLY BELIEVES
2 (From p. 444.) It’s clear that smoking cigarettes increases the likelihood of getting cancer. However, there is now a great deal of evidence that just using nicotine by itself without smoking (for instance, by taking a nicotine pill) does not increase the likelihood of getting cancer. Jim knows about this evidence and as a result, he believes that using nicotine does not increase the likelihood of getting cancer. It is possible that the tobacco companies dishonestly made up and publicized this evidence that using nicotine does not increase the likelihood of cancer, and that the evidence is really false and misleading. Now, the tobacco companies did not actually make up this evidence, but Jim is not aware of this fact. Does Jim really know that using nicotine doesn’t increase the likelihood of getting cancer, or does he only believe it?

REALLY KNOWS  ONLY BELIEVES
3 (From p. 445.) Mike is a young man visiting the zoo with his son, and when they come to the zebra cage, Mike points to the animal and says “that’s a zebra.” Mike is right – it is a zebra. However, as the older people in his community know, there are lots of ways that people can be tricked into believing things that aren’t true. Indeed, the older people in the community know that it’s possible that zoo authorities could cleverly disguise mules to look just like zebras, and people viewing the animals would not be able to tell the difference. If the animal that Mike called a zebra had really been such a cleverly painted mule, Mike still would have thought that it was a zebra. Does Mike really know that the animal is a zebra, or does he only believe it?

REALLY KNOWS  ONLY BELIEVES
The responses to these and other examples were found to vary significantly with cultural or socioeconomic background. It is this that allegedly presents a problem for analytic epistemology. But the following reflections suggest otherwise.

1 It is not clear exactly what question the subjects disagree about. In each case, the question would be of the form: “Would anyone who satisfied condition C with regard to proposition ⟨p⟩ know that p or only believe it?” It is hearing or reading a description of the example that enables the subjects to fill in the relevant C and ⟨p⟩. But can we be sure that they end up with exactly the same C and ⟨p⟩? Here is a reason for doubt. When we read fiction we import a great deal that is not explicit in the text. We import a lot that is normally presupposed about the physical and social structure of the situation as we follow the author’s lead in our own imaginative construction. And the same seems plausibly true about the hypothetical cases presented to our WNS subjects. Given that these subjects are sufficiently different culturally and socioeconomically, they may, because of this, import different assumptions as they follow in their own imaginative construction the lead of the author of the examples, and this may result in their filling the crucial C differently. Perhaps, for example, subjects who differ enough culturally or socioeconomically will import different background beliefs as to the trustworthiness of American corporations or zoos, or different background assumptions about how likely it is that an American who has long owned an American car will continue to own a car and indeed an American car. For some if not all of the examples, I can’t myself feel sure that C stays constant across the cultural or socioeconomic divide. But if C varies across the divide, then the subjects may not after all disagree about the very same content.

2 A second reason for doubt pertains to the choices offered. We are all familiar with multiple-choice tests where we are asked to choose the option closest to the truth. Now, the choices presented to our subjects were just (a) S knows that p, and (b) S only believes [and does not know] that p. But there are other logically possible options that were left out. It is compatible with all the results obtained that if the test had included a third choice, one that it could logically have included, then there would have been unanimity across all the groups, or at least substantially less divergence. Here is one such third choice: (c) we are not told enough in the description of the example to be able to tell whether the subject knows or only believes. What I am suggesting is that for at least some if not all of the examples, this might be the option of choice across the board, even when subjects import what they can properly import from their background knowledge. If this turned out to be so, that would considerably diminish the interest and importance of whatever differences may remain in the distribution of answers across divides.

3 Finally, WNS explain the conflicting intuitions across the East-Asian/Western divide by appeal to what they call epistemic vectors (p. 457). East Asians (EAs) are said to be “much more sensitive to communitarian factors, while Westerners (Ws) respond to more individualistic ones” (p. 451).
And the disagreement may now perhaps be explained in a way that casts no doubt on intuition as a source of epistemic justification or even knowledge. Why not explain the disagreement as merely verbal? Why not say that across the divide we find somewhat different concepts picked out by terminology that is either ambiguous or at least contextually divergent? On the EA side, the more valuable status that a belief might attain is one that necessarily involves communitarian factors of one or another sort, factors that are absent or minimized in the status picked out by Ws as necessary for “knowledge.” If there is such divergence in meaning as we cross the relevant divides, then once again we fail to have disagreement on the very same propositions. In saying that the subject does not know, the EAs are saying something about lack of some relevant communitarian status. In saying that the subject does know, the Ws are not denying that; they are simply focusing on a different status, one that they regard as desirable even if it does not meet the high communitarian requirements important to the EAs. So again we avoid any real disagreement on the very same propositions. The proposition affirmed by the EAs as intuitively true is not the very same as the proposition denied by the Ws as intuitively false. In a more recent paper,4 WNS have responded to this sort of doubt by conceding the possibility of such conceptual variation combined with terminological uniformity, while countering that the variation would still be a problem for analytic epistemology.
In the philosophical tradition, skepticism is taken to be worrisome because it denies that knowledge is possible, and that’s bad because knowledge, it is assumed, is something very important. On Plato’s view, “wisdom and knowledge are the highest of human things” . . . and many people, both philosophers and ordinary folk, would agree. But obviously, if there are many concepts of knowledge, and if these concepts have different extensions, it can’t be the case that all of them are the highest of human things. [It has been argued that] . . . the arguments for skepticism in the philosophical tradition pose a serious challenge to the possibility of having what high SES [Socio-Economic Status], white westerners with lots of philosophical training call ‘knowledge’. But those arguments give us no reason to think that we can’t have what other people – East Asians, Indians, low SES people, or scientists who have never studied philosophy – would call ‘knowledge’. And, of course, those skeptical arguments give us no reason at all to think that what high SES white western philosophers call ‘knowledge’ is any better, or more important, or more desirable, or more useful than what these other folks call ‘knowledge’, or that it is any closer to ‘the highest of human things’. Without some reason to think that what white, western, high SES philosophers call ‘knowledge’ is any more valuable, desirable or useful than any of the other commodities that other groups call ‘knowledge’ it is hard to see why we should care if we can’t have it.5
To my eyes this line of reasoning boils down to the following: If what is picked out by the cognates of ‘knowledge’ in various cultures and socioeconomic groups varies enough, this itself gives rise to doubt that we should continue to value what is picked out by our epistemic vocabulary of “knowledge,” “justification,” et cetera. This line of argument I find baffling. I wonder how it is any better than saying to someone who values owning money banks that, since others mean river banks by ‘banks’, his valuing as he does is now in doubt, and that he needs to show how owning money banks is better than owning river banks. Why need he suppose that owning money banks is better? He just thinks it’s quite good as far as it goes. Maybe owning river banks is also good, maybe even better in many cases. And the same would seem reasonable when the commodities are all epistemic. The fact that we value one commodity, called ‘knowledge’ or ‘justification’ among us, is no obstacle to our also valuing a different commodity, valued by some other community under that same label. And it is also compatible with our learning to value that second commodity once we are brought to understand it, even if we previously had no opinion on the matter.

4 Nevertheless, it might be concluded, we do get a real disagreement between the EAs and the Ws when the former insist on communitarian standards for the formation of beliefs while the latter do not. And this raises an interesting question about the content of epistemic normative claims. When we say that a belief is justified, epistemically justified, or even amounts to knowledge, are we issuing a normative verdict that this is a belief one should form or sustain? Might there not be more valuable or important things that we might be doing with our time than forming a belief on that question? Are we even saying so much as this: that if we leave aside other desiderata proper to a flourishing
life, and focus only on epistemic desiderata, then we should be forming or sustaining this belief? I doubt that our talk of knowledge and epistemic justification is properly understood along these lines. Just consider the fact that one can obsessively accumulate all sorts of silly facts that one has no business attending to at all, that are not worthy of one’s attention. One might out of the blue decide to count the number of coffee beans remaining in one’s coffee bag and if one proceeds with due care and diligence one may attain epistemic justification of a very high grade that there are now n beans in that bag. Is this something that one should believe at that time? Well, in one clear sense it is not. Clearly one should not even concern oneself with that question, so it is false that one should be conducting one’s intellectual life in such a way that one then returns an affirmative answer to that question. The whole question is beneath one’s notice. One should not be forming any opinion, positive or negative, on that question. One has better things to do with one’s time, even if we restrict ourselves to properly epistemic concerns. That being so, it is far from clear that the EA emphasis on communitarian factors will necessarily reveal itself in a proclivity to form beliefs that satisfy such factors, or in a normative approval of beliefs that satisfy such factors, or even in a normative approval of such beliefs once we restrict ourselves to epistemic concerns. Silly beliefs about trivial matters can attain the very highest levels of epistemic justification and certain knowledge even if these are not beliefs that one should be bothering with, not even if one’s concerns are purely epistemic. Thus, the supposed normativity of epistemology seems rather like the normativity of a good gun or a good shot. 
This normativity is restricted to the sphere of guns and shots in some way that isolates it from other important concerns, even from whether there should be guns at all, or shots. At least that seems clear for a discipline of epistemology whose scope is the nature, conditions, and extent of knowledge. If ours is the right model for understanding the normativity proper to such epistemology, then in speaking of a justified belief we are saying something rather like “Good shot!” which someone might sincerely and correctly say despite being opposed to gun possession and to shooting.6 And now any vestige of conflict across the divides is in doubt. For now there seems no more reason to postulate such conflict than there would be when we compare someone who rates cars in respect of how economical they are with someone who rates them in respect of how fast they can go. Even when we take all such considerations into account, clearly we will fall short if we leave out of account the sort of disagreement that divides the superstitious from the enlightened. The enlightened are not just saying that the superstitious value beliefs that satisfy certain conditions (derivation from tea leaves, or crystal balls, or certain writings) such that the enlightened are just focused on different conditions. No, the enlightened object to the conditions elevated by the superstitious. But they do not necessarily object to the formation of such beliefs as a means to inner peace or community solidarity. They may object this way too, but they need not, and probably should not, at least in some actual cases of primitive cultures, and in many cases of conceivable cultures. What the enlightened object to is the notion that the sort of status elevated by the superstitious
constitutes epistemic value in the actual world. And this is presumably because they see superstitious status as insufficiently connected with truth. Compare a culture that loves the way a certain sort of gun sounds, even though it is woefully unreliable and far inferior to bows and arrows. The visiting military advisor need not object to their preference for that sound, nor need he object to their taking the gun into battle in preference to their bows and arrows. He need not object to that all things considered. That would be at most the business of a political advisor; actually, not even he may be in a position to make any such all-things-considered objection. The military advisor’s advice is restricted to informing his clients on what would produce the best results in the battlefield with regard to military objectives. The political advisor’s advice would take that into account, but would go beyond it to consider also broader political objectives. And of course even that will not cover the full span of considerable objectives. Something similar seems true of epistemology. Epistemic justification concerns specifically epistemic values, such as truth, surely, and perhaps others not entirely reducible to truth, such as understanding. Even once we put aside inner peace, happiness, solidarity, and technological control, as not properly epistemic values, however, various remaining statuses of a belief may still qualify as epistemic, such as the following:

• being true;
• being a truth-tracker (would be held if true, not if not true);
• being safe (would not be held unless true);
• being virtuously based (derives from a truth-reliable source);
• being rationally defensible by the believer;
• being reflectively defensible by the believer (rationally defensible in respect of the truth-reliability of its sources);
• being virtuously based through a virtue recognized as such in the believer’s community (and, perhaps, properly recognized as such).

Interestingly enough, it is not just people from different cultures or different socioeconomic groups who apparently diverge in rational intuitions on epistemic questions. Notoriously, contemporary analytic epistemologists have disagreed among themselves, nearly all professors at colleges or universities, nearly all English-speaking Westerners. On one side are internalist, evidentialist, classical foundationalists, on the other externalists of various stripes (process reliabilists, trackers, proper functionalists, some virtue epistemologists). It is increasingly clear, and increasingly recognized, that the supposed intuitive disagreements across this divide are to a large extent spurious, that different epistemic values are in play, and that much of the disagreement will yield to a linguistic recognition of that fact, perhaps through a distinction between “animal” knowledge and “unreflective” justification, on one side, and “reflective” knowledge and justification on the other.
ernest sosa
Notes and References

1 ‘Cognitive state’ is Stich’s term for belief-like information-storing mental states, while ‘cognitive processes’ is his “cover term whose extension includes our own reasoning processes, the updating of our beliefs as a result of perception, and the more or less similar processes that occur in other organisms.” See p. 571 of his “Reflective Equilibrium, Analytic Epistemology, and the Problem of Cognitive Diversity,” Synthese 74 (1988): 391–413; the references here are to its reprinting in E. Sosa and J. Kim (eds.), Epistemology: An Anthology (Blackwell, 2000), pp. 571–83. Parenthetical references in the main text will be to this publication. My own preference is to stretch the terms ‘belief’ and ‘reasoning’ to cover these extensions. So the beliefs and reasonings to be discussed in what follows count as such in correspondingly broad senses. Reasoning, for example, is a “cognitive process” that bases a belief or “cognitive state” on reasons, i.e., on other mental states to which the subject gives some weight, pro or con, in forming or sustaining that belief. ‘Basing’, finally, is quasi-technical. Ordinarily we would speak more naturally of acting for a reason, or of being angry or in some other emotional state for a reason. Decisions and beliefs are naturally said to be based on reasons, however, and I am here extending the use of that terminology to cover all cases in which one is in a mental state for a reason, or for some reasons. (And it is better yet to speak of one’s being in that mental state for a reason that then “motivates” one to be in that state. This is to distinguish the case of interest from that in which one is in the state in question “for a reason” only in the sense that “there is” a reason why one is in that state, though it is not a reason that one has, much less one that motivates one to be in that state.)

2 It is a great pleasure for me to contribute to this well-deserved tribute to Steve Stich, long-time colleague and true friend, and iconoclast of analytic epistemology. Here I engage only one of the challenges in his stimulating and influential work.

3 In The Philosophy of Alvin Goldman, a special issue of Philosophical Topics, ed. Christopher S. Hill, Hilary Kornblith, and Tom Senor, Vol. 29, Nos. 1 and 2, pp. 429–61. (Parenthetical page references in the main text will now be to this article.)

4 “Meta-skepticism: Meditations in Ethno-epistemology,” in S. Luper (ed.), The Skeptics (Ashgate, 2003), with a different order of authors, now listed as Nichols, Stich, and Weinberg.

5 Ibid., p. 245.

6 This leaves open the possibility of a broader concern with the kind of knowledge we should seek in a good life. Wisdom might be one such, something closely connected with how to live well, individually and collectively. Another such might be a world view that provides deep and broad understanding of major departments of proper human curiosity, which of course cries out for an account of what makes curiosity proper.
7 Reflections on Cognitive and Epistemic Diversity: Can a Stich in Time Save Quine?

MICHAEL BISHOP
In “Epistemology Naturalized,” Quine famously suggests that epistemology, properly understood, “simply falls into place as a chapter of psychology and hence of natural science” (1969: 82). Since the appearance of Quine’s seminal article, virtually every epistemologist, including the later Quine (1986: 664), has repudiated the idea that a normative discipline like epistemology could be reduced to a purely descriptive discipline like psychology. Working epistemologists no longer take Quine’s vision in “Epistemology Naturalized” seriously. In this chapter, I will explain why I think this is a mistake. In the 1980s and early 1990s, Stephen Stich published a number of works that criticized analytic epistemology and defended a pragmatic view of cognitive assessment (1985, 1987, 1990, 1993). In the past five years, Stich, Jonathan Weinberg, and Shaun Nichols (henceforth, WNS) have put forward a number of empirically based arguments criticizing epistemology in the analytic tradition (Weinberg, Nichols and Stich 2001; Nichols, Stich and Weinberg 2003). My thesis is that the most powerful features of Stich’s epistemological views vindicate Quine’s now moribund naturalism. I expect this thesis to be met with incredulity – not least from Stich, who has explicitly argued that the reductionist view standardly attributed to Quine is a non-starter (1993: 3–5). The chapter will proceed as follows. In section 1, I interpret Stich’s epistemology in a way that provides a prima facie vindication of Quine’s naturalism. In section 2, I take a slight detour to consider and reply to a family of arguments that aim to show that giving up the analytic project in the way Stich suggests is ultimately self-defeating. In section 3, I propose a reliabilist approach to epistemic evaluation that I argue is superior on pragmatic grounds to Stich’s view. 
In the final section, I bring the various threads together and suggest that the sturdiest elements of Stich’s epistemology drive us to the vision Quine advocated in “Epistemology Naturalized.”
1 Stich’s Epistemology

At its heart, Stich’s view consists of two empirical hypotheses.

1 Cognitive diversity: There are significant and systematic differences in how different people reason about the world.
2 Epistemic diversity: There are significant and systematic differences in the epistemic concepts, judgments, and practices different people employ in evaluating cognition.

Cognitive diversity (the subject of 1.1) raises what is, for Stich, the fundamental problem of epistemology: How are we to evaluate the various ways different people can and do reason? And epistemic diversity (the subject of 1.2) undermines the most popular method philosophers have used to solve that problem: by developing theories that capture our commonsense epistemic judgments (i.e., our epistemic intuitions). But if conceptual analysis, the analysis of our epistemological concepts, cannot deliver a solution to the fundamental problem of epistemology, what’s left? Stich’s epistemological writings suggest two alternative approaches (the subject of 1.3). One approach is explicitly argued for in Stich’s earlier writings, and the other is implicitly suggested in the later, coauthored pieces. While I will argue that there may be some tension in these approaches, they both vindicate Quine.
1.1 The challenge of cognitive diversity

Cognitive diversity holds that there are significant and systematic differences in how different people reason about the world. In the earlier pieces, Stich does not offer much empirical evidence for thinking that cognitive diversity is true. In Fragmentation, for example, Stich tries “to make the world safe for irrationality” by criticizing two arguments against cognitive diversity (1990: 17). I will not pursue these defensive maneuvers because in the 15 years since Fragmentation, psychologists have amassed a good deal of evidence in support of cognitive diversity. Richard Nisbett and his colleagues have identified some significant differences in the thought patterns of East Asians (Chinese, Japanese and Koreans) and non-Asian Westerners (from the US and Europe) (Nisbett, Peng, Choi and Norenzayan 2001; Nisbett 2003). The reasoning of Westerners tends to be more analytic, “involving detachment of the object from its context, a tendency to focus on attributes of the object to assign it to categories, and a preference for using rules about the categories to explain and predict the object’s behavior. Inferences rest in part on the practice of decontextualizing structure from content, the use of formal logic, and avoidance of contradiction.” The reasoning of East Asians tends to be more holistic, involving an orientation to the context or field as a whole, including attention to relationships between a focal object and the field, and a preference for explaining and predicting events on the basis of such relationships. Holistic approaches rely on experience-based knowledge rather than on abstract logic and are dialectical, meaning that there is an emphasis on change, a recognition of contradiction and of the need for multiple perspectives, and a search for the “Middle Way” between opposing propositions. (Nisbett et al. 2001: 293)
A few examples will help make this distinction concrete. In the “Michigan Fish” study, Japanese and American subjects viewed animated underwater scenes and then reported what they had seen (Masuda and Nisbett 2001). The first statement by Americans usually referred to the fish, while the first statement by Japanese usually referred to background elements, e.g., “There was a lake or a pond.” The Japanese made about 70 percent more statements than Americans about background aspects of the environment, and 100 percent more statements about relationships with inanimate aspects of the environment, e.g., “A big fish swam past some gray seaweed” (Nisbett et al. 2001: 297). In this study, the Western subjects focused on objects detached from their background, while the Japanese subjects focused on the context and the relationships between objects in the field. Referring to this study, Nisbett has joked that for Westerners, if it doesn’t move, it doesn’t exist.

In another fascinating study (Peng and Nisbett 1999), American and Chinese subjects were shown conflicting statements, such as:

A A survey found that older inmates are more likely to be ones who are serving long sentences because they have committed severely violent crimes. The authors concluded that they should be held in prison even in the cases of a prison population crisis.

B A report on the prison overcrowding issue suggests that older inmates are less likely to commit new crimes. Therefore, if there is a prison population crisis, they should be released first.

Subjects were asked to rate the plausibility of one or both of these claims. For each pair of statements studied, American and Chinese subjects agreed about which was more plausible (in this case, A). Who gives higher plausibility ratings to A, subjects who rate A by itself, or subjects who rate both A and B together? The answer is different for American and Chinese subjects.
American subjects gave higher plausibility ratings to A when they rated both A and B, while Chinese subjects gave higher plausibility ratings to A when they rated it by itself. In the face of conflicting claims, American subjects become more polarized while Chinese subjects become less polarized (Nisbett et al. 2001: 302). East Asians tend to avail themselves of a “Middle Way” whereas Westerners tend to insist on “My Way.” After reviewing the evidence for cognitive diversity, Nisbett and his colleagues conclude that “literally different cognitive processes are often invoked by East Asians and Westerners dealing with the same problem” (Nisbett et al. 2001: 305). Nisbett argues that these cognitive differences are explained by some deep and long-standing cultural differences between East Asian and Western societies (Nisbett 2003: chs. 1–3). In support of this hypothesis, Nisbett and his colleagues note that “Asians move radically in an American direction after a generation or less in the United States” (Nisbett et al. 2001: 307). A fair reading of the literature suggests that Stich’s defense of cognitive diversity was prescient.
Why does cognitive diversity matter to epistemology? “It is the prospect of cognitive diversity among normal folk that lends a genuine, almost existential, urgency to the project of cognitive evaluation” (1990: 74). Cognitive diversity motivates a project Stich sees as fundamental to epistemology: How to evaluate the various ways different people can and do reason. Most contemporary epistemology focuses on defending theories and theses about epistemological categories that apply to individual belief tokens (i.e., knowledge or justification). But a theory that assesses how different people reason will not focus on the evaluation of belief tokens. Instead, it focuses on the evaluation of methods of inquiry. It tries to say which ways of going about the quest for knowledge – which ways of building and rebuilding one’s doxastic house – are the good ones, which the bad ones, and why. Since reasoning is central to the quest for knowledge, the evaluation of various strategies of reasoning often plays a major role in the assessment of inquiry. (1990: 1, emphases added)
As Stich notes, this project has been pursued by Bacon, Descartes, Mill, Carnap and Popper, among others. But it differs substantially from the projects pursued by almost all contemporary epistemologists. Rather than provide a theory for the evaluation of belief tokens, Stich aims to provide a way to evaluate methods of inquiry, ways of acquiring and revising beliefs, reasoning strategies.1, 2
1.2 Epistemic diversity
Epistemic diversity holds that there are significant and systematic differences in the epistemic concepts, judgments, and practices different people employ in evaluating cognition. The early arguments do not offer much in the way of evidence for epistemic diversity. In Fragmentation, for example, Stich raises the possibility of epistemic diversity in order to criticize the analytic approach to epistemology. The main aim of the later arguments is to proffer empirical evidence for epistemic diversity. In a typical study, WNS gave the following Gettier-style example to a group of Western subjects and a group of non-Western subjects:
Bob has a friend, Jill, who has driven a Buick for many years. Bob therefore thinks that Jill drives an American car. He is not aware, however, that her Buick has recently been stolen, and he is also not aware that Jill has replaced it with a Pontiac, which is a different kind of American car. Does Bob really know that Jill drives an American car, or does he only believe it?
REALLY KNOWS    ONLY BELIEVES
A large majority of Western subjects gave the answer sanctioned by analytic epistemologists (“only believes”), but a majority of East Asians and a majority of subjects from India gave the opposite answer (“really knows”) (2001: 443). WNS also found cases in which there were significant differences between the epistemic judgments of people of high socioeconomic status (SES) and of low SES (2001: 447–8).
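The kind of between-group difference WNS report is standardly checked with a test of independence between group membership and response. As a hedged illustration only – the counts below are invented for the example, not WNS's actual data – here is a minimal chi-square computation for a 2×2 table of "really knows" versus "only believes" answers:

```python
# Toy illustration (invented counts, NOT WNS's reported data):
# test whether answers are statistically independent of group.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table
    [[a, b], [c, d]] (rows = groups, columns = answers),
    using the standard shortcut formula for 2x2 tables."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (really knows, only believes) per group.
western = (8, 66)      # most answer "only believes"
east_asian = (13, 10)  # majority answer "really knows"

stat = chi_square_2x2(*western, *east_asian)
# With 1 degree of freedom, the 0.05 critical value is 3.841;
# a statistic above it indicates answers depend on group.
print(stat > 3.841)
```

A larger statistic means the response pattern is less plausibly attributed to chance variation within a single, culturally uniform population – which is the kind of evidence the epistemic diversity thesis requires.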
cognitive and epistemic diversity
117
WNS emphasize that they did not merely find random variation in people’s epistemic judgments across cultures. The cognitive differences that Nisbett and his colleagues found between East Asians and Westerners are reflected in WNS’s epistemic diversity findings. This consilience suggests some pretty deep differences in how people in different cultures evaluate reasoning. WNS argue that the differences between Ws [Westerners] and EAs [East Asians] look to be both systematic and explainable. EAs and Ws appear to be sensitive to different features of the situation, different epistemic vectors, as we call them. EAs are much more sensitive to communitarian factors, while Ws respond to more individualistic ones. Moreover, Nisbett and his colleagues have given us good reason to think that these kinds of differences can be traced to deep and important differences in EA and W cognition . . . What our studies point to, then, is more than just divergent epistemic intuitions across groups; the studies point to divergent epistemic concerns – concerns which appear to differ along a variety of dimensions. (Weinberg, Nichols and Stich 2001: 451)
In Fragmentation, Stich laid a bet that the epistemic diversity thesis is true. This increasingly coherent body of evidence gives us some reason to think that Stich may well win that bet. Of course, one might reasonably raise concerns about these studies, and defenders of the analytic tradition in epistemology have already done so (e.g., Sosa, this volume). In my view, the best way to rebut WNS’s empirical case for epistemic diversity is with more empirical findings. Absent contrary findings, I propose to provisionally accept that people in different cultures evaluate cognition in at least somewhat different ways.
1.3 Two arguments against analytic epistemology
For an epistemological theory to be genuinely prescriptive, for it to earn its normative keep, it must explain why some ways of thinking are better than others. Theories of analytic epistemology seek “criteria of cognitive evaluation in the analysis or explication of our ordinary concepts of epistemic evaluation” (1990: 19). According to WNS, contemporary epistemology assumes that knowledge of the correct epistemic norms is implanted in us and that we can discover them by an appropriate process of self-exploration. The analytic philosopher’s method for testing and developing epistemological theories, which they dub “Intuition Driven Romanticism,” involves three features: (1) it takes epistemic intuitions (spontaneous judgments about the epistemic properties of cases) as inputs; (2) it produces epistemic claims or principles as output; and (3) the output is a function of the input – in the sense that significantly different inputs would yield significantly different outputs (Weinberg, Nichols and Stich 2001: 432). How does epistemic diversity undermine the method of analytic epistemology? The main early argument is framed in terms of what we intrinsically value, what we value for its own sake:
other languages and other cultures certainly could and probably do invoke concepts of cognitive evaluation that are significantly different from our own . . . For many people –
certainly for me – the fact that a cognitive process is sanctioned by the venerable standards embedded in our language of epistemic evaluation, or that it is sanctioned by the equally venerable standards embedded in some quite different language, is no more reason to value it than the fact that it is sanctioned by the standards of a religious tradition or an ancient text, unless, of course, it can be shown that those standards correlate with something more generally valued . . . (Stich 1990: 94)
Stich’s contention is that when we “view the matter clearly, most people will not find it intrinsically valuable to have cognitive states or to invoke cognitive processes that are sanctioned by the evaluative notions embedded in ordinary language” (1990: 93). Unless we are inclined toward epistemic chauvinism or xenophobia – preferring our epistemic standards merely because they are our own – we are unlikely to value cognitive states or processes merely because they accord with the contingent and idiosyncratic standards implicit in our epistemic concepts.3 Of course, if that’s true, then most of us will not intrinsically value beliefs and belief-forming processes because they are endorsed by somebody else’s intuitions or practices of epistemic evaluation, either. In Fragmentation, the objection to analytic epistemology is that it is incapable of solving the fundamental problem of epistemology – how to evaluate the various ways different people reason. Its basic mistake is to evaluate such matters on the basis of our idiosyncratic, culturally specific epistemic concepts, judgments, intuitions, and practices.4 There is a second argument against analytic epistemology that is implicit in the later writings. Analytic epistemology seeks theories that aim to capture our intuitive judgments about whether a belief is justified or whether S knows that p. Jaegwon Kim gives clear expression to this desideratum: “It is expected to turn out that according to the criteria of justified belief we come to accept, we know, or are justified in believing, pretty much what we reflectively think we know or are entitled to believe” (1988: 382). Of course, in order to clarify and regiment our epistemological judgments, a successful theory might require some minor revision to some of our reflective epistemic judgments. But other things being equal, a theory that captures our reflective epistemic judgments is better than one that does not. J. D. 
Trout and I (2005a, 2005b) have called this the stasis requirement: To be correct, theories of analytic epistemology must “leave our epistemic situation largely unchanged” (Kim 1988: 382).5 WNS pointedly suggest that when we couple epistemic diversity with the stasis requirement, the resulting image of analytic epistemology is not of a universal normative theory, but of ethnography: Our data indicate that when epistemologists advert to “our” intuitions when attempting to characterize epistemic concepts or draw normative conclusions, they are engaged in a culturally local endeavor – what we might think of as ethno-epistemology . . . [I]t is difficult to see why a process [Intuition Driven Romanticism] that relies heavily on epistemic intuitions that are local to one’s own cultural and socioeconomic group would lead to genuinely normative conclusions. Pending a detailed response to this problem, we think that the best reaction to the high-SES [socioeconomic status] Western philosophy professor who tries to draw normative conclusions from the facts about “our” intuitions is to ask: What do you mean by “we”? (Weinberg, Nichols and Stich 2001: 454–5)
To identify the core of analytic epistemology with ethnography is to suggest that it is empirical. Of course, analytic epistemology is not entirely empirical. Analytic philosophers extract lots of normative, epistemological claims from their descriptive, ethnographic theories. If this is right, then the method of contemporary analytic epistemology has been broadly Quinean all along. It’s just that rather than starting with psychology, analytic epistemology starts with anthropology. This interpretation of analytic epistemology immediately raises alarms about whether it is deriving ‘ought’s from ‘is’s. This is rather awkward, since this is precisely the charge analytic epistemologists have repeatedly made against naturalistic approaches to epistemology (e.g., BonJour 2002; Feldman 2003; Plantinga 1993; Williams 2001). It raises the possibility that analytic epistemologists have been throwing stones at naturalized epistemology from glass houses. But WNS don’t raise this worry. (Bishop and Trout 2005b, however, do.) Instead, the critique implicit in the above passage is that, as an empirical endeavor, analytic epistemology is bad science. Analytic epistemology relies heavily on the introspective judgments of a relatively small group of idiosyncratic folks (i.e., professional philosophers) to deliver empirical results about what “we” think about certain issues. As such, it ignores well-understood and widely accepted scientific methods that allow us to effectively investigate and discover what some group thinks about a subject.
1.4 Pragmatism and experimental philosophy
We have considered two arguments against analytic epistemology – the early argument from intrinsic value and the later argument (co-authored with Nichols and Weinberg) from bad science. The first argument says that, as an attempt to solve the fundamental problem of epistemology, analytic epistemologists are investigating the wrong parts of the world. The second argument says that analytic epistemology uses empirically dubious methods. In a nutshell, the problem with analytic epistemology is that it searches for answers in the wrong place and in the wrong way. To fix these problems, Stich must tell us where to look for answers (1.4.1) and how to look for answers (1.4.2). I will argue (1.4.3) that there might be some tension in the fixes Stich offers.
1.4.1 Where to look for answers: pragmatism
Analytic epistemology attempts to solve the fundamental epistemological problem raised by cognitive diversity – how to properly evaluate the various ways different people can and do reason – in terms of considerations most of us do not, upon reflection, value. Stich proposes the obvious fix: “One system of cognitive mechanisms is preferable to another if, in using it, we are more likely to achieve those things that we intrinsically value” (1990: 24).
In evaluating systems of cognitive processes, the system to be preferred is the one that would be most likely to achieve those things that are intrinsically valued by the person whose interests are relevant to the purposes of the evaluation. In most cases, the relevant person will be the one who is or might be using the system. So, for example, if the issue at hand is the evaluation of Smith’s system of cognitive processes in comparison with some
actual or hypothetical alternative, the system that comes out higher on the pragmatist account of cognitive evaluation is the one that is most likely to lead to the things that Smith finds intrinsically valuable. (1990: 131–2)
As opposed to the evaluations that come out of analytic epistemology, Stich notes that “there is no mystery about why [we] should care about the outcome of [the pragmatic] evaluation” (1990: 132). For Stich, pragmatism leads directly to relativism – the idea that the cognitive evaluation of a set of reasoning strategies will be highly sensitive to facts about the person (or group) using those strategies. The pragmatic view of cognitive evaluation is relativistic for two reasons. Most obviously, it is sensitive to what reasoners intrinsically value. If there are significant differences in what different people intrinsically value, then we should expect a pragmatic view to recommend quite different cognitive systems to different people (Stich 1990: 136). The second reason a pragmatic account is relativistic is that it will be sensitive to a reasoner’s environment. A set of reasoning strategies that yields the best expected consequences in one environment might not do so in a different environment (Stich 1990: 136–40).
There is, in the pragmatist tradition, a certain tendency to downplay or even deny the epistemic relativism to which pragmatism leads. But on my view this failure of nerve is a great mistake. Relativism in the evaluation of reasoning strategies is no more worrisome than relativism in the evaluation of diets or investment strategies or exercise programmes. The fact that different strategies of reasoning may be good for different people is a fact of life that pragmatists should accept with equanimity. (1993: 9)
Relativism is not the bogeyman it’s cracked up to be. A natural criticism of Stich’s view would be that he has unnecessarily run together two quite different normative domains – the epistemic and the pragmatic. The complaint might best be put by insisting, “But Stich hasn’t provided an account of epistemic evaluation” (table-pounding for emphasis is optional). In reply to this objection, Stich embraces “the very Jamesian contention that there are no intrinsic epistemic virtues” (1990: 24). “For pragmatists, there are no special cognitive or epistemological values. There are just values” (1993: 9). So although one might call Stich an “epistemic pragmatist” or an “epistemic relativist,” these locutions can lead to misunderstanding. Everything one intrinsically values is relevant to one’s evaluative judgments, whether those judgments are (intuitively) moral, esthetic, epistemic, etc. Stich is a pluralist about a great many things, but when it comes to normative, evaluative matters, he is a methodological monist. Regardless of the item one is evaluating, the evaluative considerations that arise are the same: What is most likely to bring about those things one intrinsically values?
1.4.2 How to look for answers: experimental philosophy
WNS allege that a central element of analytic epistemology is empirical. As such, it is bad science. A natural way to fix this problem is to replace this bad science with good science. This is a plausible way
to understand the motivation behind experimental philosophy. Experimental philosophy involves “using experimental methods to figure out what people really think about particular hypothetical cases” (Knobe forthcoming). It is easy to see how the method of experimental philosophy might be extended. By getting clear about the patterns of people’s judgments across different cultures and socioeconomic classes, experimental philosophers can offer well-grounded hypotheses about the nature of the psychological mechanisms that subserve these judgments (see, e.g., Nichols 2004). While I am in no position to limn the historical details, Stich seems to be a reasonably central figure in the development of this movement. Fragmentation provides a powerful intellectual justification for the empirically respectable study of what people actually think about philosophical matters. Further, as far as I can tell, WNS (2001) authored some of the earliest examples of experimental philosophy (some of which we reviewed in section 1.2). The experimental method in philosophy is growing in popularity. It is being used in epistemology, as we have seen, but also in debates about ethics (Nichols 2004), free will (Nahmias, Nadelhoffer, Morris and Turner 2005), action theory (Knobe 2003), and philosophy of language (Machery, Mallon, Nichols and Stich 2004).
1.4.3 A normatively modest approach to experimental philosophy?
Whatever the merits of experimental philosophy, and I think there are many, Stich (or at least, the Stich of Fragmentation) is committed to the view that experimental philosophy is incapable of solving the fundamental epistemological problem raised by cognitive diversity. Experimental philosophy is victimized by the same objection that defeated analytic epistemology. Recall that Stich argues that the problem with analytic epistemology is that it tries to evaluate different ways of reasoning by appealing to our parochial, idiosyncratic epistemic concepts, judgments and intuitions.
Since we don’t intrinsically value cognitive states or processes that are sanctioned by our – or anybody else’s – hidebound epistemic practices, a description of those practices can’t serve as the basis for a legitimate evaluation of different ways of reasoning. The fact that analytic epistemology is also bad science merely adds insult to injury. Experimental philosophy avoids the insult, but not the injury. Instead of focusing on the wrong things in the wrong way, experimental philosophy focuses on the wrong things in the right way. So as an attempt to solve the fundamental epistemological problem, the Stich of Fragmentation would have to conclude that experimental philosophy is the wrong tool for the job. The experimental philosopher has lots of potential responses to this argument. One possibility is to adopt a normatively modest view of experimental philosophy, according to which experimental philosophy is entirely descriptive. So when it comes to normative philosophical disciplines, experimental philosophy is ancillary. It can confirm or disconfirm the empirical assumptions or empirical implications of genuinely normative philosophical theories. But that’s it. Of course, there is plenty of positive descriptive knowledge experimental philosophy has to contribute. But a brief survey of some experimental philosophy gives some support to the normatively modest view. WNS’s epistemological writings fit the typical disconfirmatory mold. They argue that, to be fruitful, the method of analytic epistemology requires the truth of an empirical assumption: there is
considerable similarity in different people’s epistemic intuitions. WNS then attempt to undermine the legitimacy of this method by showing that the empirical assumption on which it relies is false. Recent examples of experimental philosophy applied to morality seem to further confirm its essentially ancillary normative functions. Shaun Nichols (2004) offers an account of the psychological mechanisms that subserve our moral judgments. Nichols uses this account to criticize a major argument in favor of moral realism – namely, that the best explanation of the historical trend “towards increasingly inclusive and nonviolent norms” is that people are coming to better and deeper understanding of real moral properties or values (149). Nichols argues that his psychological account of how we make “core” moral judgments does a better job of explaining this trend than moral realism (2004: ch. 7). In a pair of recent papers, Doris and Stich (2005) and Machery, Kelly and Stich (2005) have suggested that moral realism is challenged by the apparent existence of fundamental moral disagreements. Experimental philosophy has much to teach us about the nature and structure of the human mind. It promises to put some psychology into moral psychology. And this is all to the good. But prima facie, it seems incapable of offering up positive normative theories or principles. There is not much textual evidence for the normatively modest view of experimental philosophy in WNS’s epistemological writings. In fact, there are hints that there might be a legitimate role for epistemic intuitions in our epistemological theorizing. For polemical purposes we have been emphasizing the diversity of epistemic intuitions in different ethnic and SES groups . . . But we certainly do not mean to suggest that epistemic intuitions are completely malleable or that there are no constraints on the sorts of epistemic intuitions that might be found in different social groups. 
Indeed, the fact that subjects from all the groups we studied agreed in not classifying beliefs based on “special feelings” as knowledge suggests that there may well be a universal core to “folk epistemology.” Whether or not this conjecture is true and, if it is, how this common core is best characterized, are questions that will require a great deal more research. (WNS 2001: 450)
Perhaps there is some “universal core” of our epistemic practices that can be unearthed by experimental philosophy and that should be taken seriously in our epistemological theorizing (see also Nichols, Stich and Weinberg 2003). This core might involve some set of epistemic intuitions, judgments, or (perhaps more plausibly) it might involve some set of general psychological mechanisms that subserve our epistemic judgments and intuitions. Of course, it is an entirely empirical issue whether such a core exists. But if there is a robust psychological foundation for our epistemic practices, then there may be good reasons – including good pragmatic reasons – to take this core seriously in our epistemological theorizing. After all, the costs involved in significantly revising our epistemic practices might overwhelm whatever benefits might accrue from such a revision. Prohibitive start-up costs for new alternatives can be a powerful consideration in favor of the status quo (Sklar 1975; Harman 1986; Bishop and Trout 2005a). There is much more to say about what can be legitimately extracted from experimental philosophy. But for now, I will simply point out the prima facie tension between Stich’s views in Fragmentation and an experimental approach to philosophy that has positive normative aspirations.
2 A Brief Digression: The Role of Intuitions in Epistemology
It is perhaps appropriate to linger a bit over the role of intuitions in epistemology. Epistemic intuitions are usually taken to be our non-discursive, though perhaps considered, judgments about the epistemic properties of some cognitive item (Cohen 1981; Bealer 1987; BonJour 1998; Pust 2000). The paradigm example of an epistemic intuition is our judgment that subjects in Gettier cases do not have knowledge. For many analytic epistemologists, epistemic intuitions are taken to be basic sources of evidence:
An adequate reconstruction of philosophical methodology here requires a two-step evidential route. In the first step, the occurrence of an intuition that p, either an intuition of one’s own or that of an informant, is taken as (prima facie) evidence for the truth of p (or the truth of a closely related proposition). In the second step, the truth of p is used as positive or negative evidence for the truth of a general theory. (Goldman and Pust 1998: 182)
So intuitions are a basic evidential source if and only if the intuition that p (when formed in favorable circumstances) provides prima facie evidence that p is true (Goldman and Pust 1998: 182–3; see also Bealer 1998: 204–7). To say that intuitions are a basic source of evidence is to say that any (properly formed) intuition that p is, by the very fact of its being a (properly formed) intuition, prima facie evidence of the truth of p. What happens if we deny that intuitions are a basic evidential source? Some philosophers have argued that a deep skepticism about epistemic intuitions is not viable (Bealer 1993, 1998; Jackson 1998; Kaplan 1994; Siegel 1984) and that it leads to “intellectual suicide” (BonJour 1998: 99). The basic idea behind these arguments is that we cannot accept beliefs on the basis of good reasons without relying, at some point, on our epistemic intuitions. How could we start believing anything if we never accepted a belief that p on the basis of evidence e simply because it is intuitively obvious that evidence e licenses the belief that p? There is certainly room to criticize these arguments but, from my perspective, the best reply is strategic capitulation. Let’s distinguish two sorts of epistemological theories:
1 An ethno-epistemological theory that aims to capture our epistemic intuitions (with perhaps some light revisions in the service of power or clarity).
2 A genuinely prescriptive theory with normative force that will enable us to legitimately evaluate the various ways different people can and do reason.
The first sort of theory might also be a theory of the second sort; but as Stich has stressed, this will require some argument. Now, are intuitions basic sources of evidence? This is a poorly framed question. We need to ask: Evidence for what? Our olfactory beliefs might be a basic evidential source, but they’re not legitimate evidence for theories of logic.
They might play a legitimate, though minor, evidential role in the natural sciences; for example, the belief that there is an acrid burning smell might be a legitimate part of our evidence for thinking that an experiment has gone awry (or not gone awry!). Now suppose we want an epistemological theory of the first sort, one that satisfies the
stasis requirement by (in Kim’s words) leaving “our epistemic situation largely unchanged.” It is hard to see how our epistemic intuitions could not be a legitimate source of evidence. They are, after all, what such theories are trying to capture. But what if we want a theory of the second sort, a theory with genuine normative bite? It’s not so obvious that our epistemic intuitions should be a significant source of evidence. This does not imply that our epistemic intuitions never have any normative force. Properly understood, the defenders of intuition may be right that we have no choice but to trust our epistemic intuitions (maybe often). But plenty of philosophers who have reservations about the role of intuitions in our philosophical theorizing have granted this point (e.g., Kornblith 2002: 5; Devitt 1994: 564). I am certainly prepared to grant that our epistemological intuitions have directed our reasoning well enough to have a reasonable claim to some normative legitimacy. So I don’t believe there is a case to make that our intuitions are in principle irrelevant evidence for or against a genuinely prescriptive epistemological theory. (And as far as I know, no one, including Stich, has made such a case.) We need our intuitions to get on in the world; and perhaps having an epistemic intuition that reasoning strategy R is the epistemically best way to reason may be prima facie evidence for thinking that, in fact, reasoning strategy R is the epistemically best way to reason. But it doesn’t follow that our epistemic intuitions are (or must be) the primary source of evidence for our epistemological theorizing. We can draw an illuminating analogy with our physical intuitions (our immediate judgments about the behavior of physical objects).6 We would not long survive without trusting our physical intuitions. If someone genuinely stopped trusting his physical intuitions, we would worry that he was trying to commit actual suicide rather than just intellectual suicide. 
Besides their pragmatic importance, our physical intuitions are almost surely essential to our theorizing about the nature of the physical world. If physicists could never rely on their physical intuitions (about, e.g., whether some pointer has come to rest at zero or how, roughly, an object will fall), it’s doubtful they would get far in their theorizing. So our physical intuitions are “basic evidence” in the sense given above: having a physical intuition that p is prima facie evidence that p is true. In fact, we know that our physical intuitions are often true, but we also know quite a bit about their systematic failures (Clement 1982; Halloun and Hestenes 1985). So even though a deep skepticism about our physical intuitions would be catastrophic on practical and theoretical grounds, it would be grotesque to suggest that we must therefore trust our physical intuitions to play a significant evidential role in the testing and development of our physical theories. As far as I can tell, the analogous point is all Stich needs. Those of us who are skeptical about our epistemic intuitions are denying that they ought to play a significant role in epistemological theorizing, an activity engaged in by a tiny fraction of the world’s population and even then only occasionally. This highly limited pessimism about our intuitions is perfectly coherent and, in fact, is consistent with the claim that intuitions are a basic source of evidence (as characterized above). Giving up epistemic intuitions in our epistemological theorizing does not lead to intellectual catastrophe. The case for a circumscribed skepticism about our intuitions is incomplete. I have granted that our intuitions have some prima facie normative force. So whether it is reasonable to ignore them in our epistemological theorizing depends on whether we have
an evidential source that is better than our intuitions. The physics analogy is apt. If we don’t have some source of evidence that is better than our physical intuitions and if our physical intuitions are reasonably reliable, then the best we can do is appeal to our physical intuitions in constructing our physical theories. Our resulting theories might be disappointing. But cooks are only as good as their ingredients. So what alternatives do we have to evaluating epistemological theories in terms of how well they capture our epistemic intuitions? Well, at least a few. Stich has offered a pragmatic alternative in Fragmentation: Epistemological theories, principles and judgments are to be evaluated in terms of how well they promote what we intrinsically value. Some experimental philosophers might offer a different alternative: Epistemological theories, principles and judgments are to be evaluated in terms of how well they capture a universal “core” of people’s epistemic practices (as described by a scientifically respectable account of those practices). J. D. Trout and I (2005a, 2005b) have offered yet a third alternative: Epistemological theories, principles and judgments are to be evaluated in terms of how well they capture the normative judgments of “Ameliorative Psychology” – those areas of psychology that for more than 50 years have been extremely successful in giving us useful advice about how to reason about matters of great practical significance (e.g., whether a convict is likely to be violent if paroled, whether a heart-attack victim is at risk of dying within the next month, whether a newborn is at risk for Sudden Infant Death Syndrome, whether a Bordeaux wine will be any good when it matures).7 These are all genuine options for epistemology. And by firmly grounding epistemology in science, they are all animated by the spirit of Quine’s naturalism.
3 A Critique: The Pragmatic Virtues of Reliabilism
What if we evaluated different ways of reasoning in terms of their tendency to deliver true beliefs? Against this suggestion, Stich has argued that “once we have a clear view of the matter, most of us will not find any value, either intrinsic or instrumental, in having true beliefs” (1990: 101). In 3.1, I will spell out this argument, surely one of the more shocking positions in the Stich oeuvre. In 3.2, I will argue that, despite the power of Stich’s case against truth, we are naturally drawn to true belief even when it is against our interests to do so. In 3.3, I will use our default attraction to true belief to try to justify, on pragmatic grounds, a reliabilist theory of cognitive evaluation.
3.1 Against truth

The heavy work in Stich's argument against the value of truth involves spelling out in considerable detail what is involved in having true beliefs (1990: 101–18). The crux of the story involves an account of the interpretation function, which maps belief tokens onto entities that can be true or false (e.g., propositions, truth conditions, sets of possible worlds). So the interpretation function maps a particular brain state of mine onto the proposition Lincoln was a great president; that mental state (we'll suppose it's a belief) has the content: Lincoln was a great president. The belief is true just in case the proposition Lincoln was a great president is true.
michael bishop
The interpretation function favored by our intuitive judgments will be idiosyncratic. The best way to see this is to consider evidence suggesting that there are systematic, cross-cultural differences in people’s intuitive semantic judgments (Machery, Mallon, Nichols and Stich 2004). If true, this means that the interpretation function favored by our culture will be different from the interpretation function favored by some other cultures. Stich argues that once we reflect upon what’s involved in having true beliefs, we will find that we don’t intrinsically value them: “Those who find intrinsic value in holding true beliefs . . . are accepting unreflectively the interpretation function that our culture . . . has bequeathed to us and letting that function determine their basic epistemic value. In so doing, they are making a profoundly conservative choice; they are letting tradition determine their cognitive values without any attempt at critical evaluation of that tradition” (1990: 120). Stich readily admits he has no useful reply to the person who, when fully informed, intrinsically values true belief. But he maintains that most people, when they are clear about what it involves, will come to see that they don’t intrinsically value true belief. What about the instrumental value of true belief? The issue here is not whether true belief is sometimes instrumentally valuable (surely it is) but whether true belief is more instrumentally valuable than potential alternatives. To get clear about this, let’s consider an example. Shelley Taylor and her colleagues (Taylor 1989; Armor and Taylor 2002) have argued that having views that are somewhat unrealistically optimistic can promote mental and physical health. “Unrealistic optimism about the future is generally adaptive in that it promotes . . . feelings of self-worth, the ability to care for and about others, persistence and creativity in the pursuit of goals, and the ability to cope effectively with stress” (Taylor et al. 
1992: 460). Before the advent of reasonably effective antiretroviral drugs, a study found that men with AIDS who were unrealistically optimistic lived longer than those who had more realistic views about their prospects (Reed et al. 1994). More generally, research suggests that mildly unrealistic optimists cope better with health problems than realists, tend to recover more quickly, and have higher recovery rates (Scheier et al. 1989; Fitzgerald et al. 1993). So consider the possibility of having true* beliefs. Most true* beliefs are also true, but true* beliefs are more optimistic than true beliefs in certain cases having to do with health prospects.8 "True beliefs are not always optimal in the pursuit of happiness or pleasure or desire satisfaction, nor are they always the best beliefs to have if what we want is peace or power or love, or some weighted mix of all of these" (1990: 123).
cognitive and epistemic diversity
3.2 True belief as prodigal son

These are powerful arguments. And properly understood, I don't want to challenge them. Upon reflection, we have no good reason to value true beliefs (intrinsically or instrumentally) as opposed to true* beliefs. I want to suggest, however, that we distinguish between what we find valuable (worthy of value) upon reflection and what we actually value. We value the truth, despite the results of our Stich-inspired deliberations on its idiosyncrasies and practical failings. The truth is like the prodigal son. We might realize that he does not deserve our love, our care, our energy; we might realize that we would be much better off committing those feelings and resources to a more deserving child. But despite what our heads say, we can't help but embrace him.

To make the case for truth, let's go back to the benefits of unrealistic optimism. Suppose Hobart is suffering from a serious disease for which there is some hope of recovery. Hobart is unrealistically optimistic about his prospects, as are his friends and loved ones. This sort of example seems to give succor to the pragmatist and headaches to the supporter of true belief. It suggests that we ought to reason in ways that are unreliable. But now suppose that Hobart gets a visit from Lance, who is a decent enough fellow but who can sometimes be stunningly thoughtless. Lance brings Hobart a large notebook full of evidence that supports in gruesome detail the hypothesis that Hobart's prospects for recovery are rather poor. What's more, Lance presents this evidence to Hobart. He raises every consideration that might lead Hobart to be overly optimistic about his prospects for recovery, and he utterly demolishes each consideration in meticulous detail with true and well-grounded evidence. Lance concludes by allowing that there is a small probability that Hobart will recover, but the probability is considerably lower than that for the average patient with Hobart's condition.
Let’s suppose that Lance’s performance is not the result of any cruelty or malevolence – he just has a stunningly tin ear when it comes to interpersonal relationships. And let’s further suppose Lance delivers this information in a genuinely friendly manner. Despite Lance’s intentions and affability, we recoil at the prospect of his forcing Hobart to hear the truth about his situation. But why? Suppose that Pam had thoughtlessly gone on about the genuinely pathetic prospects of the Cubs, Hobart’s favorite baseball team. Assuming his attachment to the Cubs is not unreasonably fanatical (not always a safe assumption), Pam’s behavior would be rude and stupid, but not nearly as cruel as Lance’s. Unlike telling Hobart the truth about the Cubs, telling him the truth about his prospects for recovery threatens to destroy Hobart’s hope and weaken his will. In this way, it threatens to rob Hobart of the resources he needs to survive his disease.
And here’s the point: Lance’s performance is appalling precisely because Hobart values true beliefs. If he didn’t value true beliefs but instead valued true* beliefs, he could laugh off Lance’s evidence and stick to his rosy beliefs about his prospects. But when we become convinced that we hold a false belief, even when that belief is highly useful, we are often drawn to the true belief like a moth to the flame. And that is why forcing the truth about Hobart’s prospects on him undermines those prospects. When we are clearly convinced that p is true, we are naturally and powerfully drawn to the belief that p. We are drawn to the truth (or rather, what we clearly and fully understand to be the truth) in matters of belief in a way that we are not drawn to (what we clearly and fully understand to be) the good, the beautiful, or the useful. Luckily, we have powerful psychological systems that enable us to elude a clear vision of the truth, and hence to escape its attraction. We are deceived and self-deceived in many ways (e.g., Nisbett and Wilson 1977; Wilson and Stone 1985). And most of us have a surprisingly robust capacity for adopting and protecting from disconfirmation overly rosy beliefs about ourselves (Taylor 1989: ch. 4). We tend to believe that we possess a host of socially desirable characteristics, and that we are free of most of those that are socially undesirable. For example, a large majority of the general public thinks that they are more intelligent, more fair-minded, less prejudiced, and more skilled behind the wheel of an automobile than the average person. 
This phenomenon is so reliable and ubiquitous that it has come to be known as the “Lake Wobegon effect,” after Garrison Keillor’s fictional community where “the women are strong, the men are good-looking, and all the children are above average.” A survey of one million high-school seniors found that 70% thought they were above average in leadership ability, and only 2% thought they were below average. In terms of ability to get along with others, all students thought they were above average, 60% thought they were in the top 10%, and 25% thought they were in the top 1%! Lest one think that such inflated self-assessments occur only in the minds of callow high-school students, it should be pointed out that a survey of university professors found that 94% thought they were better at their jobs than their average colleague. (Gilovich 1991: 77)
While we are adept at protecting these rosy beliefs from negative evidence, it is surely a mistake to suppose that they are totally immune from disconfirmation. It might be hard to convince us that our rosy beliefs are false. But when we do finally become convinced, we find it quite difficult (or perhaps even impossible) to continue believing them – even when doing so would be in our interests. We might agree with Stich that upon reflection, true beliefs are not intrinsically valuable and not as instrumentally valuable as true* beliefs. The problem is the hard empirical fact of our mindless, unthinking, default attraction to true belief.
3.3 The pragmatic benefits of reliabilism

The cool-eyed pragmatist will be the first to insist that a theory of cognitive evaluation should not make demands on us that we can't meet. Our judgment and decision-making
capacities are deeply imperfect, and the limits on our memory, computing power, time, energy, patience, and will are legion (Stich 1990: 149–58). I’m suggesting the pragmatist add one more imperfection to the list: We tend to value truth, even when, from a pragmatic perspective, we shouldn’t. Once we take this fact about ourselves to heart, the pragmatist is faced with a familiar challenge: What sorts of normative principles, theories, or recommendations can we offer that will effectively guide our reasoning but that will clearly recognize and compensate for our built-in limitations and imperfections? Perhaps our regrettable attraction to true belief gives us pragmatic grounds for placing truth at the center of our epistemological theory. Not because truth is more valuable to us than truth* (or truth** or truth***) . . . but just because we’re stuck valuing true belief. To compensate for our unfortunate attraction to true belief, two different strategies suggest themselves. A direct strategy, which Stich adopts in Fragmentation, places pragmatic virtues center stage. Normative claims about cognitive matters – generalizations about good reasoning as well as evaluations of particular cognitive states and processes – are framed directly in terms of what we intrinsically value. An indirect strategy would place truth (or some other non-pragmatic category) center stage but would find some way to license the adoption of false (or true*) beliefs when they serve our interest.9, 10 What might such a theory look like? I suggest it might look something like Strategic Reliabilism (Bishop and Trout 2005a). While Strategic Reliabilism has not been defended on pragmatic grounds, I want to argue here that (with some minor modifications) it can be. The indirect pragmatic theory of cognitive evaluation I will sketch here consists of two parts. 
The first holds that an excellent reasoner will have at her disposal a set of reasoning methods or strategies that are robustly reliable – they yield a high percentage of true beliefs across a wide variety of situations. The second part of the theory focuses on the problems these strategies will apply to, and how the excellent reasoner will use them. Given the infinitely many reasoning problems one might tackle at any moment and our limited resources, the excellent reasoner will do more than reason reliably about useless matters. It is here that the pragmatic element of the indirect theory plays a vital role. Given the indefinitely many reasoning problems we face, some are more worthy of attention than others in a straightforwardly pragmatic sense:

1 Constructive problems: Reasoning in a robustly reliable way about some problems will help us achieve those things we intrinsically value.
2 Neutral problems: Reasoning in a robustly reliable way about most problems we face will bring us nothing that we intrinsically value – they are just a waste of time and resources.
3 Undermining problems: Reasoning in a robustly reliable way about some problems will lead to results that undermine those things we intrinsically value.11

So the excellent reasoner will have at her disposal robustly reliable reasoning strategies for handling constructive problems. In her daily life, she will mete out attention and resources to these problems in a prudent, cost-effective manner. She will ignore neutral
problems. And she will be naturally disposed to avoid spending cognitive resources on undermining problems. The idea here is not that the excellent reasoner will consciously work through these various cost-benefit calculations every time she faces a reasoning problem. That would lead to an infinite regress. (I’m faced with problem P. Before solving it, I must solve problem P′: Is P a constructive, neutral or undermining problem? But before solving P′, I have to solve problem P″: Is P′ a constructive, neutral or undermining problem? . . .) So a lot of these “decisions” occur at a non-conscious level. That’s not to say that we have no control over what reasoning strategies we employ (“I need to insist on controls when thinking about causal claims”) or over where to put our cognitive resources (“I should spend less time and energy thinking about the mind–body problem”). It is to say that, most of the time, we run our cognitive lives without any high-level, conscious decision-making about what reasoning problems to tackle. And this is part of (and perhaps a large part of) why our overly rosy beliefs can be so effective. Imagine if Hobart had to consciously decide to not spend a great deal of energy scrutinizing the issue of his prospects for recovery on the grounds that doing so might kill him. That just wouldn’t work. Psychological research (e.g., Wegner 1994) as well as schoolyard wisdom (“Don’t think about pink elephants!”) supports the idea that we aren’t very good at suppressing our thoughts. So Hobart’s rosy belief survives because we are naturally inclined to not bring our critical faculties to bear on undermining issues; and absent critical scrutiny, our rosy beliefs are protected by the powerful psychological mechanisms that account for the sturdiness of the Lake Wobegon effect (Gilovich 1991; Taylor 1989: ch. 4). On what grounds are we to choose between a direct and an indirect pragmatic theory of cognitive evaluation? 
Insofar as Stich is committed to a direct theory, he will surely not argue that his direct theory is to be preferred because it is a true account of the nature of cognitive evaluation. Any choice between a direct and an indirect theory must ultimately be made on pragmatic grounds. For Stich, whichever theory works better in straightforwardly pragmatic terms – that is, whichever better helps people achieve the things they intrinsically value – is the one we should believe. So the choice between these theories is, for Stich, an empirical matter. Since I don't have the empirical goods to settle the issue, plausibility considerations will have to suffice. The primary mission of epistemology is to evaluate the various ways different people can and do reason and (hopefully) to offer useful suggestions about how we can reason better. A direct theory will frame these normative judgments in terms of useful belief, while an indirect (reliabilist) theory will frame them in terms of true belief. Given our default attraction to true beliefs, the indirect theory starts off with all the pragmatic advantages of incumbency. We are already used to thinking about our epistemic responsibilities in terms of true belief. And so given any claim about the relative quality of one of our reasoning strategies, we'll have an easier time understanding, accepting, and reflecting upon the claim when it is made by the indirect theory. Further, we are likely to find it easier and more natural to be guided by the normative recommendations of the indirect theory. This is a significant, built-in advantage for the indirect (truth-based) pragmatic theory of cognitive evaluation.12

Can the direct theory overcome its built-in deficit? Perhaps. Consider that we sometimes accept beliefs we know are useful but not true, as, for example, when we use Newtonian theory to predict and explain the motion of medium-to-large objects. But in that case, we
give up negligible amounts of accuracy for a great deal of utility. Further, I think we are naturally inclined to explain the utility of such beliefs in terms of their proximity to the truth. So this example doesn't give us any reason to suppose that revising our epistemic habits so that we are no longer mindlessly attracted to true belief will be easy. In fact, I speculate that it would take considerable work to refashion our habitual epistemic practices in such a dramatic way. Of course, such a change might be psychologically impossible, or so difficult as to be practically impossible. If that's so, then the pragmatist will have to settle for an indirect theory. Let's suppose, though, that we can alter our epistemic habits so that pronouncements about how to reason and believe that are framed in terms of useful belief are as easy for us to grasp, ponder, accept and act on as pronouncements framed in terms of true belief. Should we go to the trouble of revising our epistemic practices? It depends on the pragmatic benefits of doing so. These benefits will come from our increased ability to reason unreliably (i.e., reliably*) to useful beliefs in those cases the indirect theory cannot plausibly handle. If the direct and indirect theories were to make exactly the same recommendations, then the direct theory would be doomed. It couldn't overcome the indirect theory's incumbency advantages. But I think Stich will argue (rightly) that we have no good reason to suppose the direct and indirect theories will yield exactly the same cognitive evaluations. To see how the direct and indirect theories might make different recommendations, let's go back to the Hobart example. The indirect theory could license Hobart's useful, false but true* (rosy) belief because of two contingent psychological facts:
(a) Hobart's reasoning strategies naturally deliver rosy beliefs.
(b) Hobart is inclined to avoid critical scrutiny of his rosy beliefs.
Without these psychological mechanisms, the indirect theory has no epistemic resources to license Hobart’s rosy belief. In fact, the indirect theory will tend to undermine (a), Hobart’s natural inclination to reason to the rosy belief rather than the true belief. That’s because the indirect theory directs us to adopt robustly reliable reasoning strategies. Doing so could readily undermine our natural tendency to adopt rosy beliefs. Once a reasoner comes to a true belief as a result of robustly reliable reasoning, the indirect theory has no resources to license the true* belief or the reliable* reasoning strategy. Now consider those cases in which (b) is false. Suppose Hobart is naturally disposed to critically scrutinize his rosy belief. The indirect theory might pronounce that, to be an excellent reasoner, Hobart ought to avoid scrutiny of such beliefs. But as we have already seen, this is not likely to be a particularly effective recommendation (“Don’t think about pink elephants!”). So if there are undermining problems that the indirect theory can’t handle (i.e., it can’t effectively recommend the rosy belief), then the direct theory has a chance to overcome the indirect theory’s incumbency advantages. The case for an indirect pragmatic theory is by no means a slam dunk. But the direct theory faces two pressing problems. First, the direct theory fails if we cannot alter our default epistemic practices. And second, the direct theory must show that the costs of altering our default epistemic practices will be more than compensated for by the direct theory’s advantages over the indirect theory. The direct theory needs both claims, but
it’s not clear it can have either. Without a clear case for the pragmatic advantages of revising our epistemic habits, we should stick, on pragmatic grounds, with what seems to be working well enough. Given what we know, the pragmatist should provisionally adopt an indirect theory.
4 Conclusion

In “Epistemology Naturalized,” Quine offered a vision of epistemology that has fallen on hard times. “The stimulation of his sensory receptors is all the evidence anybody has had to go on, ultimately, in arriving at his picture of the world. Why not just see how this construction really proceeds? Why not settle for psychology?” (1969: 75)

Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science. It studies a natural phenomenon, viz., a physical human subject. This human subject is accorded a certain experimentally controlled input – certain patterns of irradiation in assorted frequencies, for instance – and in the fullness of time the subject delivers as output a description of the three-dimensional external world and its history. The relation between the meager input and the torrential output is a relation that we are prompted to study for somewhat the same reasons that always prompted epistemology; namely, in order to see how evidence relates to theory, and in what ways one’s theory of nature transcends any available evidence. (1969: 82–3)
For decades, analytic epistemologists have been united in their opposition to the idea that epistemology could be reduced to psychology. Tradition holds that such a reduction is subject to a devastating objection: It empties epistemology of its normative character. On this point, Stich agrees with tradition.

The Quinean naturalized epistemologist can explore in detail the various ways in which different people construct their “picture of the world” on the basis of the evidence available to them. But he has no way of ranking these quite different strategies for building world descriptions; he has no way of determining which are better and which are worse. And since the Quinean naturalized epistemologist can provide no normative advice whatever, it is more than a little implausible to claim that his questions and projects can replace those of traditional epistemology. We can’t “settle for psychology” because psychology tells us how people do reason; it does not (indeed cannot) tell us how they should. (Stich 1993: 4–5)
But what is the lesson? That we can’t put empirical theories at the heart of our epistemology? But then what survives? Not Quinean naturalism. Not analytic epistemology. If we can’t replace epistemology with psychology, then we can’t replace it with ethnography, either. Not Stich’s pragmatism, which is built on empirical claims about how people reason, about how people evaluate cognition, and about what we intrinsically and instrumentally value. And not experimental philosophy, either. If we take the arguments proposed by Stich and by WNS against analytic epistemology seriously, then a genuinely normative, reason-guiding epistemology cannot be in the
business of a priori reflection on our parochial epistemic concepts, judgments, and intuitions. But then science is all that’s left. If this is right, then Quine’s mistake was not identifying epistemology with science. His mistake was identifying it with the wrong bit of science. Of course, we’re still left with the very serious problem of how to extract normative, epistemological claims from science. But if Stich’s and WNS’s critique of analytic epistemology is correct, we don’t have a choice. At every crucial step, Stich’s epistemology drives us to the conclusion that genuinely normative theories find their homes in science. Further, the alternatives to analytic epistemology we have considered all make epistemology a chapter, though perhaps a rather quirky chapter, of psychology. Maybe a Stich in time saves Quine.13
Notes

1 Has Stich changed the subject so much that he is not engaging with, and hence not disagreeing with, analytic epistemologists? Goldman (1978) recognizes that some contemporary epistemologists might be disinclined to take this project to be epistemology. To avoid becoming embroiled in boundary disputes, Goldman calls this project by a different name, “epistemics.” Stich has at least two replies to the “change of subject” worry. First, reasoning strategies typically yield belief tokens. So to recommend a reasoning strategy is to recommend belief tokens, at one remove. Thus, a theory that evaluates reasoning strategies and a theory that evaluates belief tokens can recommend incompatible belief tokens. And second, Stich’s pragmatic view of cognitive evaluation can be applied directly to belief tokens. So his theory can make recommendations that directly conflict with those of standard theories of justification.
2 Debates about the quality of people’s default reasoning strategies, and the variation in the quality of different people’s default reasoning strategies, provide another motivation for this project (see, e.g., Stanovich 1999).
3 Stich also argues that we have no good reason to believe that our epistemic standards are instrumentally valuable – or rather, more instrumentally valuable than any alternative set of epistemic standards. And, in fact, he argues that we have some reason to doubt that they are (1990: 96–127).
4 The intrinsic value argument does not appear in WNS’s articles, although a weaker form of it does. WNS note that if we were to begin the analytic project with different sets of intuitions (e.g., intuitions of East Asians, or of Westerners, or of low SES subjects or of high SES subjects) we would end up with very different epistemological theories.
WNS then ask: “What reason is there to think that the output of one or another of these Intuition-Driven Romantic strategies has real (as opposed to putative) normative force?” (Weinberg, Nichols and Stich 2001: 434). Not all of these theories can be genuinely (as opposed to putatively) normative. While I think this argument is less ambitious than Stich’s earlier intrinsic value argument, I needn’t press the point here.
5 I have been surprised to find philosophers occasionally denying that stasis really is a requirement on contemporary theories of analytic epistemology. Here is a crucial experiment to test this suggestion: Find cases of contemporary analytic epistemologists “outsmarting” themselves (where “outsmart” is a technical term from The Philosophical Lexicon (Dennett 1987): “outsmart, v.: To embrace the conclusion of one’s opponent’s reductio ad absurdum argument. ‘They thought they had me, but I outsmarted them. I agreed that it was sometimes just to hang an innocent man.’ ”) If the stasis requirement is not operative, we should find lots of cases in the epistemological literature in which people embrace positions with little regard for their intuitive judgments – as occurs in the sciences. But we don’t find lots of instances of contemporary analytic epistemologists “outsmarting” each other. In fact, except for the occasional skeptic, I am hard pressed to think of any.
6 Bealer (1993, 1998) has argued strenuously that philosophical (“a priori”) intuitions are unlike physical intuitions. Perhaps. Still, their evidential relationship to certain theories might be similar.
7 A potential advantage of this approach over the others is that it grounds epistemology in a bit of psychology that, prima facie, already has legitimate normative force. The great challenge for the naturalist is how to extract ‘ought’s from ‘is’s. That challenge is met if we’re extracting epistemological ‘ought’s from epistemological ‘ought’s we find in psychology. That still leaves the puzzle of what we’re to make of these apparently normative parts of psychology.
8 In this case, some true* beliefs are true and others false. But some alternatives to true beliefs won’t involve false beliefs. The interpretation function is not only idiosyncratic, it is also partial: it won’t assign propositions to many belief-like states. So some true** beliefs might be true while the rest have no truth values at all (Stich 1990: 121–2).
9 This indirect strategy is familiar in ethics, particularly among hedonists who argue that, although only happiness is intrinsically valuable, in order to achieve happiness, people ought to intrinsically value lots of things besides happiness (e.g., see Mill’s Utilitarianism: ch. IV).
10 The case for an indirect theory undermines Stich’s methodological monism with respect to normative issues in a way that does not require table-pounding (see the discussion in 1.4.1).
11 The main difference between this indirect pragmatic view and Strategic Reliabilism has to do with the nature of these values. Trout and I assume (without argument) a realistic conception of value, but this is an optional part of the theory (2005a: 93–103). What I have done here is to modify Strategic Reliabilism by replacing the realistic conception of value with a pragmatic one.
12 As I suggested at the end of section 1, the experimental philosopher can adopt a remarkably similar pragmatic defense of her approach.
13 The credit for this line and for the subtitle of this paper belongs to the anonymous wag who defined ‘stich’ in The Philosophical Lexicon (Dennett 1987): “stich, n. (cf. croce) The art of eliminative embroidery. In the art of stich, one delicately strips the semantics off the rich tapestry of folk psychology revealing the bare warp and woof of pure syntax. ‘A stich in time saves Quine.’ ”
References

Armor, D. and Taylor, S. (2002) “When Predictions Fail: The Dilemma of Unrealistic Optimism,” in T. Gilovich, D. Griffin, and D. Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge: Cambridge University Press.
Bealer, George (1987) “The Philosophical Limits of Scientific Essentialism.” Philosophical Perspectives 1: 289–365.
Bealer, George (1993) “The Incoherence of Empiricism,” in S. Wagner and R. Warner (eds.), Naturalism: A Critical Appraisal, Notre Dame, IN: Notre Dame University Press.
Bealer, George (1998) “Intuition and the Autonomy of Philosophy,” in M. DePaul and W. Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, Lanham, MD: Rowman & Littlefield.
Bishop, Michael and Trout, J. D. (2005a) Epistemology and the Psychology of Human Judgment. New York: Oxford University Press.
Bishop, Michael and Trout, J. D. (2005b) “The Pathologies of Standard Analytic Epistemology.” Nous 39(4): 696–714.
BonJour, Laurence (1998) In Defense of Pure Reason. Cambridge: Cambridge University Press.
BonJour, Laurence (2002) Epistemology: Classic Problems and Contemporary Responses. Lanham, MD: Rowman and Littlefield.
Clement, J. (1982) “Students’ Preconceptions in Introductory Mechanics.” American Journal of Physics 50(1): 66–71.
Cohen, L. J. (1981) “Can Human Irrationality be Experimentally Demonstrated?” Behavioral and Brain Sciences 4: 317–31.
Dennett, Daniel (1987) The Philosophical Lexicon. Blackwell Publishers (online). http://www.blackwellpublishing.com/lexicon/
Devitt, Michael (1994) “The Methodology of Naturalistic Semantics.” Journal of Philosophy 91: 545–72.
Doris, J. and Stich, S. (2005) “As a Matter of Fact: Empirical Perspectives on Ethics,” in F. Jackson and M. Smith (eds.), The Oxford Handbook of Contemporary Analytic Philosophy, Oxford: Oxford University Press, pp. 114–52.
Feldman, R. (2003) Epistemology. Upper Saddle River, NJ: Prentice-Hall.
Fitzgerald, T., Tennen, H., Affleck, G., and Pransky, G. (1993) “The Relative Importance of Dispositional Optimism and Control Appraisals in Quality of Life after Coronary Bypass Surgery.” Journal of Behavioral Medicine 16: 25–43.
Gilovich, Thomas (1991) How We Know What Isn’t So. New York: The Free Press.
Goldman, Alvin (1978) “Epistemics: The Regulative Theory of Cognition.” The Journal of Philosophy 75: 509–23.
Goldman, Alvin and Pust, Joel (1998) “Philosophical Theory and Intuitional Evidence,” in M. DePaul and W. Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, Lanham, MD: Rowman & Littlefield.
Halloun, I. A. and Hestenes, D. (1985) “The Initial Knowledge State of College Physics Students.” American Journal of Physics 53(11): 1043–55.
Harman, Gilbert (1986) Change in View. Cambridge, Mass.: MIT Press.
Jackson, Frank (1998) From Metaphysics to Ethics: A Defense of Conceptual Analysis. Oxford: Oxford University Press. Kaplan, M. (1994) “Epistemology Denatured.” Midwest Studies in Philosophy 19: 283–300. Kim, Jaegwon (1988) “What is ‘Naturalized Epistemology’?” Philosophical Perspectives 2: 381–405. Knobe, J. (2003) “Intentional Action and Side Effects in Ordinary Language.” Analysis 63: 190–3. Knobe, J. (forthcoming) “What is Experimental Philosophy?” The Philosophers’ Magazine. Kornblith, Hilary (2002) Knowledge and Its Place in Nature. Cambridge, Mass.: MIT Press. Machery, E., Kelly, D., and Stich, S. (2005) “Moral Realism and Cross-Cultural Normative Diversity,” a commentary on J. Henrich et al., “ ‘Economic Man’ in Cross-Cultural Perspective: Behavioral Experiments in 15 Small-Scale Societies.” Behavioral and Brain Sciences 28(6) (Dec.): 830. Machery, E., Mallon, R., Nichols, S., and Stich, S. (2004) “Semantics, Cross-Cultural Style.” Cognition 92: B1–B12. Masuda, T. and Nisbett, R. E. (2001) “Attending Holistically vs. Analytically: Comparing the Context Sensitivity of Japanese and Americans.” Journal of Personality & Social Psychology 81: 922–34. Nahmias, E., Nadelhoffer, T., Morris, S., and Turner, J. (2005) “Surveying Freedom: Folk Intuitions about Free Will and Moral Responsibility.” Philosophical Psychology 18(5): 561–84. Nichols, S. (2004) Sentimental Rules. Oxford: Oxford University Press.
136
michael bishop
Nichols, S., Stich, S., and Weinberg, J. (2003) “Meta-Skepticism: Meditations in EthnoEpistemology,” in S. Luper (ed.), The Skeptics, Aldershot, UK: Ashgate Publishing. Nisbett, R. (2003) The Geography of Thought: How Asians and Westerners Think Differently . . . And Why. New York: The Free Press. Nisbett, R., Peng, K., Choi, I., and Norenzayan, A. (2001) “Culture and Systems of Thought: Holistic vs. Analytic Cognition.” Psychological Review 108: 291–310. Nisbett, R. and Wilson, T. (1977) “Telling More than We Can Know: Verbal Reports on Mental Processes.” Psychological Review 84(3): 231–59. Peng, I. and Nisbett, R. (1999) “Culture, Dialectics, and Reasoning about Contradiction.” American Psychologist 54: 741–54. Plantinga, A. (1993) Warrant and Proper Function. New York: Oxford University Press. Pust, J. (2000) Intuitions as Evidence. New York: Garland Publishing. Quine, W. V. O. (1969) “Natural Kinds,” in Ontological Relativity & Other Essays, New York: Columbia University Press. Quine, W. V. O. (1986) “Reply to Morton White,” in Lewis Hahn and P. Schilpp (eds.), The Philosophy of W. V. Quine, La Salle, IL: Open Court. Reed, G., Kemeny, M., Taylor, S., Wang, H., and Visscher, B. (1994) “Realistic Acceptance as a Predictor of Decreased Survival Time in Gay Men with AIDS.” Health Psychology 13: 299– 307. Scheier, M., Matthews, K., Owens, J., Magovern, G., Lefebvre, R., Abbott, R., and Carver, C. (1989) “Dispositional Optimism and Recovery from Coronary Artery Bypass Surgery: The Beneficial Effects on Physical and Psychological Well-being.” Journal of Personality and Social Psychology 57: 1024–40. Siegel, H. (1984) “Empirical Psychology, Naturalized Epistemology and First Philosophy.” Philosophy of Science 51: 667–76. Sklar, Lawrence (1975) “Methodological Conservatism.” Philosophical Review 84: 374–400. Stanovich, Keith (1999) Who is Rational? Studies of Individual Differences in Reasoning. Hillsdale, NJ: Erlbaum. 
Stich, Stephen (1985) “Could Man Be an Irrational Animal?” Synthese 64(1): 115–34. Stich, Stephen (1987) “Reflective Equilibrium, Analytic Epistemology and the Problem of Cognitive Diversity.” Synthese 74(3): 391–413. Stich, Stephen (1990) The Fragmentation of Reason. Cambridge: MIT Press. Stich, Stephen (1993) “Naturalizing Epistemology: Quine, Simon and the Prospects for Pragmatism,” in C. Hookway and D. Peterson (eds.), Philosophy and Cognitive Science, Royal Institute of Philosophy, Supplement no. 34, Cambridge: Cambridge University Press. Taylor, S. (1989) Positive Illusions: Creative Self-Deception and the Healthy Mind. New York: Basic Books. Taylor, S., Kemeny, M., Aspinwall, L., Schneider, S., Rodriguez, R., and Herbert, M. (1992) “Optimism, Coping, Psychological Distress, and High-Risk Sexual Behavior among Men at Risk for AIDS.” Journal of Personality and Social Psychology 63: 460–73. Wegner, D. M. (1994) “Ironic Processes of Mental Control.” Psychological Review 101: 34–52. Weinberg, J., Nichols, S., and Stich, S. (2001) “Normativity and Epistemic Intuitions.” Philosophical Topics 29(1 & 2): 429–60. Williams, M. (2001) Problems of Knowledge: A Critical Introduction to Epistemology. New York: Oxford University Press. Wilson, T. and Stone, J. (1985) “Limitations of Self-Knowledge: More on Telling More than We can Know,” in Shaver, P. (ed.), Review of Personality and Social Psychology, Vol. 6, Beverly Hills: Sage, pp. 167–83.
8 Simulation Theory and Cognitive Neuroscience

ALVIN GOLDMAN
Is Simulation a Natural Category?

One topic on which Steve Stich and I have occupied opposing positions is the topic of mindreading (also called mentalizing, or folk psychology). In his first position statement on the subject, he and Shaun Nichols framed the debate as one between simulation theory and theory-theory (Stich and Nichols 1992). Responding primarily to Bob Gordon (1986) and me (Goldman 1989) as protagonists of simulation theory, they offered a lucid and spirited defense of theory-theory. As the decade of the 1990s proceeded, Stich and Nichols published a seemingly endless stream of articles on the subject, most of which appeared in Mind and Language. A funny thing happened, however, on the way to the century's close. The Stich–Nichols attack on the simulation theory gradually softened. In Stich and Nichols (1995) and Nichols, Stich, Leslie and Klein (1996), they showed grudging appreciation of simulationism's virtues, at least in some of its forms or applications. By the time they published their book on mindreading (Nichols and Stich 2003), their preferred theory was acknowledged to be "very eclectic" (2003: 100). The book rarely refers to their position as theory-theory, and emphatically rejects the theory-theory of self-awareness (2003: ch. 4). One mindreading process (inference prediction) is definitely said to be executed by simulation, and other mindreading processes are allowed to bear some similarities to simulation prototypes (2003: 135). Not surprisingly, I find this increased appreciation for simulation quite congenial. But philosophers are a hard bunch to please, myself included. For my money, Stich and Nichols still haven't moved far enough in the direction of simulationism. Let us look more closely at their early and late assessments of its prospects. In early writings they viewed simulation theory as an unwelcome intruder into cognitive science.
In 1992 they warned that it is fundamentally at odds with “the dominant explanatory strategy in cognitive science” (Stich and Nichols 1992: 36). Again in 1996 they called it “a radical departure from the typical explanations of cognitive capacities” (Nichols et al. 1996: 39). In 1997 their complaint shifted a bit in tone and focus. Surveying what they regarded as
138
alvin goldman
the excessively heterogeneous use of the term 'simulation', they challenged the naturalness or utility of the term and urged that it be retired. They continue to urge this in 2003, quoting with approval a passage from their 1997 paper:

In reaction to the apparently irresistible tendency to use the word 'simulation' as a label for almost anything, we have for some years been arguing that the term 'simulation' needs to be retired, because 'the diversity among the theories, processes and mechanisms to which advocates of simulation theory have attached the label "simulation" is so great that the term itself has become quite useless. It picks out no natural or theoretically interesting category' (Stich and Nichols 1997: 299). (Nichols and Stich 2003: 134)
Now, I do not want to defend every application of the term 'simulation' that anybody has ever proposed. But I do want to resist Nichols and Stich's continuing claim that there is no natural or theoretically interesting category of simulation. Defense of the naturalness, robustness, and theoretical interest of simulation is the main aim of this paper. The paper also has a secondary, though pervasive, aim. I believe that the best case for the robustness and theoretical interest of simulation can be made with the help of cognitive neuroscience. This is where the best evidence for simulation, including simulation-based mindreading, resides. It is a major omission of Nichols and Stich (2003) to neglect cognitive neuroscience. In general, Stich and Nichols and I agree that the subject of mentalizing requires a heavy dose of inputs from empirical science. Nichols and Stich mention cognitive ethology, anthropology, developmental science, and clinical psychology (2003: 3–5). But they give no attention to cognitive neuroscience, and make no mention of it. This is a serious lacuna. Mentalizing is a prominent topic in cognitive neuroscience; articles on the subject are sprinkled generously throughout its journals. My own book-length treatment of mentalizing (Goldman 2006) makes heavy use of cognitive neuroscience. I regard it as a vital discipline to infuse into the debate about mentalizing. In particular, the full power and merit of the simulation theory only emerges with the help of neuroscientific findings. Notice that I view mental simulation as a broader phenomenon than mental simulation for mindreading. Simulation-based mindreading is a special application of mental simulation in general. Mental simulation can be used for further tasks, such as mindreading, but it doesn't have to be so used to qualify as simulation.
I would claim that the existence and robustness of mental simulation in the broad sense lend support to the claim that simulation is a natural and theoretically interesting category. Some of the material in this paper is intended to elaborate on this point (for more extended defense, see Goldman 2006). Cognitive neuroscience is a mix of methods, including single-cell recordings (primarily of laboratory animals), lesion studies of clinical patients, a variety of imaging techniques (e.g., PET and functional MRI), neurocomputational modeling, and transcranial magnetic stimulation (TMS). The present discussion registers no preferences among these methods; on the contrary, it draws on quite a few of them. Of course, any of these methods can pose problems in application or interpretation. Each method is susceptible of more or less careful employment, but this holds equally of traditional psychological
simulation theory and cognitive neuroscience
139
methods. (For an instructive discussion of special pitfalls in the use of fMRI, see Saxe, Carey and Kanwisher 2004.) Needless to say, I regard the studies cited here as good examples – indeed, often superior examples – of their respective methods.
Simulation and Respects of Similarity

How should the term 'simulation' be understood, especially for purposes of the simulation theory (ST) of mindreading? This is a complex and delicate matter which I won't try to cover thoroughly (see Goldman 2006: ch. 2). But the general approach I recommend, fairly consonant with a good bit of the simulationist literature (though not all of it), is keyed to the notions of similarity, copying, or replication. Very roughly, one process successfully simulates another, in the intended sense, only if the first process copies, replicates, or resembles the target process, at least in relevant respects. A process needn't successfully simulate its target process, however, to qualify as simulation. If a process is launched in an attempt to replicate another process (either actual or possible), it can be considered a simulation even if it fails quite miserably to copy or replicate its target. Furthermore, even to be successful, process P does not have to replicate all aspects or stages of process P′ in order to simulate P′; it suffices to replicate relevant stages and aspects of P′ (relevant to the task at hand). Now, when we are concerned with mental simulation, we are concerned with similarities or resemblances along mental or psychological dimensions. But exactly which types or respects of psychological similarity are relevant? Phenomenological respects? Functional respects? Neurological respects? According to a standard version of simulation theory, a simulation routine occurs when an attributor creates in himself an imagined or pretend mental state intended to resemble its genuine counterpart in the head of the target. For example, if I wish to predict a forthcoming decision of yours, I might create in myself pretend desires and beliefs that correspond to the genuine desires and beliefs I take you to have. These pretend states, if they are well chosen, will be similar to their genuine counterparts.
What kind of similarity, however, is in question? Starting with the phenomenological answer, several problems immediately arise. First, it is highly debatable whether (occurrent) propositional attitude tokens have any phenomenological properties at all. Is there anything it's "like" to have them? If not, then a pretend propositional attitude (token) cannot resemble a genuine propositional attitude counterpart in phenomenological respects. Second, even if propositional attitude tokens do have phenomenological properties, it may be extremely difficult to identify those properties and compare them across mental events. Given this difficulty, the prospects for verifying or falsifying ST (or even gathering trustworthy evidence) in a sound, reliable way become highly problematic. Third, it is widely conceded that a large part of mindreading activity, like other cognitive activity, occurs non-consciously. Moreover, it is usually assumed that non-conscious states lack phenomenology (though there are those who dissent from this presumption). Thus, if the sole or principal respect of similarity were phenomenological similarity, this would threaten to preclude the possibility that mindreading activity is simulational. That seems unfairly prejudicial against ST.
140
alvin goldman
What about functional similarities, the option that Stich would probably endorse (if he accepted a similarity-based characterization of simulation)? If we follow ST's standard story, this option also runs into problems from the get-go. Pretend, or surrogate, states are described by the standard story as having important functional dissimilarities from their counterparts. Dissimilarities are apparent on both output and input sides. To illustrate, suppose you are asked to predict what you would do if you entered your house and saw a skunk in your parlor. Suppose you make this prediction via a simulation routine, by constructing, among other states, a pretend state of seeing a skunk in the parlor. What are the functional properties of a pretend skunk-seeing state? Presumably they don't include (a) causing you to grab for a skunk-sized object, or (b) causing you to turn around and run out the door. Why? Because, according to the standard ST story, a pretend decision is something that gets taken off-line; it isn't a state that tends to cause behavior that a corresponding genuine decision would cause. So there's a functional difference between a pretend state and its genuine counterpart. Similarly on the input side. Genuine skunk sightings are typically produced by skunks; pretend skunk sightings are not typically so caused. They have a more "endogenous" causal source. Admittedly, these functional differences between genuine skunk sightings and pretend skunk sightings do not exhaust the functional properties of the two states. So, the existence of these functional differences does not prove that the two states fail to share any relevant functional properties. But the selected functional properties are quite important ones, and they are typical of pretend states and their genuine counterparts in general.
If these (types of) functional properties aren't shared, one wonders whether a credible case could ever be made for significant and relevant similarities between pretend states and their counterparts along the functional dimension. Again, this would deny simulationism a fair shake. A pretend skunk sighting might resemble a genuine skunk sighting in more categorical, or intrinsic, respects even if they don't resemble one another very closely in functional terms. For example, a pretend decision might not produce its "natural" behavioral output because it is neurally inhibited. This is a serious possibility, not a merely abstract one. In other words, a pretend decision might closely resemble a genuine decision in "intrinsic" terms but be attended by an inhibition instruction that cancels its normal, downstream effects. This analysis already moves beyond the functional to the neurological level. At the neurological level, there may be better prospects for finding illuminating resemblances among processes. Admittedly, there are many grains of neuroscientific analysis, and it isn't clear ab initio at what grain there might be interesting process resemblances. As I shall show, however, there is ample evidence of neural resemblances, and they greatly strengthen the case for simulation as a robust characteristic of both mindreading and other forms of social cognition. Notice that the evidential value of cognitive neuroscience does not depend on the foregoing argument that neurological respects of similarity are critical. Even if functional respects of similarity were agreed upon as the crucial "truth-makers" for simulation, it could still emerge that cognitive neuroscience provides invaluable evidence for the existence or nonexistence of such functional resemblances. But if the foregoing argument is right, there are more direct reasons to turn to neuroscience.
simulation theory and cognitive neuroscience
141
Simulation and Motor Cognition

Instead of turning immediately to simulation-based mindreading, I want to start by examining mental simulation in a certain type of non-mindreading activity. The activity in question is in the motor domain, the domain of action. The cases in question are cases in which a simulation relation holds between mental activity involving purely covert action-related thoughts and mental activity that executes overt action. In other words, under a variety of action-related conditions in addition to execution, the motor part of the cognitive system is activated (often at subliminal levels) in ways very similar to the way it is activated when it executes overt action. A typical simulating event is imagining (kinesthetically) the performance of an action, or producing a plan for such an action while watching someone else perform it. The main point to be established is that such imagining genuinely simulates execution-related mental activity in the sense of 'simulation' I have specified, namely the resemblance, similarity, or approximate-duplication sense of 'simulation'. (Notice, I am not saying that motor imagination resembles overt action, only that it resembles the mental activity responsible for producing the sort of overt action imagined.) I draw here on a review paper by Marc Jeannerod (2001), one of the leading figures in motor imagery research. Jeannerod summarizes his simulation theory for motor cognition as follows:

The simulation theory to be developed in this paper postulates that covert actions are in fact actions, except for the fact that they are not executed. The theory therefore predicts a similarity, in neural terms, between the state where an action is simulated and the state of execution of that action.
The term S-states [for simulation states] will be used throughout to designate those ‘mental’ states which involve an action content and where brain activity can be shown to simulate that observed during the same, executed action. (2001: S103)
Jeannerod begins by citing behavioral findings suggesting that action imaginings have the same temporal characteristics as the corresponding real, executed actions. For example, Decety, Jeannerod and Prablanc (1989) measured the time it took subjects to walk to a target. When they were blindfolded and encouraged to imagine walking to the target, imagined walking times were very similar to actual walking times. Similarly, in Decety and Jeannerod (1996), subjects were instructed either to actually walk or to imagine walking on beams with different widths. It was assumed that the narrower the width, the more difficult the task. Fitts's law suggests that the more difficult the task, the more slowly it should be executed. This was indeed found, not only for executed walking but for imagined walking as well. In another experiment (Frak et al. 2001) subjects were requested to make estimates of the difficulty of grasping an object placed at different orientations. The times taken to make these estimates were close to the times taken to actually reach and grasp objects at the same orientations. This behavioral evidence suggests that motor imagery recruits neural mechanisms in the subject's motor brain similar to those that are recruited during a real action. Although this behavioral evidence invites a simulationist interpretation, how probative is it? Evidence more specifically targeted to neural activity is obviously pertinent, and
142
alvin goldman
this calls for neuroscientific methods. Such methods have been utilized to compare the activity in various brain regions for several action-related tasks, including (i) action execution, (ii) imagination of action, and (iii) observation of others' actions, which is known to trigger mirror-matching neural activity in monkeys (more on mirror-matching activity below). Jeannerod summarizes the results of various studies by speaking of a "neurophysiological validation of the simulation theory" (2001: S104). By this he does not mean that the activation areas for the various S-states are identical; rather, they partially overlap. There is a core network that pertains to all S-states, he says, but each S-state retains its own specific network. Activation of the motor system during S-states is essential to the simulation theory of motor cognition. The motor system comprises several areas: primary motor cortex, corticospinal pathway, basal ganglia, cerebellum, and premotor cortex. With respect to the primary motor cortex, Jeannerod reports that fMRI studies demonstrate that pixels activated during contraction of a group of muscles are also activated during imagery of a movement involving the same muscles (Roth et al. 1996). Primary motor cortex activation during motor imagery is weaker, or more attenuated, than that of motor execution. Nonetheless, the level of activation during imagery is about 30 percent of the level observed during execution, which is still substantial. If motor cortex is active during S-states, i.e., during "covert actions," this should influence the motoneuron level. This was tested by directly measuring corticospinal excitability using transcranial magnetic stimulation (TMS) of motor cortex during both observed and imagined arm movements (Fadiga et al. 1995, 1999). Motor evoked potentials (MEPs) increased only in those muscles involved in the covert hand action.
For example, MEPs were selectively (differentially) increased in a finger flexor when the subject mentally activated finger flexion, whereas MEPs in the antagonist extensor muscle remained unchanged. The generation of motor outflow by the motor system involves concomitant commands to the vegetative system. Thus, during vigorous movement there is an increase in heart rate and respiration rate. Similarly, an increase in heart rate was found during merely imagined movements (Decety et al. 1993), and respiration rate increased during imagination of exercise, in proportion to the imagined effort (Wuyam et al. 1995). Paccalin and Jeannerod (2000) also found that when an observer merely watched a runner on a treadmill, the observer's respiration rate increased with the speed of the runner. The cerebellum is also part of the motor system, and again cerebellar activation was clearly found in imagined action (Ryding et al. 1993), in perceptually based motor decisions (Parsons et al. 1995), and during action observation of others (Grafton et al. 1996). Lotze et al. (1999) undertook additional fMRI tests to study other motor-related subsystems, such as the premotor cortex and the supplementary motor area (SMA). They also trained their subjects to create kinesthetic images of motor movements without discharges of the muscles or involvement of visual imagery, to avoid possible confounds. Lotze et al. found that both the premotor cortex and SMA were equally activated during both actual and imagined movement. They view their results as "add[ing] further support to the notion that motor imagery is functionally and anatomically related
simulation theory and cognitive neuroscience
143
to motor performance" and as "support[ing] the hypothesis of functional equivalence of motor imagery and motor preparation" postulated by Jeannerod (Lotze et al. 1999: 494). There is also a large degree of overlap between SMA activation during imagined movement and action observation, on the one hand, and during execution, on the other. SMA has the function of acting as a parser for temporally segmenting an action and anticipating its successive steps; this function, apparently, is retained during S-states, not only during execution. If motor imagination is so similar to the preparation portion of motor execution, why doesn't it produce overt movement? Why, or how, is it "taken off-line"? The standard answer, as I indicated earlier, is inhibition of movement execution during imagination. Lotze et al. (1999) locate this inhibitory activity at least partly in the posterior cerebellum. More recent work suggests a role for the parietal area. Schwoebel et al. (2002) report the case of C. W., a patient with bilateral parietal damage due to two separate strokes. When C. W. imagines movements, he actually produces them, but without being aware of doing so. He gives every indication of understanding instructions to merely imagine hand movements, and reports no awareness of overtly moving his hands. So it appears that the inhibitory signal has been selectively removed by his parietal damage. In his case, motor imagination has the unintended consequence of following through to execution (Wilson 2003). This illustrates how brain lesion evidence can contribute to the understanding of mental processes (at the neural level, of course). Most of the foregoing discussion has concentrated on motor imagery, only one of the S-states treated by Jeannerod (2001). Let me now say more about the action-related states that occur when one observes another person acting.
The fundamental discovery here was made in the laboratory of Giacomo Rizzolatti, whose group was studying the premotor cortex of macaque monkeys, using single-cell recordings. The particular brain region was the ventral portion of the premotor cortex, variously called F5 or Brodmann's area 44. They first discovered that different groups of neurons code for distinctive types of actions, such as grasping, holding, or tearing an object. Further testing of the monkeys revealed the surprising fact that certain groups of these neurons fire at close to maximal rates not only when the animal executes the distinctive type of action those neurons code but also when the animal merely observes another monkey or an experimenter execute the same action with respect to a goal object. This matching or mirroring of observation and execution was quite unexpected and led the investigators to label the neurons mirror neurons (Gallese et al. 1996; Rizzolatti et al. 1996). As reported above, a rather similar (and presumably homologous) action mirroring system is found in humans. In the human case, observing an action, such as grasping an object or tracing a figure in the air, produces electromyographically detectable activation in the muscle groups appropriate for the observed action, although no overt action is performed. This is like taking the plan of action off-line, apparently because of inhibitory control, as discussed earlier. Mirror neuron activity can obviously be interpreted as a simulational event, indeed an interpersonal simulational event, inasmuch as the same group of neurons fire at close-to-maximal rates both in the observer's premotor cortex and in the actor's premotor cortex. Gallese and Goldman (1998) speculated that mirror neurons may be the precursor of the mindreading of intention, although it would be overreaching to say that
144
alvin goldman
mirror-neuron activity per se is mindreading, because mirror-neuron activation undoubtedly isn't, all by itself, the substrate of a belief about the actor's mental state. Research on mirroring, or resonance, phenomena has produced many other impressive findings of a comparable sort, clear evidence that minds do a lot of automatic simulating of other minds (see Gallese 2001, 2003 for reviews). Rizzolatti's group extended their work with an fMRI study of human subjects who were shown actions made not only with the hand but with the mouth or a foot, e.g., biting an apple or kicking a ball (Buccino et al. 2001). There was matching neuronal activity in each case. In other words, when we observe an action, our motor system becomes active as if we were executing the very same action being observed. Indeed, action observation determines a somatotopically organized activation of premotor cortex, a pattern similar to that of the classical motor cortex somatotopy. Another intriguing simulation-related discovery in the motor domain was also inspired by monkey studies. In monkeys there is evidence that a close link exists between common objects and the actions necessary to interact with them. Single-unit recording studies showed that some neurons in the monkey discharge not only when the monkey executes grasping movements but also when it merely looks at graspable objects associated with such movements (Murata et al. 1997; Sakata et al. 1995). Apparently, actions associated with graspable objects are automatically evoked whenever the monkey sees these objects. Because tools are commonly associated with specific hand movements, Chao and Martin (2000) predicted that if humans were shown pictures of tools, but not other categories of objects, there would be activity in regions of the brain that store information about motor-based properties. This was indeed confirmed, using fMRI methodology.
Thus, merely thinking about a tool tends to evoke a simulation of associated action-related processes in the brain. None of the foregoing discussion directly addresses motor-related mindreading. Let me now mention another study of motor cognition that, arguably, crosses the threshold from simulation to simulation for (interpersonal) mindreading. In a study by Chaminade et al. (2001), subjects were asked to try to anticipate a sequence of movements based on cursive handwriting. What they saw was simply movements of black dots on a white screen depicting different trajectories. These movements were based on the handwriting of either the two-letter sequence 'll' (two lower-case 'l's) or the sequence 'ln'. The presented stimuli depicted only the first element of the sequence, and subjects were asked to anticipate the goal of the whole sequence. In principle this is possible because the kinematics of the graphic production is influenced by anticipation of the next letter to be written. An observer might exploit these differences to predict the second letter, for example, by simulating in his own motor system the kinematics in question, thereby reconstructing the kinematics of what he observed. Apparently this is how subjects did make predictions at a level greater than chance. Evidence for this explanation is as follows. If the motor simulation explanation were correct, the same areas specifically activated when actually performing the motor actions should also be activated during observation and prediction. This is consistent with what was found by neuroimaging. The cortical areas of the subjects specifically activated were left pars opercularis (Broca's area) and left superior parietal lobule (SPL). The latter is independently thought to be a center for the kinematics of writing, and the former has a well-known role in the
simulation theory and cognitive neuroscience
production of language. Thus, people seem to use their motor capacities simulatively to interpret perceived actions performed by others. The Chaminade et al. task did not clearly implicate the observers’ ascriptions of movement intentions. However, there has been a great deal of research associated with observation of biological motion, even when the motion involves geometric figures like triangles and squares. This goes back to research by Heider and Simmel (1944). Normal subjects standardly describe such motions in mentalistic terms. This suggests that the motor system is used simulatively not merely to make purely kinematic interpretations of biological movement but intention interpretations as well.
Simulation and Face-based Emotion Attribution

In the previous section I talked about the robustness of simulation in a certain corner of cognition, motor cognition. Only at the end did I adduce a possible case of simulational mindreading. Nonetheless the entire discussion was quite relevant to the simulation theory of mindreading by providing crucial pieces of evidence that imagining (or pretending) is properly viewed as a species of mental simulation in the similarity sense I have specified. In this section, all the cases to be discussed are definitely cases of mindreading. They comprise, however, a somewhat atypical sector of mindreading, different from the cases usually treated in the literature (especially the philosophical literature). The stock examples in the literature are attributions of garden-variety propositional attitudes: belief, desire, intention, and so forth. The present section instead focuses on emotion attributions, specifically, attributions based on a target’s facial expression. Though the topic is somewhat narrow, it is a natural and important one, because face-based emotion mindreading is an extremely common and primitive form of human mindreading. It has also been heavily studied, of late, with the tools of cognitive neuroscience. My treatment of face-based emotion mindreading will not make use of the constructs of “imagination” or “pretense.” Imagination or pretense is part of the standard version of simulation theory, but is merely optional under the wider conception of simulation as (approximate) matching or replication (either successful or merely attempted). It is that wider conception I continue to apply here. Most of the studies to be discussed involve patients with brain lesions. The principal tests given to these patients are widely used tasks in which subjects observe slides or videotapes of facial expressions and are asked to classify the emotions shown. 
The choices are drawn from a set of about six basic emotions: sadness, happiness, fear, anger, disgust, and surprise. Since these emotions are mental states, judgments or attributions made by the subjects are attributions of mental states to the depicted targets, and hence mindreading judgments. Our question is whether or not these attributions are executed by a simulation heuristic. To summarize the results of research in several laboratories, it has been established that for each of three basic emotions there is a paired deficit in experiencing the emotion and in attributing, or “recognizing,” it in a facial expression. (For a review see Goldman and Sripada 2005 or Goldman 2006: ch. 6.) These deficits were selective in the sense that
alvin goldman
patients impaired specifically in emotion X had no difficulty in recognizing emotion Y or Z but only in recognizing X. (Sometimes matters were less clear-cut.) In short, face-based emotion recognition displays a systematic pattern of paired deficits. One could also formulate the matter in terms of double, indeed triple, dissociations. Recognition of emotion X can be intact while recognition of Y is impaired, and recognition of Y can be intact while recognition of X is impaired; and so forth. Adolphs and colleagues studied patients to determine whether damage to the amygdala, known to be associated with the production of fear, might also affect face-based recognition of fear (Adolphs et al. 1994). One patient, S. M., was a 30-year-old woman with Urbach-Wiethe disease, a rare metabolic disorder that resulted in bilateral destruction of her amygdalae. S. M.’s relatively fearless nature was evidenced, for example, by her not feeling afraid when shown film clips that normally elicit fear (e.g., The Shining and The Silence of the Lambs); but she seemed to experience other emotions strongly (Adolphs and Tranel 2000). Also, S. M. was tested on various face-based emotion recognition tasks. S. M. was abnormal in face-based recognition of fear. Her ratings of fearful faces correlated less with normal ratings than did those of any of 12 brain-damaged control subjects (with different kinds of brain damage), and fell 2–5 standard deviations below the mean of the controls when the data were converted to a normal distribution. She was also abnormal to a lesser extent in recognizing anger and surprise. Sprengelmeyer et al. (1999) studied another patient, N. M., who also had bilateral amygdala damage. N. M. was also abnormal in experiencing fear. He liked dangerous activities, such as dangling from a helicopter while hunting deer in Siberia. Like S. M., N. M. also exhibited a severe and selective impairment in fear recognition. 
The second emotion for which a paired deficit was found is disgust. As background, it is well established from both animal and human studies that taste processing is localized in the anterior insula region, and that disgust is an elaboration of a phylogenetically more primitive distaste response (Rozin et al. 2000). Calder et al. (2000) studied a patient, N. K., who suffered from insula and basal ganglia damage. His overall score on a questionnaire for the experience of disgust was significantly lower than that of controls, but his scores for anger and fear did not significantly differ from the scores of controls. Thus, his experience of disgust appeared to be selectively impaired. In tests of his ability to recognize emotions in faces, N. K. similarly showed significant and selective impairment in disgust recognition. Only in that emotion was his recognitional ability abnormal. The third emotion for which a paired deficit has been found is anger. The neurotransmitter dopamine is apparently involved in the processing of aggression in social-agonistic encounters and plays an important role in mediating the experience of anger. Dopamine levels in rats are elevated during social-agonistic encounters, and increased dopamine levels can lead to enhanced appetitive aggression and agonistic dominance. Lawrence et al. (2002) speculated that lowering the level of dopamine in normal individuals might not only reduce the production or experience of anger but also impair their ability to recognize anger in faces. So a dopamine antagonist, sulpiride, was administered to otherwise normal subjects, producing a temporary impairment in their dopamine system. As predicted, the subjects receiving sulpiride performed significantly worse than controls at recognizing angry faces, but no differently in recognizing facial expressions of other emotions.
What accounts for this striking pattern of paired deficits for three emotions? What does it tell us about the ordinary performance of face-based emotion reading tasks that damage to the system in which an emotion is produced, or experienced, also damages the individual’s capacity to recognize that emotion specifically? An obvious hypothesis is that when a normal person successfully executes a face-based emotion reading task for some emotion E, they do so via a simulation heuristic. They use the same neural system in making a recognition judgment as is used in experiencing emotion E. The classification process somehow uses the target emotion itself, or the neural substrate of that emotion. Naturally, when that neural substrate is damaged, emotion E is not produced and therefore they cannot correctly recognize emotion E in another’s face. This germ of a simulationist idea was suggested in several of the original studies reported above, but not developed in detail. Goldman and Sripada (2005) and Goldman (2006: ch. 6) set out the argument more systematically. Let the simulation hypothesis for mindreading be the hypothesis that some of the same events, processes, or machinery used in tokening state M in oneself are also used to arrive at an accurate attribution of state M to another. This hypothesis predicts that damage to the system used to token M in oneself will also damage one’s ability to attribute M accurately to others. Precisely this prediction is what the pattern of paired deficits bears out, in spades. So simulation theory predicts this observed pattern of findings. Does the theory-theory also predict them? Goldman and Sripada argue to the contrary. They articulate three ways that a theory-theorist might try to explain the paired deficit data. However, two of these possible explanations are strongly contravened by separate evidence and the third would be very ad hoc. 
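The divergent predictions can be put schematically. The following toy model is mine, not from Goldman and Sripada, and all the names in it are hypothetical. It encodes the simulation hypothesis as recognition reusing the emotion-producing substrate, and a contrasting theory-driven account as recognition consulting separate declarative knowledge; only the first yields the paired, selective deficits described above.

```python
# Toy illustration (hypothetical names): why the simulation hypothesis,
# but not a separate-knowledge account, predicts paired deficits.

class SimulationReader:
    """Recognition reuses the emotion-producing substrate."""
    def __init__(self):
        self.substrate = {"fear": True, "disgust": True, "anger": True}

    def experience(self, emotion):
        return self.substrate[emotion]   # can the emotion be produced?

    def recognize(self, emotion):
        return self.substrate[emotion]   # the same machinery, reused

class TheoryReader:
    """Recognition draws on separate declarative knowledge."""
    def __init__(self):
        self.substrate = {"fear": True, "disgust": True, "anger": True}
        self.knowledge = {"fear": True, "disgust": True, "anger": True}

    def experience(self, emotion):
        return self.substrate[emotion]

    def recognize(self, emotion):
        return self.knowledge[emotion]   # independent of the substrate

def lesion(reader, emotion):
    reader.substrate[emotion] = False    # e.g. bilateral amygdala damage

sim, theo = SimulationReader(), TheoryReader()
lesion(sim, "fear")
lesion(theo, "fear")

# Simulation model: a paired, selective deficit.
assert not sim.experience("fear") and not sim.recognize("fear")
assert sim.recognize("disgust")          # other emotions spared

# Separate-knowledge model: experience impaired, recognition intact.
assert not theo.experience("fear") and theo.recognize("fear")
```

Lesioning the shared substrate in the first model impairs both the experience and the recognition of fear while sparing disgust, mirroring the S. M. and N. M. pattern; the separate-knowledge model leaves recognition intact, which is exactly the prediction the paired-deficit data contradict.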
One possible explanation is that subjects with face-based emotion reading deficits have difficulty with the perceptual processing of faces. In fact many of the studies tested their subjects on measures designed to detect such perceptual deficiencies (such as the Benton Face Matching Task), and the subjects were normal. Another possibility is that subjects with emotion reading deficits are selectively impaired in declarative knowledge concerning the emotion in question, for example, knowledge of its typical elicitors or behavioral effects. But researchers report that their impaired subjects have no such declarative knowledge deficit. For example, Calder et al. (2001) reported that “patients with disgust recognition impairments are able to provide plausible situations in which a person might feel disgusted and do not show impaired knowledge of the concept of disgust.” A third possible explanation is that there is damage to the pairing of semantic labels for emotions and representations of facial configurations. But this explanation would require the pairing to be damaged very selectively. First, the labeling must be damaged for one emotion but spared for others. Second, the label must be rendered inaccessible specifically from visual representations of faces, not from general knowledge, because patients show command of the label when verbally discussing general facts about the impaired emotion type. This kind of damage is not logically impossible, but it is entirely ad hoc to postulate it as an explanation of the observed deficits. Thus, a theorizing account of normal face-based emotion recognition is highly implausible. A number of fMRI studies have also been done, especially for the case of disgust, that decisively support the simulation story. A recent, very convincing, study shows that the
same brain regions that are preferentially activated when experiencing disgust are also preferentially activated when merely observing disgust facial expressions (Wicker et al. 2003). In two “visual” runs, normal participants passively viewed movies of someone smelling the contents of a glass, which could either be disgusting, pleasant, or neutral, and showing the natural facial expression for the respective emotion. In two “olfactory” runs, the same participants inhaled disgusting or pleasant odorants through a mask on their nose and mouth. The core finding was that the left anterior insula and the right anterior cingulate cortex are preferentially activated under both conditions (as compared with appropriate contrasting conditions). Thus, it appears that observation of disgust-expressive faces automatically activates the same neural substrates that are implicated in the experience of disgust. This closely parallels the patterns of matching, or “resonance,” across different individuals that were discussed in the previous section, patterns in which the undergoing or experiencing of a given mental state in one individual is matched by the undergoing of a similar mental state (often attenuated) in an observing individual. When we understand simulation in terms of replication or similarity, the Wicker et al. study presents a clear case of interpersonal simulation. The study does not show that disgust simulation is used for mindreading, because the participants were not given any attribution or judgment tasks to perform. But the paired-deficit studies discussed earlier point to the conclusion that normal subjects use their own (similar) emotion equipment and experience when making face-based attributions, in accord with the simulation theory of mindreading. Mirroring or resonance phenomena have also been recently discovered in other domains, including pain and the feeling of touch (see Goldman 2006, for details). 
Putting all of this together, there is now a massive case for simulation and simulational mindreading across an impressive swath of territory (although this type of resonance is not known to extend to the propositional attitudes). Moreover, the great bulk of the evidence flows from various methods of cognitive neuroscience. It is hard to imagine how comparably compelling evidence could be obtained by other methods.
Simulation is a Robust and Theoretically Interesting Category

Let us now return to the question of whether the term ‘simulation’, as explained above, picks out a natural and theoretically interesting psychological category. I submit that it does. I have documented the existence of a well-unified family of phenomena that seem to be “naturally”, not merely conventionally, related to one another and of substantial scientific (and theoretical) interest. And what I have presented is only a fraction of what is “out there”. That these phenomena are of theoretical interest is not simply a report of my personal judgment but a summary of many scientists’ interest in the foregoing phenomena, and their recognition that they comprise a single category. In referring to a “family” of phenomena, I acknowledge differences in types of simulation. This doesn’t undercut the naturalness or theoretical interest of the category. There are also a large number of different atomic elements, yet that doesn’t mean that the category ‘atomic element’ isn’t a natural or theoretically interesting category. The same holds for ‘cell’,
‘force’, ‘economy’, and almost all other theoretical notions in science. There are many types of cells, several types of physical forces, and many types of economies. I have not tried to elaborate the variety of types of simulation, a project that would take us too far afield (see Goldman 2006). But here are a few examples. First, one might distinguish between automatic processes of simulation, such as those involved in emotion recognition and mirroring phenomena, and more effortful processes of simulation, such as controlled exercises of the imagination. Second, in simulation-based mindreading, there would presumably be simulational routines used for predictive mindreading and other simulational routines used for retrodictive mindreading. These would not be alike in all relevant ways. Also, I cheerfully acknowledge that some mindreading processes might use both simulational elements and theorizing elements. A hybrid theory of mindreading, of the kind I favor, would allow this possibility. But that doesn’t conflict with the notion that, to the extent that a routine involves simulation, there is a univocal category of simulation of which it is an instance. What leads Stich and Nichols to their conclusion that ‘simulation’ fails to designate a natural, unified, or theoretically interesting category, and hence is worthy of retirement? I would diagnose the matter as follows. Not unreasonably, they have emphasized examples of simulation-based mindreading in which a cognitive mechanism is taken off-line, fed pretend inputs, and allowed to generate an output, which is then used for attribution to a target. They point out that other putative examples of simulation don’t fit this description. Although this is a reasonable interpretation of matters, given most of the traditional simulationist literature, it is simply too narrow and constraining. 
It is especially too narrow in light of the automatic mirroring phenomena I have described here, phenomena that many scientists find both compelling and theoretically interesting. Finally, what about Stich and Nichols’ older critique of simulation theory as a “radical departure” from mainstream cognitive science? Although simulationist ideas may still be viewed as a “radical departure” in mainstream developmental psychology, where the great bulk of work on mentalizing has been done, the scene has shifted quite substantially in cognitive neuroscience. Cognitive neuroscience no longer views simulation theories as a radical departure. Indeed, they are fairly popular in some respected quarters. So whatever may have been true in the 1990s, the landscape has been changing. I predict further changes in the same direction as the kinds of research findings sampled here become better known.
References

Adolphs, R. and Tranel, D. (2000) “Emotion Recognition and the Human Amygdala,” in J. P. Aggleton (ed.), The Amygdala: A Functional Analysis, 2nd edn., Oxford: Oxford University Press, pp. 587–630.
Adolphs, R., Tranel, D., Damasio, H., and Damasio, A. (1994) “Impaired Recognition of Emotion in Facial Expressions following Bilateral Damage to the Amygdala.” Nature 372: 669–72.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., Seitz, R. J., Zilles, K., Rizzolatti, G., and Freund, H.-J. (2001) “Action Observation Activates Premotor and Parietal
Areas in a Somatotopic Manner: An fMRI Study.” European Journal of Neuroscience 13(2): 400–4.
Calder, A. J., Keane, J., Manes, F., Antoun, N., and Young, A. W. (2000) “Impaired Recognition and Experience of Disgust Following Brain Injury.” Nature Neuroscience 3: 1077–8.
Calder, A. J., Lawrence, A. D. and Young, A. W. (2001) “Neuropsychology of Fear and Loathing.” Nature Reviews Neuroscience 2: 352–63.
Chaminade, T., Meary, D., Orliaguet, J.-P., and Decety, J. (2001) “Is Perceptual Anticipation a Motor Simulation? A PET Study.” NeuroReport 12: 3669–74.
Chao, L. L. and Martin, A. (2000) “Representation of Manipulable Man-made Objects in the Dorsal Stream.” NeuroImage 12: 478–94.
Decety, J. and Jeannerod, M. (1996) “Mentally Simulated Movements in Virtual Reality: Does Fitts’s Law Hold in Motor Imagery?” Behavioural Brain Research 72: 127–34.
Decety, J., Jeannerod, M. and Prablanc, C. (1989) “The Timing of Mentally Represented Actions.” Behavioural Brain Research 34: 35–42.
Decety, J., Jeannerod, M., Durozard, D., and Baverel, G. (1993) “Central Activation of Autonomic Effectors during Mental Simulation of Motor Actions in Man.” Journal of Physiology 461: 549–63.
Fadiga, L., Buccino, G., Craighero, L., Fogassi, L., Gallese, V. and Pavesi, G. (1999) “Corticospinal Excitability is Specifically Modulated by Motor Imagery: A Magnetic Stimulation Study.” Neuropsychologia 37: 147–58.
Fadiga, L., Fogassi, L., Pavesi, G., and Rizzolatti, G. (1995) “Motor Facilitation during Action Observation: A Magnetic Stimulation Study.” Journal of Neurophysiology 73: 2608–11.
Frak, V., Paulignan, Y. and Jeannerod, M. (2001) “Orientation of the Opposition Axis in Mentally Simulated Grasping.” Experimental Brain Research 136: 120–7.
Gallese, V. (2001) “The ‘Shared Manifold’ Hypothesis: From Mirror Neurons to Empathy.” Journal of Consciousness Studies 8(5–7): 33–50.
Gallese, V. 
(2003) “The Manifold Nature of Interpersonal Relations: The Quest for a Common Mechanism.” Philosophical Transactions of the Royal Society of London B, Biological Sciences 358: 517–28.
Gallese, V. and Goldman, A. (1998) “Mirror Neurons and the Simulation Theory of Mindreading.” Trends in Cognitive Sciences 2: 493–501.
Gallese, V., Fadiga, L., Fogassi, L., and Rizzolatti, G. (1996) “Action Recognition in the Premotor Cortex.” Brain 119: 593–609.
Goldman, A. I. (1989) “Interpretation Psychologized.” Mind and Language 4: 161–85.
Goldman, A. I. (2006) Simulating Minds: The Philosophy, Psychology and Neuroscience of Mindreading. New York: Oxford University Press.
Goldman, A. I. and Sripada, C. S. (2005) “Simulationist Models of Face-based Emotion Recognition.” Cognition 94: 193–213.
Gordon, R. (1986) “Folk Psychology as Simulation.” Mind and Language 1: 158–71.
Grafton, S. T., Arbib, M. A., Fadiga, L. and Rizzolatti, G. (1996) “Localization of Grasp Representations in Humans by Positron Emission Tomography. 2. Observation Compared with Imagination.” Experimental Brain Research 112: 103–11.
Heider, F. and Simmel, M. (1944) “An Experimental Study of Apparent Behavior.” American Journal of Psychology 57: 243–59.
Jeannerod, M. (2001) “Neural Simulation of Action: A Unifying Mechanism for Motor Cognition.” NeuroImage 14: S103–S109.
Lawrence, A. D., Calder, A. J., McGowan, S. M. and Grasby, P. M. (2002) “Selective Disruption of the Recognition of Facial Expressions of Anger.” NeuroReport 13(6): 881–4.
Lotze, M., Montoya, P., Erb, M., Hulsmann, E., Flor, H., Klose, U., Birbaumer, N. and Grodd,
W. (1999) “Activation of Cortical and Cerebellar Motor Areas during Executed and Imagined Hand Movements: An fMRI Study.” Journal of Cognitive Neuroscience 11: 491–501.
Murata, A., Fadiga, L., Fogassi, L., Gallese, V., Raos, V. and Rizzolatti, G. (1997) “Object Representation in the Ventral Premotor Cortex (Area F5) of the Monkey.” Journal of Neurophysiology 78: 2226–30.
Nichols, S. and Stich, S. P. (2003) Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Oxford University Press.
Nichols, S., Stich, S. P., Leslie, A. and Klein, D. (1996) “The Varieties of Off-line Simulation,” in P. Carruthers and P. Smith (eds.), Theories of Theories of Mind, Cambridge: Cambridge University Press.
Paccalin, C. and Jeannerod, M. (2000) “Changes in Breathing during Observation of Effortful Actions.” Brain Research 862(1–2): 194–200.
Parsons, L. M., Fox, P. T., Downs, J. H., Glass, T., Hirsch, T. B., Martin, C. C., Jerabek, P. A., and Lancaster, J. L. (1995) “Use of Implicit Motor Imagery for Visual Shape Discrimination as Revealed by PET.” Nature 375: 54–8.
Rizzolatti, G., Fadiga, L., Gallese, V., and Fogassi, L. (1996) “Premotor Cortex and the Recognition of Motor Actions.” Cognitive Brain Research 3: 131–41.
Roth, M., Decety, J., Raybaudi, M., Massarelli, R., Delon-Martin, C., Segebarth, C., Morand, S., Gemignani, A., Decorps, M. and Jeannerod, M. (1996) “Possible Involvement of Primary Motor Cortex in Mentally Simulated Movement: A Functional Magnetic Resonance Imaging Study.” NeuroReport 7: 1280–4.
Rozin, P., Haidt, J. and McCauley, C. (2000) “Disgust,” in M. Lewis and J. Haviland (eds.), Handbook of Emotions, New York: The Guilford Press.
Ryding, E., Decety, J., Sjöholm, H., Stenberg, G. and Ingvar, H. (1993) “Motor Imagery Activates the Cerebellum Regionally: A SPECT rCBF Study with Tc-HMPAO.” Cognitive Brain Research 1: 94–9.
Sakata, H., Taira, M., Murata, A. and Mine, S. 
(1995) “Neural Mechanisms of Visual Guidance of Hand Action in the Parietal Cortex of the Monkey.” Cerebral Cortex 5: 429–38.
Saxe, R., Carey, S. and Kanwisher, N. (2004) “Understanding Other Minds: Linking Developmental Psychology and Functional Neuroimaging.” Annual Review of Psychology 55: 87–124.
Schwoebel, J., Boronat, C. B., and Branch Coslett, H. (2002) “The Man Who Executed ‘Imagined’ Movements: Evidence for Dissociable Components of the Body Schema.” Brain and Cognition 50(1): 1–16.
Sprengelmeyer, R., Young, A. W., Schroeder, U., Grossenbacher, P. G., Federlein, J., Buttner, T., and Przuntek, H. (1999) “Knowing No Fear.” Proceedings of the Royal Society of London, Series B: Biological Sciences 266: 2451–6.
Stich, S. P. and Nichols, S. (1992) “Folk Psychology: Simulation or Tacit Theory?” Mind and Language 7: 35–71.
Stich, S. P. and Nichols, S. (1995) “Second Thoughts on Simulation,” in M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications, Oxford: Blackwell.
Stich, S. P. and Nichols, S. (1997) “Cognitive Penetrability, Rationality and Restricted Simulation.” Mind and Language 12(3/4): 297–326.
Wicker, B., Keysers, C., Plailly, J., Royet, J.-P., Gallese, V., and Rizzolatti, G. (2003) “Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust.” Neuron 40: 655–64.
Wilson, M. (2003) “Imagined Movements that Leak Out.” Trends in Cognitive Sciences 7(2): 53–5.
Wuyam, B., Moosavi, S. H., Decety, J., Adams, L., Lansing, R. W. and Guz, A. (1995) “Imagination of Dynamic Exercise Produced Ventilatory Responses which were More Apparent in Competitive Sportsmen.” Journal of Physiology 482: 713–24.
9 The Triumph of a Reasonable Man: Stich, Mindreading, and Nativism

KIM STERELNY
1 Beyond Simulation and the Theory-Theory

Humans interpret others. We are able to anticipate both the actions and intentional states of other agents. We do not do so perfectly but, since we are complex and flexible creatures, even limited success needs explanation. For some years now Steve Stich (frequently in collaboration with Shaun Nichols) has been both participant in, and observer of, debates about the foundation of these capacities (Stich and Nichols 1992, 1995). As a commentator on this debate, Stich (with Nichols) gave explicit and fair-minded sketches of the cognitive architectures presupposed by the various theories of mindreading. As a participant, Stich has mostly been a defender of the theory-theory, the view that normal human agents have an internally represented theory of other agents and they use that theory in interpreting other agents. The main recent rival to this position, simulationism, claims that agents use their own decision-making mechanisms as a model of those of other agents, and derive their predictions by modeling others in something like the way aeronautical engineers derive predictions from the use of scale models in wind tunnels. Stich has been skeptical about this alternative, for on his view simulation theory makes mistaken predictions both about the development of interpretive competence and about the pattern of interpretive success and failure. In their most recent work, Stich and Nichols have shifted ground, developing an impressive simulationist/theory-theory hybrid (Nichols and Stich 2004). I do not intend to evaluate this hybrid model in this paper. Rather, I intend to explore its consequences for nativist conceptions of interpretation. Under the influence of the linguistics model, nativist ideas have become increasingly prominent in cognitive and evolutionary psychology. Applied to mindreading, nativism emerges as the idea that folk psychological concepts, folk psychological principles, or both, are innate. 
They develop early, universally and independently of any informational contribution from experience. Folk psychology is developmentally entrenched. In recent work, I have been increasingly skeptical of this nativist conception of our interpretive capacities (Sterelny 2003b, 2006), and
I shall argue that, if Stich and Nichols are right about the architecture of mindreading, these skeptical arguments are reinforced. This conclusion would not be very exciting if their view was hopelessly implausible. But in my view, a hybrid approach to the problem of understanding our interpretive capacities has a good chance of being true. Simulationist ideas are made plausible by the suspicion that the theory-theory over-intellectualizes interpretation; this is particularly evident from the fact that relatively young children (five-year-olds) are quite competent interpreters. If we interpret by learning and applying a theory of mind, how come we can do this so young? Perhaps our early facility with a folk psychological theory can be explained by supposing that it is innate. But prediction does not come through theory alone: data and auxiliary hypotheses are needed. To apply innate principles to a particular case, they must be combined with a rich and accurate data set. For even if our theory of others is itself innate, interpretation requires a considerable database if the theory-theory is true, so early competence remains a puzzle for it. Yet simulation cannot be the whole story, for in predicting others we are sometimes able to allow not just for differences in beliefs and preferences, but also for different cognitive capacities. Good poker players regularly exploit weaker ones’ fallible probabilistic reasoning. Prima facie, interpretation seems likely to be the result of some mix of (as they put it) “information-poor” and “information-rich” processes. So the consequences for nativism of a hybrid model of interpretation are important. First, however, an outline of that model.
2 A Hybrid Theory of Interpretation and Prediction

The Stich–Nichols conception of mindreading begins with a picture of an intentional system without metarepresentational capacities. The mind of this hypothetical agent includes a belief system with an UpDater and a Planner: these are inference mechanisms that take input from, and return output to, the belief box. The UpDater, as the name suggests, updates the belief box in response to new information, whereas the Planner takes goals as inputs and gives as output a proposed course of action to realize that goal. The belief and desire boxes feed into a practical-reasoning decision-making system, which in turn feeds into action control. What would we need to add to this agent to make it an interpreting intentional system: an agent with beliefs about the acts and thoughts of other agents? On the Stich–Nichols view, first, our interpreting agent needs to be sensitized to the goals of other agents. With this, a minimal form of interpretation is already possible. Having attributed a desire to another agent, our interpreting agent can use her own beliefs and her own Planner to determine a strategy through which the interpretive target can achieve his goal, and she can then predict that the target will exploit that strategy. Of course, these interpretive capacities are quite limited even if the interpreting agent is a sophisticated detector of others’ goals, for there is no recognition of the differences in belief between interpreter and target. So such an interpreter will often mispredict the target’s actions, for a strategy she would use will sometimes depend on a conception of the world that the target does not share.
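The limitation just described can be made concrete with a toy sketch (mine, not Stich and Nichols’; the function and the train scenario are hypothetical): a goal-only interpreter who runs her own Planner over her own belief box mispredicts exactly when the target’s beliefs diverge from hers.

```python
# Toy sketch (hypothetical names): a goal-only interpreter who plans
# with her OWN beliefs mispredicts a target whose beliefs differ.

def planner(goal, beliefs):
    """Stand-in for the Planner: choose an action that, given the
    supplied beliefs, would realize the goal."""
    if goal == "catch the train":
        if "train leaves from platform 2" in beliefs:
            return "go to platform 2"
        if "train leaves from platform 5" in beliefs:
            return "go to platform 5"
    return "no plan"

# The interpreter knows the platform was changed; the target does not.
interpreter_beliefs = {"train leaves from platform 5"}
target_beliefs = {"train leaves from platform 2"}

# The minimal interpreter detects the target's goal, then runs her own
# Planner over her own belief box -- her only resources at this stage.
prediction = planner("catch the train", interpreter_beliefs)
actual = planner("catch the train", target_beliefs)

assert prediction == "go to platform 5"
assert actual == "go to platform 2"
assert prediction != actual   # misprediction from an unrecognized belief difference
```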
Stich and Nichols suggest that more sophisticated interpretive capacities depend on co-opting a cognitive capacity that is likely to be present for other purposes: a capacity for representing and reasoning about hypothetical situations. Suppose, for example, we are worried about the consequences for our academic program if the department chair were to resign. To reason about such scenarios we have (they suggest) a dedicated workspace, a “Possible World Box.” When we reason hypothetically, and all goes well, the hypothetical is “switched on” in the Possible World Box. Thus we write “The chairman has resigned” into the Possible World Box and lock that proposition on. We then add those of our beliefs that are consistent with that supposition to this workspace and then UpDate in the Possible World Box. The result is our prediction of what would happen should the chair resign. In the Stich–Nichols view, the Possible World Box is co-opted to account for the differences in the target’s beliefs and the interpreter’s beliefs. The interpreter loads the Possible World Box with her best estimate of the target’s beliefs, and she uses the contents of the Possible World Box and the Planner to determine the target’s likely actions. In their view, importantly, a specific heuristic guides this procedure of estimating the target’s beliefs. The default procedure is to suppose that the interpreter’s beliefs and the target’s beliefs are the same. This default explains why we can routinely attribute common knowledge and its obvious implications, and why interpretation is often quite successful with strangers. But though this is the default procedure, it is overridden in particular cases. We use information about differences in perceptual perspective (“she could not see that from there”), information from conversations and other social interactions, and information from the agent’s behavior to fine-tune our estimate of the target’s beliefs. 
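As a rough illustration, the default-and-override procedure might be sketched as follows (a toy model of my own; the function name and example beliefs are hypothetical, not Stich and Nichols’): copy the interpreter’s belief set into the Possible World Box, then let perspectival, conversational, or behavioral evidence override particular defaults before the Planner is run.

```python
# Toy sketch (hypothetical names) of the default-and-override heuristic:
# the interpreter's own beliefs are copied into the Possible World Box,
# then corrected by specific evidence about the target.

def fill_possible_world_box(own_beliefs, overrides):
    """Default: attribute the interpreter's own beliefs; then apply
    overrides drawn from perceptual, conversational, or behavioral cues."""
    box = set(own_beliefs)
    for dropped, added in overrides:
        box.discard(dropped)
        if added is not None:
            box.add(added)
    return box

own = {"water is wet",                      # common knowledge survives the default
       "there is a note on the office door"}

# Perspective evidence ("she could not see that from there"): the default
# attribution of the note-belief is overridden, with nothing put in its place.
overrides = [("there is a note on the office door", None)]

target_world = fill_possible_world_box(own, overrides)

assert "water is wet" in target_world
assert "there is a note on the office door" not in target_world
```

The adjusted set is what the interpreter would then hand to her Planner to estimate the target’s likely actions.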
So, for example, we infer that she could not have realized that the fish is poisonous, for otherwise she would not have picked it up. There are many common desires, too (especially avoidance-desires), so a parallel argument suggests that our default is to suppose that others have the same desires that we do. Just as we know without direct evidence that our target will believe that a crow is smaller than a whale, equally we know without direct evidence that Jane will not want to go to work with her ears painted blue. Despite this apparent parallel, this default-desire heuristic does not play a central role in the Stich–Nichols picture of desire estimation. On their view, in its earlier developmental stages, we read desire off other agents' expression, orientation, and action. The fact that Peter is reaching for the beer in plain sight in front of him makes it easy to infer that he wants a beer. We explain our own actions intentionally, too, and Stich and Nichols have a suggestion about first-person interpretation, one which does not depend on an information-rich process. Instead, our minds are equipped with a special-purpose detector that registers a belief in the belief box (or a preference in the preference box) and writes a further metabelief into the belief box. This loop involves no theory of mind or other information-rich understanding of intentional states. Detecting one's own current beliefs is just a self-monitoring process, one which has the effect of adding a belief to the belief box. It is a brute-causal process, and is relevant only to self-awareness: it plays no direct role in understanding others. Monitoring your own thoughts is not at all like recognizing the thoughts of others. Reasoning about your own thoughts, on the other hand, is like
stich, mindreading, and nativism
reasoning about those of others. Reasoning about thought is symmetrical. Working out what you would want if (say) your current desires were satisfied uses the same mechanisms as those involved in counterfactual predictions of others' desires. This theory of mindreading is a hybrid. The utility of the default attribution of the interpreter's belief set to the Possible World Box depends on interpreter-target similarities. So too does the use of the interpreter's own Planner and UpDater to update the target's doxastic world and to estimate the target's course of action once her desire world has been identified. However, as Stich and Nichols point out, there are many elements of their view of mindreading which do not depend on predictor-agent similarities. Reading desire from expression and orientation does not: someone who loathes beer can zero in on Peter's thirst for a beer as easily as a fellow beer-lover. Likewise, the mechanisms which filter and override default attribution do not depend on interpreter-target similarities.
3 Innateness on a Hybrid Model

Interpretive capacities develop very robustly: like language, the capacity to interpret others is a feature of all normal human minds. The pattern of development is similar from child to child, though there is variation in the rate at which the pattern unfolds (Peterson and Siegal 1999). Moreover, it is arguable that the information to which children are exposed is not rich; that is, the information on which interpretation rests is not manifest in the primary social experience to which children have access. Thus, on some views, the unobservability of psychological states makes it impossible for children to learn about the mental causes of behavior. On this view, so-called poverty-of-the-stimulus considerations show that folk psychology cannot be learned. A further and quite striking fact is that there is significant insensitivity to quite marked differences in general learning capacity: normal children and children with moderate retardation pass false belief tests at similar ages. In short: developmental patterns do not vary from individual to individual, even when there are marked differences in learning abilities; moreover, development proceeds in an informationally impoverished environment, and so the crucial principles connecting intentional states to action must be innate. In the light of these facts it is no surprise that interpretation has been assimilated to the language model, and regarded as the result of an innate module. However, despite the prima facie plausibility of modular nativism, I have argued that these facts have an alternative explanation. The robust, stable development of interpretive capacities is the result of environmental scaffolding based on perceptual cues. For in addition to our perceptual responsiveness to facial expression and the like, children live in environments in which mutual interpretation is ubiquitous and made public by language.
Moreover, the development of our capacities to understand other agents is supported by other social tools: for example, novels, plays, and stories provide people with a set of intentional templates (Sterelny 2003b, 2006). There is an important convergence between the anti-nativist arguments I have developed and the Stich–Nichols conception of mindreading. On both our views, the
development of mindreading is stabilized by quasi-perceptual mechanisms (by "shallow" modules). These play a crucial role in desire-reading: that is, in detecting emotional expression, the focus of attention, orientation, and the differences between deliberate and accidental motion. Shallow modules play an important role in estimating the doxastic world of other agents by factoring in differences in perceptual points of view.1 Development is also stabilized by the co-option of cognitive machinery that evolved for other purposes. The UpDater, the Planner, and the Possible World Box are not specializations for mindreading, but their co-option for mindreading stabilizes this developmental trajectory. For central modes of reasoning do not have to be built from scratch to understand other agents. We are indeed perceptually adapted to mindreading: we have perceptual mechanisms tuned to the focus of another's desires and to differences in perceptual points of view. But mindreading reasoning depends more on co-opted systems than on systems built from scratch to interpret others. In view of these convergences, my aim is to parasitize the Stich–Nichols view: to show that if their conception of our mindreading capacities is broadly correct (for nothing in the arguments that follow depends on the fine-grained details of their position), the anti-nativist case is thereby strengthened. I shall discuss three issues. I shall begin with poverty-of-the-stimulus considerations. I shall then discuss the insensitivity of interpretive capacities to differences in general learning ability. Finally, I shall argue that innate adaptations and learned automatized skills are subject to different constraints, and that interpretation has the marks of a system constrained by limits on learning, not limits on adaptation.
4 The Poverty of the Stimulus

The poverty-of-the-stimulus argument for an innate folk psychology has been sketched rather than defended in detail. So, for example, Leslie and Scholl have argued:

As such, a ToM has often been thought to require its owner to have acquired the concept of belief. This raises the obvious developmental question: how do we acquire such a concept, and thereby acquire ToM. The obvious empiricist reply – that we learn it, perhaps from our parents – is suspect, due to the extremely abstract nature of mental states. After all, it's not as if a parent can simply point and say "See that. That's a belief." Children become competent reasoners about mental states, even though they cannot see, hear or feel them. (Scholl and Leslie 1999: 133)
Despite its brevity, this line of thought is widely seen as plausible.2 In considering it, let me begin with the familiar distinction between concept innateness and information (or knowledge) innateness. For this argument is most plausibly taken to suggest that intentional concepts cannot be learned from experience. I shall argue to the contrary that there is no special difficulty in supposing that intentional concepts are learned. On my view, the crucial nativist issue is whether information needed for mindreading is innate. Nothing in this argument depends on the Stich–Nichols picture, but I shall then co-opt
that picture to argue that if their view of the informational resources needed for mindreading is right, those resources can be accumulated through ordinary learning mechanisms. The Stich–Nichols view of mindreading undercuts a nativism about mindreading based on poverty-of-the-stimulus arguments. So let's begin with the idea that intentional concepts are innate.3 If we concentrate on belief, there is a picture of concept acquisition that lends plausibility to this suggestion, a picture that sees concept acquisition as some kind of stimulus generalization from instances.4 Thus experiences of particular pigeons lead to the development of a pigeon prototype, which in turn is the basis of (or is perhaps even identical to) a concept of pigeons. It is indeed very hard to see how a concept of belief could be acquired by any route like this. However, while the whole issue of concepts and their possession is deeply opaque, we should be very cautious in buying an innateness hypothesis on the basis of any such line of thought. For a large class of human concepts cannot be learned by stimulus generalization: consider concepts for such kinds as vehicle, tool, lawyer, father, fungus, and food. Moreover, Fodor is right, I think, to doubt that such concepts as these are definable: so they cannot be learned by specifying necessary and sufficient conditions of their application. Unless we are prepared to embrace the ultra-nativism of Fodor (1975), we should not infer from "not learnt from instances by stimulus generalization" to "innate." To be fair, one might reject a stimulus-generalization view of concept learning and still wonder whether intentional concepts could be learned from experience. For if we focus on belief, it is true that there seems to be a large gap between the activity that an agent can observe and the concept of belief. An agent's beliefs are not distinctively manifest in his or her behavior.
Beliefs contribute causally to many actions, but they do not often have a distinctive, readily learnable behavioral signature. My belief that there is beer in the fridge will take me to the fridge only if I want a beer and only if that desire is not trumped by overriding concerns. This point is familiar from functionalist critiques of behaviorism, but though in general the behavioral expression of an agent's psychological state is modulated by her other psychological states, notice that belief stands at the extreme end of this interaction spectrum. Consider how belief contrasts, say, with disgust or embarrassment. While these states have no strict behavioral definition, they have reliable behavioral manifestations, and so, to a greater or lesser extent, do many other psychological states. For this reason, folk psychology contains an array of iceberg concepts: concepts which name a syndrome that includes both a mental state and its distinctive behavioral manifestation. Desire (in its hot, affective sense), our basic emotions, some bodily sensations, and perhaps some perceptual states (for example, the focus of visual attention) are semi-observable psychological states, and hence are named by iceberg concepts of this kind. With iceberg concepts, the intuitive gap between observable activity and our concepts for the mental causes of that activity is less wide. You can point to the distinctive manifestation of an itch. Moreover, an agent who has mastered the concept of an itch has mastered the concept of an internal cause of action. The same is true of more obviously cognitive mental states. For example, children take time to master how to disguise their desires and to voluntarily delay their gratification. It is typically very easy to read their
current desire off their actions, expression, and perceptual focus. The same is true of men in bars. In an important sense, we can often point at desires. Moreover, if such iceberg concepts can be learned, their acquisition can facilitate the acquisition of concepts for less overt states. For they prime an agent to the possibility of internal causes of action. Children do seem to acquire the ability to reason about desires earlier than they acquire the ability to reason about belief. In the light of this argument, that fact is very suggestive. Iceberg concepts are a ramp for the acquisition of concepts for more cryptic mental states. We lack a good general theory of the nature and acquisition of concepts but, even so, I doubt that there is a special, intractable problem of learning intentional concepts, despite the unobservability of intentional states. So I shall focus on the idea that we have innate information about intentional states and their roles. And here the Stich–Nichols model helps greatly. On the picture they paint, the information that agents need to interpret others is information that could well be acquired through learning. To begin with, the Stich–Nichols model shares the benefit of any model incorporating significant simulationist elements. In comparison to theory-theory views, there is less you need to know to read minds. If Stich and Nichols are right, to understand other agents, we do not have to represent the process by which other agents plan to achieve their goals at all. Moreover, and most importantly, the Stich–Nichols model reinforces a non-nativist conception of the development of mindreading by decomposing the construction of doxastic and desire worlds into subcomponents, each of which can be both acquired and improved more or less independently of the others. Even on their hybrid picture, interpreting others is an information-rich task. But the information needed is of a kind that can be acquired and upgraded piecemeal. 
Consider, for example, default belief attribution. In constructing the target's doxastic world, you load your own beliefs into your Possible World Box, and then modify that belief set, cutting some elements and adding others. In the earlier stages of developing interpretive skills, this revision may be guided by nothing more than sensitivity to different perceptual histories. But at some point agents learn to use linguistic information: people talk about their own beliefs and argue and gossip about the beliefs of others. There are other social cues: our target will be treated with deference on some issues but not others. Thus if our target is treated as an expert on rugby, we will credit her with a rich and accurate set of football beliefs. Action, too, is informative: once we have learned to run the practical syllogism in reverse, an agent's actions tell us a good deal about her beliefs. If we see a distinguished and rather staid professor on his hands and knees with his head under the table, we can infer that he believes he has lost something there. In other circumstances, we might infer that an agent has found something there. Much the same is true of the identification of an agent's goals. The crucial point is that these capacities can all be built one by one: they are not a package deal. Moreover, each component can be improved gradually. For example, we gradually become better at reading subtle social cues to another agent's beliefs and preferences. We fine-tune reverse practical inference by learning to constrain the space of likely motives and beliefs which would lead to the actions we actually see. We learn to be better at disregarding pseudo-information, at ignoring exaggeration and malicious gossip.
Poverty-of-the-stimulus arguments are persuasive when the representations of a domain develop accurately despite very noisy data: hence linguistic versions of the poverty-of-the-stimulus argument have sometimes emphasized the ubiquity of performance errors in children's linguistic experience. They are persuasive when development proceeds despite severe and relevant limitations in the data: hence linguistic versions of the argument have often emphasized the lack of negative data in children's linguistic experience. And they are persuasive when development proceeds despite the fact that the representations are very unobvious given the data, because the concepts needed to describe the data are very different from those needed to predict and explain them. Hence linguistic versions of the argument have often emphasized the abstractness of syntactic principles and their highly indirect relation to the data that confirm them (Cowie 1998). In explaining mindreading, Stich and Nichols posit a mix of "information-rich" and "information-poor" mechanisms. But the information needed by the information-rich mechanisms does not have a noisy, or a limited, or a cryptic relationship to the experience from which the information could be learned.
5 Acquisition and General Learning Abilities

Nativism about mindreading seems to be strikingly confirmed by the result that moderately handicapped children pass false belief tests near enough on schedule, for this suggests that mindreading is independent of general learning capacities (Scholl and Leslie 1999; Leslie 2000a, 2000b). I shall suggest an alternative possibility: the apparent independence of mindreading capacities from general learning capacities is an artifact of the concentration on the early development of mindreading, when shallow quasi-perceptual modules are indeed playing the central role in filtering default belief attribution and in desire attribution. If this is correct, we would expect to see over time, as children age, an increasing disparity between the mindreading skills of normal and handicapped agents. So here is my alternative scenario. Early in their mindreading career, normal children depend on shallow modules to detect the desires of other agents. And even after they can exploit their "Possible World Box" to allow for the difference between their beliefs and those of the interpretive target, their default attribution strategy is simple. They attribute their own beliefs to the target, except those beliefs ruled out by differences in perceptual situation. Thus they can allow for the fact that they (but not the target) saw what was inside the Smarties box, but in other respects they expect the agent to share their doxastic world. One subsequent developmental change is the acquisition of an increasingly sophisticated and increasingly learning-dependent set of skills for assessing the desires of another agent. Agents begin by using the perceptual cues of facial expression and intentional movement (and perhaps default attribution of their own desires). But over time they learn to use language, other social information, and inference from action, and these become increasingly important.
As I discussed above, each of these individual components of skill can itself be incrementally improved. A parallel developmental change is an increasingly sophisticated and increasingly learning-dependent set of skills
for overriding default belief attribution. Here too, language, other social information, and inference from action become increasingly important. So this scenario predicts that as mindreading becomes (i) increasingly dependent on upgraded mechanisms of desire identification and (ii) increasingly dependent on greater sensitivity to the differences between the interpreter's beliefs and the target's beliefs, the mindreading gap between normal and learning-handicapped agents will become ever greater. The apparent insensitivity of interpretive skills to differences in learning capacity is an artifact of choosing a life stage at which these skills are relatively underdeveloped: it is an artifact of taking the false belief test to be too much of a watershed. If mindreading is really language-like in having its cognitive basis in a module, the innate-module view presumably makes the opposite prediction: it predicts that the dissociation between mindreading capacity and general learning capacity is a more permanent feature of the mental life of learning-handicapped agents.
6 Optimality, Adaptation, Learning

The argument so far has co-opted the Stich–Nichols picture of the cognitive architecture of mindreading to argue against nativist views of that ability. I turn now to other elements of their picture: their emphasis on the limits as well as the strengths of our mindreading skills. And (it must be said) the argument becomes much more speculative. We are good but not perfect interpreters of others. Successful coordination typically depends on our being able to anticipate the actions and thoughts of our partners, and we often do succeed in coordinating, even when we must match actions that will take place months in the future. But there are also significant and systematic gaps in our capacity to anticipate others. Some of these gaps are no mystery. For example, we are much less sure of our interpretation of agents from other cultures. For the mechanisms that filter our default belief and preference attribution depend on contingent, local information that we do not have. Our imperfect interpretation in such cases is no surprise on the Stich–Nichols view, but it is no surprise on anyone else's view either. Everyone realizes that particular belief-preference profiles are sensitive to individual history. Other gaps in our capacity are potentially more informative, and Stich and Nichols discuss two such blindspots. First, perception. In some respects we are very sensitive to information about perception. In determining another agent's goals, and in calibrating the differences between their doxastic world and ours, we take into account the focus of perceptual attention and differing points of view. But in other respects we are blind to facts about perception. The discovery of perceptual illusions is surprising precisely because of the existence of such blindspots.
Thus Stich and Nichols discuss the wonderful example of change-blindness: if you show subjects two scenes within which there is an unchanged natural focus of attention in the foreground, many will not notice quite significant background alterations.5 We do not anticipate perceptual illusions, and hence cannot anticipate their doxastic effects. Stich and Nichols discuss a second type of case: context effects on decision making. The famous Milgram experiments were so shocking because they revealed powerful,
unanticipated, and socially dangerous context effects. Many experimental subjects in a social context that included an authority figure were prepared, despite their worries and hesitancies, to inflict what they believed to be dangerous electric shocks at that authority figure's behest. This sensitivity of judgment and action to context has subsequently been demonstrated in many other settings. One of the most entertaining of these used subjects from a theological college and probed the willingness of the subject to stop and help someone who had (apparently) collapsed from illness. A striking result from these experiments was the sensitivity of willingness to help to time pressure: subjects running late to give a lecture (on, of all things, the parable of the Good Samaritan) were much less willing to help than those not so pressured (Darley and Batson 1973). These discoveries are striking because they are so unanticipated: they show not just irrationalities in human reasoning, but irrationalities to which mindreading is blind. We are good at working out what others will think and do, but we are not optimal. This gap between optimal and actual interpretive capacities is potentially informative. For the hypothesis that interpretation is a learned automatized skill (albeit one that piggybacks on perceptual adaptations) and the hypothesis that interpretation is an innately structured adaptation make different predictions about the gap between actual and optimal interpretation. If our interpretive capacities are based on a set of innate adaptations, we would expect to see two kinds of systematic failure. First, we would not expect to see accurate prediction in biologically irrelevant contexts, unless prediction in those contexts is a by-product of prediction in relevant ones. Second, information-rich but innate responses to the environment are vulnerable to environmental change.
Suppose, for example, that determining what other agents want is an informationally demanding task, and that we succeed by having an innate database about the desires people have and about how those desires manifest themselves in action and expression. We recognize that longing look because we have its template wired into us for comparison. A system of this kind is vulnerable to environmental change. It will become more error-prone if there are shifts in what people want or in how their desires are manifest in action. Wiring in crucial information is good design if the environment is constant, for learning can be costly and vulnerable to the accidents of the developmental environment. But it is good design only if the environment is constant.6 So innately based systems will be far from optimal in a changing world. On the other hand, innate systems escape an epistemic constraint on individual learning: selection can see a pattern in noisy data because it sums over the life experience of many individuals. Evolution works over the timespan of a whole lineage, not the experience of an individual organism. Suppose that when agents lie, they have a slight but real tendency to stutter or slur at the beginning of their fabrication. Even an agent with inferentially optimal techniques for learning about her world may have no way of recognizing this signal from her own experience. There will be honest stutters, and lies with perfect diction. She herself may experience more of the former than the latter. Even if that is not the case, if the covariation is not marked, she will have no way of determining whether her own experiences are evidence of a systematic pattern. But if there is in the population a difference in levels of trust in response to such forms of language, selection can detect this small but significant difference. Indeed, the defenders of evolutionary psychology
have seized on this idea as part of their general case for the adaptive superiority of special-purpose, innately structured systems over general-purpose learning: "many relationships necessary to the successful regulation of action cannot be observed by any individual during his or her lifetime . . . Domain general architectures are limited to knowing what can be validly derived by general purpose processes from perceptual information" (Cosmides and Tooby 1995: 92). In short, if we assess other agents' doxastic and desire worlds using largely innate though information-rich mechanisms, we would expect environmental change to degrade the accuracy of these assessments, but we would expect to be alert to stable but noisy signals. If mindreading depends on learned, automatized skills, we make the opposite prediction. Environmental change should not degrade our mindreading capacities, but we might well fail to discover the value of real but noisy signals, especially as our general learning mechanisms are far from perfect in using noisy data. Which of these predictions is confirmed? The crucial case here seems to be the effects of social context. Prima facie, Milgram effects, bystander effects, and the like are of profound biological significance. For decision-making in social contexts is ubiquitous. Why then are our preference-detection mechanisms relatively insensitive to this information? This is especially puzzling because it is the strength of these effects, not their existence, to which we are blind. Thus, to take one of the more entertaining studies Stich and Nichols discuss (pp. 136–7), it was no surprise to the experimental subjects that some students were prepared to defend in public an examination system they detested to please a pretty girl. It was the prevalence of this effect that the subjects failed to predict. Prima facie, this is a feature of the human mind onto which an innate, special-purpose adaptation should lock.
We need only suppose that interpreting agents have varied somewhat in their cynicism about the robust attachment of others to principle. Given such variation, those who expected conformist effects more than their contemporaries did would have reaped the rewards of their superior predictive talents, and the population would have been gradually tuned to the right degree of cynicism. Of course, not too much can be made of this suggestion, even if the premises are right. Adaptations can fail to be optimal for any number of reasons. But if our knowledge of the operating principles of other minds is innate, our blindness to the importance of social context is somewhat surprising. If, however, we have to learn how others think and act, it is not so surprising. Even if our general learning capacities were faultless, the strength of context effects would be quite hard to learn from ordinary social interaction. To find out about this feature of the human mind, an interpreter would have to have good information about an agent's preferences7 prior to the social context in question; good information about how those preferences change or are overridden in that social context; and the ability to calibrate this effect quantitatively. That is, an interpreter would need this information about enough agents and situations to detect both the general trend and its strength. The task is objectively tough, and we are far from perfect reasoners about statistical information. It took good experimental design to reveal these effects on our decision-making propensities. So it is no surprise that we did not discover them by reasoning imperfectly about natural experiments. So my hypothesis is that insensitivity to the effects of social context is the result of a noisy learning environment. We would have to learn about these contexts from imperfect
data, and so our relative blindness to these effects can be naturally explained by appeal to constraints on learning. But equally (as Peter Godfrey-Smith has pointed out to me) they may be the effects of change. Human social worlds have been transformed in the last 10,000 years or so, and perhaps context effects are a consequence of these changes. So an alternative possibility is that our innate interpretive capacities have not caught up with the changed world in which they now operate. I think there are empirical tests (albeit difficult ones) which could discriminate between these two explanations of context blindness. We should test for context effects in environments more typical of those of our ancestors: that is, in social worlds which are smaller in scale, more egalitarian, culturally homogeneous, and with little movement in and out. These social worlds are far from anonymous: most agents know one another well. If context effects are due to rapid environmental change, these effects should be less marked in such environments. The shaman's mask should be less effective than the doctor's lab coat in inducing Milgram-style conformity to authority. Likewise, we should test for the effects of informational enrichment. We should provide subjects with plenty of opportunities to learn about context effects and see whether this information becomes incorporated into automatized, routine social judgment. If our blindness to context effects is due to rapid environmental change rather than a noisy learning environment, informational scaffolding will not help much. On the other hand, if scaffolded learning explains the robust and early development of our typical interpretive skills, scaffolding the effects of context should help greatly. I have my hunch about how such tests would turn out, but a hunch is not evidence.
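The epistemic asymmetry invoked earlier in this section, that selection can detect a weak signal by summing over the experience of many individuals while any single learner cannot, can be illustrated with a toy simulation. This is my own sketch, not anything from the chapter; the stutter probabilities and sample sizes are arbitrary assumptions:

```python
import random

# Illustrative simulation: a weak but real signal (liars stutter slightly
# more often than honest speakers) is hard to see in one agent's small
# sample, but obvious in the pooled "sample" available to selection.

random.seed(0)
P_STUTTER_IF_LYING, P_STUTTER_IF_HONEST = 0.55, 0.50  # assumed, arbitrary

def stutters_observed(n_utterances, p_stutter):
    """Count stutters in n utterances, each stuttered with probability p."""
    return sum(random.random() < p_stutter for _ in range(n_utterances))

# One agent's lifetime sample: 40 lies and 40 honest utterances.
# Far too small to separate 0.55 from 0.50 reliably.
one_agent = (stutters_observed(40, P_STUTTER_IF_LYING),
             stutters_observed(40, P_STUTTER_IF_HONEST))

# "Selection's" sample: the pooled experience of 5,000 such agents.
pooled_lying = sum(stutters_observed(40, P_STUTTER_IF_LYING) for _ in range(5000))
pooled_honest = sum(stutters_observed(40, P_STUTTER_IF_HONEST) for _ in range(5000))

print(pooled_lying > pooled_honest)  # the pooled data reveal the signal
```

The point of the toy numbers: with 200,000 pooled observations per condition the expected gap swamps the sampling noise, while one agent's 40-utterance samples overlap heavily, which is just the lineage-versus-individual contrast drawn in the text.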
7 Reprise

Let me briefly recapitulate the argument of this chapter. Our ability to mindread is a cognitively sophisticated capacity that nonetheless develops robustly and early in all normal agents. In those respects it is like language, and thus it is no surprise that it too has been taken to depend on rich, innate information about its domain. I have argued that we can co-opt the Stich–Nichols picture of mindreading to undermine this nativist suggestion. For it undermines poverty-of-the-stimulus arguments for mindreading by showing that the informational resources needed for mindreading decompose into separable elements, each of which can be acquired and improved independently of the others. This acquisition is promoted by genuine perceptual adaptations for mindreading: we are perceptually tuned to such salient features of other agents as their direction of gaze, their expression, and their orientation. These features of mindreading (if the Stich–Nichols picture is right) help explain a second puzzling fact: mindreading skills seem to be independent of general learning abilities. I have suggested that this appearance may be deceptive. It is generated by the critical role of perceptual adaptations – which are indeed independent of general learning abilities – in the first stages of the development of mindreading. If this suggestion is right, learning plays a central role in upgrading the default strategies for belief and desire attribution, and so in older children and adolescents mindreading skills and learning ability should vary together. Finally, I have argued that the limits on our mindreading capacities are more plausibly explained by the limits on individual learning than by the limits on adaptive evolution.8,9
Notes

1 A system whose importance I have previously overlooked. It is probably an ancient one, since the great apes are sensitive to these facts (Call 2001).
2 For example, in the discussion of this argument at the “Culture and the Innate Mind” conference in Sheffield, 2003, Botterill, Segal and Carruthers all defended it.
3 For the sake of argument, I shall proceed as if there is a clear distinction between concept nativism and informational nativism, but in fact I doubt that such a distinction can be drawn. First, it is important to note that those who accept nativist ideas about concept possession nonetheless accept that experience is necessary for concept acquisition. Thus innate concepts have to be “switched on” by experience. The distinction between a learned concept and an innate one is therefore typically made by appealing to a distinction between types of experiential cause; for example, a distinction between an informationally sensitive learning process and mere triggering by experience (Fodor 1981; Cowie 1998). Yet the experience needed to trigger a concept is experience of instances of that concept. If the concept TIGER is triggered by experience, it is triggered by tigers, not buttercups. Given that concepts, if triggered, are triggered by their instances, the learning–triggering distinction becomes extremely problematic if we further suppose (as Fodor and others do) that the possession of a concept is atomistic. That is, it does not involve propositional content of any kind (Fodor 1998). On this atomist view, having the TIGER-concept does not involve having a theory of tigers. Likewise, to have the BELIEF-concept is to have a cognitive mechanism that responds to beliefs; it is not constituted by knowing stuff about belief. If any version of an atomistic view of concept possession turns out to be right, the distinction between learning and triggering looks set to collapse. For concept acquisition, if triggered, is triggered by instances.
Yet unless concept possession has an informational component, in what way would such a stimulus be information-poor? Thus, given that concepts are triggered by their instances, conceptual atomism does not seem to support a distinction between triggered and learned concepts. We can draw that distinction if we drop atomism, but in doing so we erode the distinction between concept nativism and informational nativism. For example, on descriptivist or theoretical-role views of concept possession, the possession of a concept is tied to an appropriate informational resource. That enables us to draw a triggering/learning distinction. If, for example, possession of the TIGER-concept is constituted by having a certain set of true beliefs about tigers, we can indeed ask whether the triggering experiences of tigers were informationally rich enough for the agent to have acquired from them that information. Along similar lines, if the possession of intentional concepts is tied to information about intentional states and their role in cognition, we can indeed ask whether there was enough accessible information in the signal for the agent to be able to learn enough about (say) belief from that signal. But then concept nativism fades into nativism about folk psychological principles. For example, if an agent can only have the concept of belief if she realizes that beliefs are truth-apt, that they are evidentially tied to perception and that they combine with preferences in practical inference, then concept nativism becomes a version of nativism about folk psychological principles. For the agent would have to represent those principles in order to acquire those concepts.
4 Thus Scott Atran appealed to this model of concept acquisition in developing a parallel argument for the innateness of the species concept: see Atran 1998.
5 There is a wonderful video that illustrates a similar effect. You show subjects a scratch basketball game and assign them quite a demanding task: count the number of times the ball is passed. In that context, many subjects (and I was one) will not notice such a dramatic event as a man in a gorilla suit walking through the game.
6 Of course, it is true that innate systems can generate adaptive responses to variation in the environment by wiring in conditional responses. However, there are limits to such wired-in conditional responses. The range of variation and its significance must itself be constant.
7 Or beliefs, for there are context effects on judgment, too.
8 Thanks to Peter Godfrey-Smith for his particularly incisive comments on an earlier version of this paper.
9 I would like to thank the editors for inviting me to contribute to this collection for, amongst much else, it gives me the opportunity to place on the public record my gratitude to Steve, who was immensely generous to me both personally and academically very early in my professional career, when I needed it most. In being a recipient of Steve’s generosity, I am far from alone – he has consistently been an extraordinarily generous supporter both of his own students and of others early in their careers. So, Steve, thanks.
References

Atran, S. (1998) “Folk Biology and the Anthropology of Science: Cognitive Universals and Cultural Particulars.” Behavioral and Brain Sciences 21: 547–609.
Call, J. (2001) “Chimpanzee Social Cognition.” Trends in Cognitive Sciences 5(9): 388–93.
Cosmides, L. and Tooby, J. (1995) “Origins of Domain Specificity: The Evolution of Functional Organization,” in L. A. Hirschfeld and S. A. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press.
Cowie, F. (1998) What’s Within? Nativism Reconsidered. Oxford: Oxford University Press.
Darley, J. M. and Batson, C. D. (1973) “From Jerusalem to Jericho: A Study of Situational and Dispositional Variables in Helping Behavior.” Journal of Personality and Social Psychology 27: 100–8.
Dennett, D. C. (1978) Brainstorms. Montgomery: Bradford Books.
Fodor, J. A. (1975) The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. A. (1981) Representations. Cambridge, Mass.: MIT Press.
Fodor, J. A. (1987) Psychosemantics. Cambridge, Mass.: MIT Press.
Fodor, J. A. (1998) Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
Godfrey-Smith, P. (2002) “On the Evolution of Representational and Interpretive Capacities.” The Monist 85(1): 50–69.
Godfrey-Smith, P. (2004) “On Folk Psychology and Mental Representation,” in H. Clapin, P. Staines and P. Slezak (eds.), Representation in Mind: New Approaches to Mental Representation. New York: Elsevier.
Godfrey-Smith, P. (2005) “Untangling the Evolution of Mental Representation,” in A. Zilhao (ed.), Cognition, Evolution, and Rationality: A Cognitive Science for the XXIst Century. London: Routledge.
Leslie, A. (2000a) “How to Acquire a Representational Theory of Mind,” in D. Sperber (ed.), Metarepresentations: A Multidisciplinary Perspective. Oxford: Oxford University Press: 197–224.
Leslie, A. (2000b) “Theory of Mind as a Mechanism of Selective Attention,” in M. Gazzaniga (ed.), The New Cognitive Neurosciences. Cambridge, Mass.: MIT Press: 1235–47.
Lycan, W. G. (1990) “The Continuity of Levels of Nature,” in W. G. Lycan (ed.), Mind and Cognition. Oxford: Blackwell: 77–96.
Nichols, S. and Stich, S. (2004) Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Clarendon Press.
Peterson, C. C. and Siegal, M. (1999) “Insights into Theory of Mind from Deafness and Autism.” Mind and Language 15(1): 77–99.
Ravenscroft, I. (1999) “Predictive Failure.” Philosophical Papers 28(3): 143–68.
Scholl, B. and Leslie, A. (1999) “Modularity, Development and ‘Theory of Mind.’” Mind and Language 14(1): 131–53.
Sterelny, K. (2003a) “Charting Control-Space: Comments on Susan Hurley’s ‘Animal Action in the Space of Reasons.’” Mind and Language 18(3): 257–66.
Sterelny, K. (2003b) Thought in a Hostile World. New York: Blackwell.
Sterelny, K. (2006) “Cognitive Load and Human Decision, or, Three Ways of Rolling the Rock up Hill,” in S. Stich, S. Laurence and P. Carruthers (eds.), The Innate Mind: Vol. 2: Culture and Cognition. Cambridge: Cambridge University Press.
Stich, S. and Nichols, S. (1992) “Folk Psychology: Simulation or Tacit Theory?” Mind and Language 7(1): 35–71.
Stich, S. and Nichols, S. (1995) “Second Thoughts on Simulation,” in M. Davies and T. Stone (eds.), Mental Simulation: Evaluations and Applications. Oxford: Blackwell: 87–108.
10 Against Moral Nativism

JESSE J. PRINZ
In the 1960s and 1970s, philosophical interest in innateness was rekindled by the rise of modern linguistics. That resurgence was facilitated by Stephen Stich’s (1975) anthology Innate Ideas, which takes readers on a tour from Plato, Locke, and Leibniz, to Chomsky, Katz, and Quine. That volume also marked a new philosophical interest in interdisciplinary research. One year after publication, the Sloan Foundation reified interdisciplinary approaches to the mind by offering institutional support and selecting the label “cognitive science.” Twenty years later, cognitive science is going strong, and debates about nativism have broadened to include many different mental capacities. Stich has been at the cutting edge of these trends; he has been an important contributor, a crusader for the interdisciplinary approach, and a mentor to some of the finest new voices in the field. His most recent contributions to the innateness debate, in collaboration with Chandra Sripada, focus not on language, but on morality. That will be my topic. Sripada and Stich (2006) argue against strong forms of moral nativism, but they do think moral cognition has an innate basis. I will argue that moral nativism should be abandoned altogether, but much of what I say is in harmony with their views. I will address Sripada and Stich’s work in my concluding section. My goal here is not to undermine their arguments, but to offer a parallel discussion. I am a fellow traveler in a terrain that Sripada and Stich have helped to chart out. My debt to Stich will be apparent on every page; he has been instrumental not only in illuminating this specific debate, but in shifting philosophy to an unprecedented form of methodological naturalism. Stich’s naturalism raises the bar by demanding that philosophers acquaint themselves with psychology, biology, and anthropology. Philosophers under his influence cannot indulge in a priori reflection without blushing.
1 Born to Be Good?

Moral norms are found in almost every recorded human society. Like language, religion, and art, morality seems to be a human universal. Of these universals, morality is arguably the only one with precursors in nonhuman animals. Many species communicate, but
they do not have richly inflected and recursive languages. Apes mourn their dead, but they do not have funeral rites. Bower birds make beautiful structures to attract their mates, but there is no reason to think this is a precursor of human creative expression. But animals do console each other, empathize, and reciprocate. It has even been suggested that some primates have a nascent sense of fairness. It seems, then, that we have good evidence for the claim that morality is an evolved capacity. Animals may not have moral systems in exactly the same sense that we do, but the resemblance is intriguing. It is tempting to conclude that human morality is an evolutionary successor to capacities found in other species. On this picture, morality is innate. We are born to be good. The concept of innateness is closely related to domain specificity. To say that a capacity is innate is, in part, to say that we have biological machinery dedicated to the attainment of that capacity. Friends of innateness claims often emphasize universals. If morality is part of the bioprogram, and the evolved aspects of human psychology are generally monomorphic, then there must be moral universals; i.e., there must be aspects of our moral systems that are found among all normally developing members of our species. Defenders of moral universals might adopt the minimal view that we have a general capacity for acquiring moral norms, while denying that the content of those norms is preordained. Or they might adopt the modest proposal that there are innate moral rules, while admitting that epigenetic factors exert a non-trivial influence on how these rules operate in different societies. Defenders of immodest moral nativism would say that innate moral norms have specific contents: the actual norms that govern our lives are innately fixed, and culture exerts little influence. In this chapter, I will argue against moral nativism. I will proceed in stages.
First I will argue against immodest moral nativism, then against the modest view, and finally against the view that we have only a minimal innate moral competence. Morality, I will claim, is a by-product of capacities that were evolved for other purposes. Morality is a spandrel. There is no mechanism dedicated to the acquisition of moral norms, and the same antinativist conclusion may even be true for nonmoral norms. The fact that our lives are thoroughly permeated by norms may be an accident.
2 Are There Moral Universals?

Defenders of immodest moral nativism would need to establish two things. First, they would need to identify some moral norms that can be found in all (or almost all) cultures. I add the parenthetical hedge because certain highly aberrant cultural conditions might prevent a biologically driven capacity from developing properly. For simplicity, however, I will say that immodest nativists believe in “universal” moral norms. Second, immodest nativists would need to show that innate, domain-specific mechanisms are the best explanation of how those norms are acquired. The discovery of pancultural norms would not prove immodest moral nativism on its own. There can be cultural universals that are not innate. In every culture, people believe that the sun is bright, but that probably isn’t an innate belief. It is a belief that everyone acquires using general-purpose perceptual capacities. It just so happens that every sighted person observes the sun and is capable of
discerning that the sun is bright. In this section, I will postpone the question of domain-specific mechanisms, and focus on universals. Are some moral norms found universally? To fully address this question, one would have to take an exhaustive list of the moral values in a randomly chosen culture and then investigate whether any of those values can be found in all other cultures. That would be a massive undertaking. To keep things manageable, I will restrict this discussion to three illustrative norms that have been widely observed and widely investigated in both human societies and other species. These norms constitute some of our best current candidates for moral universals. The first norm that I want to consider can be stated as the injunction: “Don’t harm innocent people.” Barring sadists and psychopaths, this looks like a norm few of us would reject. We condemn those who harm others, and we feel terribly guilty when we cause harm, even if inadvertently. We have numerous laws against stealing, torturing, killing, and limiting the freedoms of others. Violations of these laws carry serious legal consequences and moral outrage from the community. If unsolicited harm were tolerated, societies would break down. It is hard to imagine that any culture could survive without a norm against harming the innocent. Yet, a moment’s reflection can show that this norm is regularly violated. The Aztecs, for example, purportedly captured innocent people on a regular basis, and sacrificed them in public ceremonies that ended in lavish cannibal feasts. Harris (1985) argues that this has been a common practice historically.
He cites many examples, including this sixteenth-century missionary’s account of how a sacrificed slave was treated by the Tupinamba of Brazil:

They soon tore [the slave’s body] into pieces with great rejoicing, especially the women, who went around singing and dancing, some [of the women] pierced the cut off members [of the body] with sharp sticks, others smeared their hands with [the victim’s] fat and went around smearing [the fat on] the faces and mouths of the others, and it was such that they gathered [the victim’s] blood in the hands and licked it . . . (p. 209)
Murderous acts are often commonplace without the cannibalistic element. The Yanomamö of the Amazon basin go on raids of neighboring villages and kill the first man they encounter (Chagnon 1968). The Ilongot of Luzon in the Philippines practiced headhunting into the middle of the twentieth century. Rosaldo (1980) reports that they used to decapitate strangers in order to “lighten their hearts” after a loved one died. The Romans hosted gladiatorial tournaments for half a millennium, and thousands of ordinary citizens would pile into the Colosseum to watch people being torn limb from limb. Slavery has been commonplace throughout human history, and countless societies have engaged in violent conquests to obtain resources or expand. Notice that, in all these cases (and examples can be multiplied ad nauseam), the harms were regarded as morally acceptable by many people or even morally good. In response, one can only concede that there is no general moral stricture against harming innocent people. There may, however, be a workable revision. Most of the time, when a society tolerates harming innocent people, those people are members of a different social group.
So our injunction needs to be revised: “Don’t harm innocent members of the ingroup.” This is an improvement, but there are still apparent counterexamples. In ritual contexts, harm is often tolerated. Scarification, piercing, fasting, adult circumcision, dangerous drugs, enemas, flagellation, and immolation are just a few examples. A number of Plains Indian groups in North America practice the O-Kee-Pa, in which participants have their bodies suspended by hooks. One might say that these cases are not counterexamples, but principled exceptions. Ritual contexts trump moral rules and, indeed, the emotional impact of these rituals is heightened by the fact that harm is considered immoral in other contexts. As with gang initiations, one can prove one’s strength and loyalty by enduring serious harm. But rituals demonstrate that strictures against harm are culturally variable. In our culture, religions that engage in harmful rituals are abhorred. In any case, we do not need to look to rituals to find examples of tolerated harm within a social group. Consider the treatment of women. The rape and subjugation of women have been accepted social practices in many societies. One might respond by arguing that women are not regarded as members of the ingroup by those engaged in abusive practices. Societies that tolerate violence against women regard women as inferior and, hence, as members of a different group than the men who perpetrate such violence. This reply underscores the pliability of “ingroup.” The ingroup might be defined by geographic or cultural boundaries, on the one hand, or by boundaries such as sex, race, and social class. Strictures against harming members of the ingroup will vary significantly depending on the local requirements for group membership. Indeed, social psychologists have long known that people can form ingroups extremely easily. For example, Sherif et al.
(1954) randomly divided a group of boys into two camps, and within a few days the members of each camp started socially ostracizing the members of the other camp, insulting them, fighting with them, and stealing from them. This raises a danger of trivialization. If we define “ingroup” in terms of geography or culture, the stricture against harming members of the ingroup is substantive. But suppose we define an “ingroup” as any group whose members an individual regards as fully worthy of moral consideration. Now the stricture is trivial. It amounts to the injunction: Don’t harm those who you regard as people you oughtn’t harm. Still, one might insist, this counts as a moral universal. In every society, each person affiliates with other people and considers it wrong to harm those other people. Perhaps the universal stricture against harm amounts to the simple norm that we should avoid harming some people some of the time. Even this norm may have counterexamples. Consider the Ik of Uganda. In a controversial study, Turnbull (1972) argues that the Ik would regularly steal from each other and laugh at each other’s suffering. Turnbull reports that a member of the group would pry open the mouth of an old man to get his food. In another case he says, “Men would watch a child with eager anticipation as it crawled toward the fire, then burst into gay and happy laughter as it plunged a skinny hand into the coals” (112). Perhaps the Ik are not a genuine counterexample. Perhaps they lost their natural tendencies to avoid harm under the pressure of economic hardship. Or perhaps Turnbull’s account of the Ik is uncharitable. I am willing to grant that. I suspect that there is a universal stricture against harming some people some of the time. The case of the Ik raises the possibility
that adverse conditions can promote vicious self-interest, which is hardly surprising. The Ik can be regarded as a limiting case. If the formation of ingroups is a flexible process then, under extreme conditions, the ingroup can reduce to a single member: me. Our first candidate for a moral universal – the stricture against harm – turns out to be highly flexible because it applies only to select individuals and the selection process is quite unconstrained. On its own, “Don’t harm members of the ingroup” is profoundly unhelpful. One needs to know the conditions of ingroup membership (as well as the culturally specific conditions under which ingroup members can be harmed). It would be better to describe this stricture against harm as a norm schema. It cannot be used as a guide to action until other information is filled in, and that additional information is open-ended. Let me move on to a second example. It is often observed that many mammalian species are hierarchically organized. This generalization includes Homo sapiens. In human collectives, there are usually differences in social station or rank. Higher ranking individuals have greater authority than lower ranking individuals and usually greater access to resources. In our own society, we try to downplay rank. We emphasize the middle class, fight poverty, and celebrate upward mobility. But these very values reveal the existence of social stratification. Upward mobility is a euphemism for rank climbing. Moreover, there is evidence that members of our society are highly obedient to authority. In his infamous experiment at Yale, Milgram (1974) demonstrated that randomly chosen volunteers are willing to inflict (what they believed to be) highly dangerous electric shocks on complete strangers.
All subjects continued to increase the voltage despite shouts from the victim (who was actually a stooge in the experiment), and 65 percent administered maximum electricity, well after the victim had entirely stopped responding. The subjects administered these shocks simply because the experimenter, clad in a lab coat, asked them to. When the experiment was conducted by an experimenter who looked less imposing than Milgram or when it was performed at a less prestigious institution, there was a 15 percent drop in the number of subjects willing to administer maximum voltage. Social hierarchies are underwritten by moral norms. We respect authorities and we subordinate ourselves to them. We feel contempt for those who are rude to parents, teachers, and community leaders. We also condemn authority figures who abuse power or don’t deserve it. We implicitly regard the social order as natural, and we disapprove of those who do not take their rightful place. In societies that mark status with title or terms of address, failure to use the right words in speaking to a member of higher rank can lead to embarrassment. In traditional societies, violations of hierarchy norms are taken much more seriously. Throughout the world, people literally bow their heads in the presence of authorities – a gesture that exaggerates and symbolically re-enacts the natural expression of submission and shame. In sum, social rank hierarchies seem to be a cultural universal, and they seem to be sustained through moral attitudes. Perhaps there is a moral norm that says, “Respect and obey authorities.” The first reaction one might have to this suggestion is that not all authorities are worthy of respect. Should we respect and obey evil dictators, incompetent teachers, and abusive parents? The injunction would be better phrased: “Respect
and obey legitimate authorities.” As with the injunction against harm, there is some risk of trivialization. Which authorities are legitimate? Those that deserve our obedience and respect. So the injunction becomes: “Respect and obey those whom you respect and obey.” But, as above, the trivialization does not undermine the norm. It remains a substantive claim that we all respect and obey some people some of the time. If that fact is underwritten by moral norms, then we have a moral universal. But notice that the injunction to “Respect and obey some people some of the time” is schematic in several ways. Most obviously, there is the question of whom to respect. Is authority determined by age, gender, skin color, strength, wisdom, charisma, looks, or family line? These are common options, but others are imaginable too. Twins are considered closer to God in some African cultures (such as the Nuer and the Kedjom), and the ancient Egyptians used to persecute redheads. The question of who has authority in a given culture depends on who had the power to claim authority in the past, who has attributes that accord with the local cosmology and ideology, and many other factors. The injunction to respect and obey authorities is also schematic in another way. There can be considerable variation in what counts as obedience and respect. Bowing, mentioned above, is one popular display of obedience. Others include foot kissing, saluting, terms of address, and class-specific wardrobes. These perceptible signs of deference are but one dimension of variation. A more important dimension concerns the degree of authority granted to those of high rank. Many pre-state societies, including bands and some tribes, are said to be egalitarian. This does not mean they have no social hierarchies. Often certain individuals (e.g., “the elders”) are given special respect. But those individuals do not wield any concrete power. They do not have resources to redistribute or police to enforce rules.
Their power consists in the fact that others consult them for advice and treat them deferentially. More “advanced” societies have headmen or chiefs who have greater access to resources. Statehood begins when there is enough infrastructure for these authority figures to enforce control over others and collect taxes or tributes. Within states, there is considerable differentiation in what people do for a living. With that differentiation, it is easier to form social classes because some jobs are more desirable, require more training, and produce more wealth. True social stratification begins with the transition from simple societies to states. Not all states are equally stratified. I alluded to upward mobility in our own society. That is a morally cherished ideal, even if it is not a reality. In other advanced societies, the ideal is quite different. In India, for example, individuals born into lower castes are often still expected to stay there. As in our society, these lower classes are poor, limited to certain kinds of jobs, and often identifiable by race. The difference is that, until quite recently, public ideology has reinforced this high degree of stratification in India. One factor driving stratification in India is religion. Hinduism reinforces class differences, and believers think social advancement comes through reincarnation, not achievement. The racial differences between achuta (untouchables) and Brahman are easy to see, and they may be historical in origin. There is some evidence that the indigenous people of India were invaded by paler Indo-Europeans during the last millennium, and the vanquished have been subjugated ever since.
Let me note, finally, that there are cultural differences in obedience. Milgram’s study, mentioned above, was carried out in a number of different cultures. In all of them, subjects were alarmingly willing to administer dangerous electric shocks to complete strangers. There were, however, differences. In the US sample, 65 percent of the participants turned the shock dial to maximum voltage (450 volts, marked XXX). In Germany, the number of maximally obedient subjects rose to 85 percent (Mantell 1971) and, in Australia, the number dropped to 40 percent (Kilham and Mann 1974). It is reasonable to conclude that there is an interaction between obedience and culture. Some cultures promote obedience, and some have a more overt skepticism about authority. Any norm enjoining us to respect and obey authorities must be filtered through this cultural veil. The degree and nature of our obedience can vary. The authority norm is schematic. People generally obey and respect authorities, but this command is useless without knowing who deserves authority, how respect is shown, and the degree of obedience. There is open-ended variation in all these factors, and considerable variation in the extent and rigidity of hierarchical stratification. I will briefly consider one more candidate for a moral universal: the prohibition against incest. Incest is avoided in many species, and inbreeding is believed to carry biological risks. Anthropologists tell us that the majority of human societies – perhaps all of them – have incest taboos. The injunction, “Don’t engage in incestuous sexual relations” seems to be a human universal. Or is it? As with the other two norms I have considered, this one admits of striking variation. The first question to address is: Which sexual relations are incestuous? In particular, one might wonder, which relatives are off-limits? In contemporary Western societies, sex and marriage between first cousins is often considered taboo.
Indeed, in the eighth century, the medieval Church banned marriage between cousins separated by seven degrees (you couldn’t marry descendants of your great-great-great-great-great-grandparents – not that you would have any way of figuring out who they are). Jack Goody (1983) speculates that this prohibition was designed to prevent consolidation of wealth within a single family. (He also argues the Church moralized monogamy to serve the same end!) Now the Catholic Church has retreated to the first cousin rule, but this is not universal. In some cultures, first-cousin marriage is encouraged. It is standard practice in many parts of Asia and Africa. For example, Pederson found that about 30 percent of married Palestinians were married to first cousins, and Hussien found that over 60 percent of Nubians were married to first cousins or closer relatives. In the West, first-cousin marriage has not been unheard of; Albert Einstein and Charles Darwin both married first cousins. There also appears to be very little health risk in cousin marriage (Bennett et al. 2002). What about the immediate family? There is at least one society in the historical record that seems to have encouraged both parent–child and brother–sister incest: the Zoroastrians of ancient Iran (Slotkin 1947; Scheidel 1996). Here is a representative ninth-century text: pleased is he who has a child of his child, even when it is from some one of a different race and different country. That, too, has then become much delight which is expedient, that
174
jesse j. prinz
pleasure, sweetness, and joy which are owing to a son that a man begets from a daughter of his own, who is also a brother of that same mother; and he who is born of a son and mother is also a brother of that same father; this is a way of much pleasure, which is a blessing of the joy . . . the family is more perfect; its nature is without vexation and gathering affection. (Quoted by Slotkin 1947: 616)
Zoroastrians may be an exception in their overt endorsement of all forms of incest, but other groups seem to condone some forms some of the time. Let’s begin with father–daughter incest. Among the Thonga of South Africa, fathers are permitted to have sexual relations with their daughters before a hippopotamus hunt (Junod 1962), and father–daughter marriages have been documented among royalty in the ancient world. Such cases do not prove that father–daughter incest is accepted in ordinary conditions, but they suggest that there may be no biological mechanism making us revile such relations naturally. Further evidence comes from the incidence of sexual abuse. In America, father–daughter incest is distressingly common; 16,000 incidents are reported annually, and many more cases may go unreported (Finkelhor 1983). Mother–son incest is probably less common than father–daughter incest, but it occurs. In common chimpanzees, there is better evidence for mother–son incest avoidance than for any other kind, but the evidence comes from the fact that male chimps attempt to have sex with their mothers, who less than politely refuse (Goodall 1986: 466f). Freud would have a field day. In bonobos, a few cases of successful mother–son copulations have been reported (de Waal and Lanting 1997). These copulations are rare, but that is not necessarily due to an incest taboo (Leavitt 1990). Mothers and sons are also different in age and, typically, different in rank. Statistics about the frequency of mother–son incest are meaningful only when compared to statistics about sex between female–male pairs with comparable differences in age and rank. In our own species, there are occasional reports of mother–son incest. In Japan, news media raised alarm when they spread rumors that some mothers were sexually indulging their sons in order to help them focus on their all-important exams, rather than the vicissitudes of romance (Allison 2000). 
There are, in addition, many societies in which mothers soothe their young sons by stroking their genitals (Broude 1995). This latter practice probably isn’t viewed in an entirely sexual way, but it casts doubt on the suggestion that there is a deeply rooted biological aversion to sexual contact between mothers and sons. Turn, finally, to brother–sister incest. In chimps, siblings rarely copulate because adolescent males (or females, in the case of bonobos) leave the natal group. This pattern of group migration is often interpreted as evidence for an innate incest avoidance mechanism, but it is possible that it is an innate mechanism to regulate population size by dispersal or a mechanism to promote peace between neighboring groups (Leavitt 1990). What about human beings? Among royalty, sibling incest has been well documented in Egypt, Rome, Hawaii, and South America. But sibling incest isn’t restricted to royalty. In Graeco-Roman Egypt, census records demonstrate that more than 30 percent of marriages were between siblings in some urban areas. This has puzzled historians for some time because there is no history of nonroyal sibling incest in Egypt prior to this period or in Greece and Rome. Shaw (1992) offers a plausible explanation. The extant
census records are from regions heavily populated by descendants of Greek immigrants, who arrived after Alexander the Great conquered Egypt. Those immigrants may have been discouraged from marrying indigenous people because the Greek leaders (the Ptolemies) would not have wanted Greek citizens to become overly sympathetic to the vanquished Egyptians. Since the initial immigrant populations were small, discouraging outgroup marriage would have led to a dearth of marriageable partners. Lifting prohibitions on incest was a natural solution. But why, Scheidel (1996) asks, did these Greeks end up marrying their siblings instead of cousins and kin who would have been living close by? (Greeks lived near their extended families.) One possibility is prestige transmission. The Ptolemies, like many other Egyptian pharaohs, married their siblings. When incest prohibitions were lifted, it may have become a fad to emulate the leaders. Variation in incest norms also shows up in how societies punish guilty parties. In a large-scale review, Thornhill (1991) finds considerable differences in severity of punishment. For example, the Trumai of Brazil merely frown upon incest, whereas the Incas of Peru gouged out the eyes of a man who committed incest, and quartered him (p. 253). Interestingly, severity of punishment and the range of relations that are regarded as incestuous are both highly correlated with social stratification. Stratified societies have stricter incest prohibitions, presumably because incestuous relations provide a way of consolidating wealth and thereby moving up the social ladder. The upshot of all this is that, while incest avoidance is common, it is not universal in all forms. “Don’t engage in incestuous sexual relations” is too schematic a prescription to follow on its own. Different cultures regard different relatives as off-limits, and there are culturally sanctioned violations of the rule even within the immediate family. 
In order to avoid incest, we need to know which relations are incestuous (cousins?), which acts are incestuous (stroking your child’s genitals?), and which contexts allow for exceptions (are you going on a hunt?). The strength of the prohibition is also variable. If you live in an egalitarian society, it may seem less wrong than if you live in a stratified society. The three norms that I have been considering are among our best candidates for moral universals. My goal here has not been to deny their (near) universality. Instead, I have tried to show that these norms take on different forms in different cultures. Similar conclusions could be defended for other apparent universals. In most cultures, people believe that good deeds should be reciprocated, but there is variation in what deeds count as good, who is required to reciprocate, how much reciprocation is required, and whether that reciprocation has to take the same form as the initial act of kindness. In most cultures, people believe that resources should be distributed fairly, but there is great variation in the standards of fairness, with some cultures emphasizing equitable division, some emphasizing egalitarian division, and most cultures tolerating some gross departures from fairness. In most cultures, people place moral value on long-term romantic bonds or marriages, but the number and gender of marriage partners is notoriously variable, as are attitudes towards extramarital sex. Are there universal moral proscriptions? That depends. Most societies have moral rules pertaining to harms, social hierarchies, the exchange of resources, and various aspects of sexuality, but the content of those norms varies. Immodest moral nativists suggest that we are hardwired to have moral norms with specific content. That is not the case. At
best, we are hardwired with moral norms that have schematic content, which then get filled in through enculturation and learning. The examples that I have been considering suggest that modest moral nativism is more defensible than immodest moral nativism. However, I will now argue that modest moral nativism faces serious objections as well.
3 Is There a Morality Acquisition Device?

Here is a tempting picture of how moral rules are acquired. We are born with a small set of schematic moral principles – principles of the kind I have been discussing. Each of these principles has parameters that are set by exposure to the behavior of caregivers, peers, and other salient individuals in one’s culture. We can think of the innate mechanism as a Morality Acquisition Device (MAD). The schematic rules are our Universal Morality (UM), and the behaviors used in setting the parameters in those rules constitute Primary Moral Data (PMD). The analogy, of course, is to language and, in labeling the innate mechanism a Morality Acquisition Device, defenders of this analogy presume that morality is acquired via domain-specific resources. This is the picture suggested by modest moral nativists. There are alternatives to this picture. One alternative is that the schematic rules in question are not innate. Perhaps all societies acquire prohibitions against harm, rank violations, or incest via convergent cultural evolution, and these norms are then learned using general mechanisms. Another possibility is that we have innate schematic norms against harm, rank violations, and incest, but these norms are not moral norms. Rather they are norms of a more general kind, and they only get moralized through a learning process. Either of these possibilities would undermine modest moral nativism. Both are consistent with minimal moral nativism, however. Both are consistent with the possibility that we have an innate, domain-specific moralization mechanism (MM). This mechanism can convert norms that are not initially moral into moral norms. To decide between modest moral nativism and one of the other alternatives, we must consider two questions. First, could a cultural evolution story explain the near universality of the schematic rules that I have been discussing? Second, is there reason to think that these rules are universally moral? 
A negative answer to either question would cast doubt on the existence of a MAD. If the universals could be culturally evolved, then there is no pressure for them to be innate, and if the universals are not treated morally in all cultures, they may not qualify as an innate morality, even if they have an innate basis. If I can show either of these two things, I will seriously weaken the case for modest moral nativism. This is my objective in the next two subsections. After that, I will turn to the question of an innate moralization mechanism (MM).
3.1 Genes vs. genealogy

Modest moral nativists assume that our moral rules are written, albeit schematically, in our genes. An alternative possibility is that they are the products of cultural history. This
was essentially Nietzsche’s view. He invited us to regard each of our values as an artifact – a human creation devised to serve some psychological or social end. In arguing against immodest moral nativism, I have already implied that Nietzsche was onto something. The variation in moral rules demonstrates that culture is making a constitutive contribution to morality. Perhaps this can be pushed even further. On the modest nativist view, culture is basically switching a few parameters and filling a few slots in our moral schema. On a genealogical view, moral rules come into being through cultural processes. To assess whether the genealogical view is plausible, let’s see whether the norms that I have been considering might have a cultural explanation. Just about all societies have injunctions against harm. In many cases, we have seen, these injunctions are restricted to members of the ingroup, however ingroups happen to be defined. There is some reason to think that these rules have a basis in natural tendencies towards empathy and facilitation which can be found in social mammals quite generally. Wolves don’t randomly turn on members of their own pack. Natural selection has prevented that. But (for reasons I will discuss below), there is little reason to think that this natural tendency is a moral rule in other creatures. If we are disposed to kindness, why do we have moral rules against harm? The answer may have a lot to do with our intelligence. Human beings can reflect on the advantages of harm. We know that we can overpower others and take things from them that we would like to have. A simple tendency to treat others empathetically is too weak to overcome a cool cost/benefit analysis. The weakness of our empathetically driven concern for others is manifest in the monstrous cruelties that human societies regularly inflict against members of other groups. What is to prevent us from overriding empathy within our groups, especially when we can get away with it? 
What is to prevent us from forming small coalitions with kin and attacking the family next door? This danger is greatly compounded as human group sizes increase. As Hume pointed out, empathy is inversely related to distance. In order for societies to expand, they need to devise a mechanism to prevent us from harming those we don’t know very well. I suspect that moral prohibitions against harm are the culturally evolved solution to these problems. They ensure societal stability when empathy is discounted through distance or short-term cost/benefit analysis. Cultural rules make us feel guilty when we harm others, even if we don’t feel empathetic. But why moralize harm at all? Why not enforce harm norms through heavy sanctions? The answer is simple. By morally condemning harm, cultures can shape behavior without having to police it. Next consider rules pertaining to rank hierarchies. Some mammals have rank hierarchies, and others do not. What about us? Is social rank a biological inevitability or cultural construct? The variations in human hierarchies suggest that culture plays a strong role. Furthermore, there is a plausible story about how such hierarchies might be culturally evolved. Harris (1977) traces the path from egalitarian hunter-gatherer societies into heavily stratified states. The change, he suggests, is driven by human mastery of ecological resources. Imagine an egalitarian society in which certain individuals are especially adept at obtaining highly valued food. Such individuals can gain favor by redistributing the food that they acquire rather than keeping it to themselves. In so doing, they can gain the loyalty of followers, who band together to collectively help in the
accumulation and redistribution process. In this way, good redistributors can increase their social prominence even more and become Bigmen. If ecological conditions allow food to be farmed, stored, and hoarded, the power differential can increase. Bigmen can become a primary source for food during hard times because individuals working on their own cannot coordinate large-scale storage efforts. Bigmen thereby attain real authority and become chiefs, and their descendants inherit this status. Once a society develops ways of storing food, some individuals can be freed from the drudgery of food production. They can dedicate their time to the creation of other commodities and technologies. These individuals can also become soldiers or police, and, through military conquest, they can allow a chiefdom to expand. Expansion leads to statehood. Police collect tribute from neighboring villages in return for “protection” and access to the commodities. Professional diversification and expansion through conquest both lead to increased stratification. Within a state, there will usually be different classes with different access to material resources and correlated differences in authority and power. In order to ensure stability in the face of inequality, stratified cultures need to moralize deference to those who have higher social rank. In many societies, those with supreme power claim to have divine rights. Some societies denigrate the poor as inferior or even untouchable. According to Nietzsche, the Christian Church maintained social hierarchies by inculcating the idea that poverty is a virtue. Without moralization, there is risk of revolution. Turn now to incest taboos. There may be an innate tendency to avoid incest but, for reasons given below, that does not mean that we are biologically programmed to find incest immoral. The transition from incest avoidance to incest taboos may be driven by culture. 
Malinowski (1927) argued that incest taboos are needed because sex within the family would disrupt socialization. If parent–child incest were allowed, it would be difficult for parents to play the role of authorities and educators; the role of lover and the role of teacher can come into conflict. For similar reasons, it is often forbidden for teachers to have sexual relations with their students. Sibling incest may become taboo for another reason. If siblings marry each other, a family will not form close blood ties to another family. In early human populations, forming ties to other families would have been essential to forming successful bands and tribes, which would have been essential, in turn, for survival. Indeed, during periods of high mortality, a family that always intermarried would quickly die out. A blanket proscription of sibling incest is a good strategy for forging social ties to others. Groups that failed to devise such proscriptions may have failed to attain the level of cohesion needed for survival.
3.2 Moral and nonmoral norms

I have just been arguing that our universal moral norms could be products of cultural evolution, rather than biological evolution. However, this should not be interpreted as the claim that cultures devise these norms from scratch. Each may be built on innate tendencies that do not initially qualify as moral. Perhaps we are innately disposed to avoid harming others or to avoid incest, but not innately inclined to regard such behaviors
as morally wrong. To make sense of this possibility, we need to distinguish moral norms from other kinds of norms. Let’s begin with a general definition (for a related definition, which emphasizes independence from institutions, see Sripada and Stich 2006).

Norms are:

1 rules governing how people behave that are
2 psychologically internalized (not merely codified in a book); and
3 enforced by rewards or punishments.

Norms govern many aspects of our behavior: how we display our emotions in public, how we dress, how loud we speak, how we line up for the movies, how we raise our hands in class, and so on (for discussion, see Smith 2004: ch. 4). Many of these norms are nonmoral. Moral norms are a subset of norms, distinguished by their moral character. There are various theories of what moral character consists in (see Sripada and Stich 2006 for some examples). According to some theories, moral norms are distinguished by their subject matter; according to other theories, they are distinguished by the procedures by which they are discovered or the reasons by which they are justified; according to a third class of theories, moral norms are distinguished by the particular way in which they are psychologically internalized and enforced. I subscribe to a theory of this last kind. I think a moral norm is a norm that is enforced by certain emotions (see Prinz 2007). This view has been known historically as sentimentalism. Roughly, a person regards something as morally wrong (impermissible) if, on careful consideration, she would feel emotions of disapproval towards those who did the thing in question. I think “moral rightness” can be defined with reference to moral wrongness. Something is morally right (obligated) if failing to do it is morally wrong. Something is morally permissible if there is no moral norm against it. There are a number of different emotions of disapproval. 
Core examples include guilt, shame, disappointment, resentment, anger, indignation, contempt, and disgust. There is evidence that different kinds of moral rules recruit different emotions of disapproval (Rozin et al. 1999). We feel anger towards those who harm others, contempt towards those who disrespect members of other social ranks, and disgust towards those who commit incest. If we harm another person, we feel guilty, and if we violate norms of rank or incest, we feel ashamed. I cannot defend these claims here (see Prinz 2007). I will merely point out three relevant facts. First, emotion structures in the brain are active when people make moral judgments (Greene and Haidt 2002). Second, people who are profoundly deficient in emotions (psychopaths) never develop a true comprehension of morality (Blair 1995). Third, if we encountered a person who claimed to find killing (or stealing, or incest, etc.) morally wrong but took remorseless delight in killing and in hearing tales of other people killing, we could rightfully accuse him of speaking disingenuously. All this suggests that emotional response is essential to moral cognition. Norms that are not implemented by emotions of disapproval are not moral norms. Some of my arguments will depend on this premise, and will not be convincing to those who reject sentimentalism. But, in discussing cultural evolution, I have already provided an independent case against modest moral nativism.
Now let’s return to the question of interest. Are the universal norms that I have been considering universally moral? The answer seems to be no. Begin with strictures against harm. Most mammals feel emotional distress when they see a conspecific suffer. That emotional response may contribute to widespread avoidance of gratuitous harm, but it does not mean that all mammals regard harm as morally wrong. Empathetic distress is different from anger and guilt. A rat might resist hurting another rat because of empathy, but it will not feel guilty if it harms another rat or angry at other rats who commit harms. Rats don’t moralize. Like rats, we may be biologically prone to feel empathy, but not to moralize harm. Moralization takes place under cultural pressure. Initially we feel distress when we cause harm to others but, through socialization, we also come to feel guilty. Our moral educators tell us that we should feel bad when we hurt each other or take things that aren’t ours. They teach us by example to get angry at those who violate these norms, even when we are not directly involved. Moralization inculcates emotions of disapproval. To support this hypothesis, it would be useful to show that some cultures do not moralize harms. This is difficult to do because there is enormous cultural pressure to prohibit harm against the ingroup. The Ik may be an exception, but their tolerance of ingroup harm emerged under conditions of extreme hardship and deprivation. But, even if most cultures have moral norms against harm, the nature of those moral norms can vary in interesting ways. Consider Read’s (1955) analysis of the Gahuku-Gama, who live in the New Guinea highlands. Like the Yanomamö of the Amazon, members of this tribe frequently go on raids in which they kill their neighbors. They don’t think it is morally wrong to harm members of other groups. 
They probably do think it is morally wrong to harm members of the ingroup, but they think about that wrongness very differently than we would. First, they do not invoke general moral principles (“It’s wrong to kill”). Second, they often explain their moral behavior in prudential terms (“if you don’t help others, they won’t help you”). Third, they construe their prohibitions against harm as dependent on specific relationships. Just as parents have an obligation to protect their children, the Gahuku-Gama think they have an obligation to take care of members of their group. Harm is not wrong in itself, but only wrong relationally: it is morally wrong for you to kill a member of your clan, but not morally wrong for someone from another clan to kill that same person. Ironically, when strangers kill a member of the clan, the Gahuku-Gama seek revenge in order to settle the score, but when one member of a clan kills another, there is no punishment; that could trigger a devastating cycle of revenge within the clan. It is fair to say that the Gahuku-Gama culture has moralized ingroup harm derivatively by moralizing personal obligations to kin and clan. This would not work in larger societies. We live alongside strangers. As I suggested above, expansion requires moralization. In a large, pluralistic society like our own, we moralize harm itself; we say people have the right not to be harmed regardless of how they are related to us or each other. Such variations suggest that we do not have an innate moral norm against harm. Moralization comes with enculturation and takes on various forms. Now consider rank. All along I have been claiming that social hierarchies are enforced by moral norms. This is certainly the case in many societies. For example, it is morally
forbidden to disrespect one’s parents in some parts of the world, and we tend to morally condemn leaders who do not deserve their power. But these moral attitudes may not be hardwired. In egalitarian societies, there may not be enough stratification for moralistic rank norms to take hold. In Western free market societies, there is a tendency to idealize upward mobility and deny major differences in social classes. In North America, we are less prone to view rank in moral terms than members of more traditional stratified societies. This variation is consistent with the hypothesis that rank norms are not innately moral. They become moralized in societies that have a special stake in preventing people from obtaining higher social status. Finally, consider incest. Once again, we must distinguish innate moral norms from nonmoral behavioral tendencies. If it can be shown that there is an innate tendency to avoid incest, as exhibited in some species, it does not follow that there is an innate moral injunction against incest. A moral rule, or taboo, would require emotions of disapproval. Chimps avoid some forms of incest, but there is little evidence that incestuous chimps feel shame. Likewise, one can question whether we have innate moral prescriptions against incest. Surprisingly, in an extensive cross-cultural study, Thornhill (1991) found that only 44 percent of societies have explicit prohibitions of incest within the immediate family. This finding is consistent with the hypothesis that people naturally avoid incest but don’t moralize it. If we don’t like it to begin with, we don’t need to devise a strong taboo. According to Fortes (1949: 250), the Tallensi of Ghana “do not regard incest with the sister as sinful. When they say it is forbidden . . . they mean rather that it is disgraceful, scandalous” (quoted in Fox 1980: 36). This suggests that sibling incest is innately avoided, not moralized. 
That said, there is even some debate about whether incest avoidance is innate. The strongest arguments for innateness come from two sources: studies of aversive inbreeding effects and studies of “negative imprinting.” It is certainly true that inbreeding can be deleterious initially, but repeated inbreeding within a group can actually lead to a healthy and stable gene pool over time; those who inherit harmful recessive genes will die out, and a pool of good genes will be left behind. Therefore, natural selection would not necessarily favor incest avoidance. The term “negative imprinting” refers to a hypothesis that was originally suggested by Westermarck (1891). He claimed that brothers and sisters are biologically programmed to lose romantic interest in each other if they cohabit during early childhood. This hypothesis has been supported by two empirical findings. First, men and women who are raised collectively on Israeli kibbutzim very rarely marry, even if they are not related (Shepher 1971). Second, in Taiwan, some families adopt a daughter and raise her to marry their son, so they can avoid paying a costly bride price later on; these “minor marriages” are much more likely to fail than marriages arranged in adulthood (Wolf 1970). Both of these findings suggest that childhood cohabitation turns off romantic feelings. This conclusion is challenged in a critical review of the evidence by Leavitt (1990). Leavitt notes that children on kibbutzim engage in sexual play with each other when young, and they are subsequently discouraged from sexual activities in adolescence. He blames low marriage rates on sexual mores within the kibbutzim and ample opportunities to find other mates; kibbutzniks are encouraged to delay marriage until after their mandatory military service, by which time they have met
many potential mates outside of the kibbutz. Leavitt appeals to other factors in explaining the failure rate of Taiwanese minor marriages. First, such marriages are regarded as low status because they are intended to escape the cost of contracting a bride; second, there is a general taboo in Taiwan against sibling marriage, which may infect attitudes towards marriage between adoptive siblings; third, children raised as siblings have rivalries and other experiences during childhood that may make it hard to re-construe each other romantically later on; fourth, marriages arranged later in life can take the couple’s interests and personalities into account, whereas marriages arranged from early childhood cannot; fifth, in ordinary marriages, the son’s parents make a considerable investment in finding a bride, and the in-laws have a strong bond with their daughter, so there is more family pressure to make the marriage work. If Leavitt is right, incest taboos may have no innate basis. I think this is probably too strong a claim. It’s more likely that we are born with natural tendencies towards exogamy. If we are like chimps, we experience sexual wanderlust: we like to find lovers outside the natal group. But this becomes a moral norm only under cultural pressure. I think the case against modest moral nativism is overdetermined. Defenders of that view postulate a set of innate schematic moral rules that get elaborated in different ways through culture. The best evidence for modest moral nativism is that some moral rules can be found, with minor variations, universally. Against this picture, I presented two kinds of counterarguments. First, there are cultural explanations of why most cultures would end up with the rules in question. 
Second, there are variations in the extent to which these rules are moralized, and in the nature of the moral attitudes towards them when they are moralized; such variations suggest that these rules may not be innately moral even if they are underwritten by innate mechanisms. To these two arguments, I would now briefly add a third. Innate capacities don’t take much instruction. They can be acquired by triggering or casual observation. Most importantly, innate rules can be acquired without “negative data.” We don’t need to be corrected in order to figure out how to extend an innate rule to new cases. Moral rules, in contrast, seem to involve a fair degree of instruction. Children are naturally empathetic, but they steal, lie, hurt, and disrespect authorities. Despite periodic claims to the contrary, children who are raised with opposite-sex siblings are often unsubtly informed about incest taboos; among the Trobriand islanders, for example, brothers and sisters are not allowed to talk to each other or look at each other (Malinowski 1927). We receive a lot of moral instruction through explicit rules, sanctions, storytelling, role models, and overt attitudes expressed by members of our communities. The primary moral data are not impoverished. Without extensive guidance, children might not learn the correct moral rules. If we were innately moral, then we might not have to spend so much time instructing children and enforcing laws. I think modest moral nativism is wrong. There are no innate moral rules – not even schematic moral rules. Cultural universals are not the result of a shared UM. Instead, universals derive from similarities in some of our evolved nonmoral tendencies, and similarities in the needs of social groups. Universal moral rules are the result of convergent cultural evolution.
against moral nativism
183
4 Morality Without Innateness 4.1 Is morality a spandrel? If moderate moral nativism is wrong, there is no Morality Acquisition Device or Universal Morality (no MAD or UM). How then is morality acquired? Minimal moral nativism replaces the MAD with a generic Moralization Mechanism (MM). The MM is an innate, domain-specific capacity to moralize. It takes nonmoral rules as inputs, and produces moral rules as outputs. For example, the MM could convert incest avoidance into an incest taboo. Should we postulate such a mechanism? Moralization is clearly a real process. Rozin (1999) presents a number of recent examples. In America, we have moralized drug use, obesity, and cigarette smoking in recent years. Prior to moralization, these things may have been regarded as harmful, but they were not regarded with indignation, judgmental disgust, guilt, or shame. Perhaps moral prohibitions against harm, rank violations, and incest are acquired through the same process that leads us to moralize drugs, fat, and tobacco. I think this is an interesting question for future research. The question I want to ask here is whether the process of moralization depends on an innate MM. The alternative hypothesis is that our capacity to moralize is a by-product of capacities that evolved for other purposes. Capacities that emerge as inevitable by-products of other capacities are called spandrels (Gould and Lewontin 1979). If moralization is a spandrel, then minimal moral nativism is wrong. To address this issue, let’s get clear on what moralization consists in. If moral norms are norms that are enforced by emotions of disapproval, then moralization is a process by which we become disposed to experience those emotions. Those who moralize tobacco feel angry when they see someone smoking in a public space or feel guilty if someone catches them smoking. Do we have innate mechanisms for acquiring this pair of evaluative sentiments? Perhaps not. First consider anger. 
When anger is experienced on its own, it is not a moral response. In nonhuman animals, anger is an emotion that promotes aggression: it is a response to a threat from a conspecific. Animals need not have any moral sense to show rage. Suppose, like animals, we are disposed to feel angry when we perceive a threat. Now consider guilt. The term “guilt” always has moral connotations, but guilt may actually derive from a nonmoral emotion. When we feel guilty, we reflect on our faults, we become downtrodden, or even suicidal. This is exactly the same profile as another emotion: sadness. It is tempting to say that guilt is just sadness brought on by doing something wrong. Why should doing something wrong make us sad? The answer is simple. When we violate rules, members of our community are negatively affected. People we care about are hurt, and people we depend on distance themselves from us. Both of these things make us sad. But sadness did not evolve as a moral emotion. It is an emotion that occurs when we lose something we care about. It just so happens that breaking rules causes us to lose things that we care about. So guilt is an accidental by-product of sadness. Let’s now put anger and guilt together. Consider a child who grows up in a community where adults have decided that φ-ing is wrong. Violating local standards is a threat
184
jesse j. prinz
to the community, so they react with anger when people φ. As an observer, the child learns to react with anger as well. Now consider what happens if the child is caught φ-ing. Members of the community will react with anger or refuse to associate with the child. This makes the child sad. Thus, when the child considers φ-ing in the future, she resists because she doesn’t want to feel sad about her actions (i.e., guilty). A child who learns to feel angry and guilty about φ-ing has moralized φ-ing. This process of moralization does not require a fancy innate mechanism evolved for the purpose of moral learning. It just requires garden-variety anger and sadness, a capacity to recognize emotions in others, and being raised in a community of people who already have some moral norms. Similar stories might be told about how people come to acquire other emotions associated with moral disapproval. The point is that these are not distinctively moral emotions; they are nonmoral emotions that have been adapted to ground moral norms. If the story that I have been sketching is right, we can acquire moral rules without a special innate mechanism dedicated to that end. Morality emerges out of more general emotional capacities that evolved for other purposes. If I am right, there is no MM. Minimal moral nativism is false. Morality is a spandrel.
4.2 Is there an innate norm acquisition mechanism? The general outlook defended in this discussion closely parallels ideas defended by Sripada and Stich (2006). Like those authors, I have argued that moral judgments are not universal across cultures, despite some similarities, and I have argued that emotions play an important role in acquisition and implementation of moral norms. Like them, I have also explored these ideas with an interest in explaining how moral norms are acquired. Sripada and Stich are agnostic about how moral norms differ from other norms, and they think we are not yet in a position to determine how much innate machinery we need to explain the acquisition of moral norms. I have been less agnostic, arguing explicitly against moral nativism. Even if I am right, Sripada and Stich raise an interesting question in their discussion. Supposing there is no innate mechanism for moralization, might there be a more general mechanism for the acquisition of norms? Sripada and Stich suppose there is. They propose a “boxological” model for norm acquisition that contains an “acquisition mechanism” which takes proximal environmental cues as inputs and generates entries in a “norm data base” as outputs. I have no qualms with boxology. Functional decomposition is the major aim of cognitive science, and flowcharts are a useful tool in the endeavor. But I do think one has to exercise caution when labeling the boxes. The label “norm acquisition mechanism” implies that there is a mechanism whose function is the acquisition of norms. I think the postulation of such a mechanism is premature and methodologically risky. It is akin to postulating a module for every well-documented cognitive capacity. Some capacities may be underwritten by domain-specific modules, but many of them – perhaps most – may be underwritten by mechanisms that are more domain-general. In discussing moral norm acquisition, I suggested that morality is underwritten by domain-general emotions.
I now want to suggest that norms in general are acquired by means of mechanisms that are not
designed especially for the function of norm attainment. There is no norm acquisition mechanism per se, but rather several more general mechanisms that happen to result in the attainment of norms. Let me illustrate with three examples. First, consider etiquette norms. We do not chew with our mouths open because we have been encouraged by members of our community to construe it as disgusting. It is easy to trigger core disgust (i.e., disgust that has the function of protecting us from contamination) in the context of biological processes (see Nichols 2002). If caregivers show disgust when children chew with open mouths, children are more likely to acquire the view that such behavior is disgusting. Sripada and Stich call this a Sperberian bias (after Dan Sperber). Now, consider norms of how close we stand to each other during conversations. Human children can learn physical behaviors by imitation or behavioral mirroring. If we stand too close to someone, that person will pull away, and we unwittingly pick up on the distance pattern. Violations of distance norms may induce agitation in our interlocutors, but the norms can be acquired without experiencing that agitation. Finally, consider norms about what side of the street to drive on. These are likely to be learned by explicit instruction and conscious observation. We explicitly formulate the rule and train ourselves to act on it, as if it were second nature. Sripada and Stich cite evidence that many norms are learned through explicit instruction. The first thing to note about these examples is that nonmoral norms are acquired in a variety of different ways. The second thing to notice is that none of these ways requires a norm acquisition mechanism. Emotional conditioning, unconscious imitation, and learning through instruction are all general capacities that serve us outside normative domains. The suggestion that we need a special mechanism for norm acquisition requires further support.
Sripada and Stich might reply by pointing out that there are several constraints on norm acquisition that appear to be specific to norms. First of all, there are social heuristics. For example, we don’t imitate just anyone; we are biased in favor of imitating people who have prestige. Sripada and Stich believe that such biases count against anti-nativist theories of norm acquisition. I disagree. Imitating prestigious people is valuable outside the domain of norms. For example, it is advantageous to imitate successful people when we learn basic skills (e.g., in hunting or gathering). Unless these skills are norms, prestige bias cannot be described as a dedicated component of a norm acquisition mechanism. Sripada and Stich might pursue a second line of response. They might argue that the content of some of our norms is innately prepared. This could be the case with incest avoidance, which, though not innately moralized, may qualify as an innate nonmoral norm. I am a bit reluctant to call incest avoidance a norm. It is unclear whether it is implemented (without training) by punishment. It is also questionable whether incest avoidance is a rule as opposed to a behavioral disposition, but I will not attempt to explicate that distinction here. For even if incest avoidance is an innate norm, that does not show that there is a norm acquisition mechanism. We don’t need an acquisition mechanism for a norm that is already there. I doubt that there are any innate moral norms, but I will not attempt to argue that there are no innate nonmoral norms. There is a third line of response that Sripada and Stich might consider. I have argued that some norms are underwritten by emotions that are not specific to the normative
domain. There is, however, at least one emotion that appears dedicated to social norm acquisition: embarrassment. Embarrassment is a social emotion, and its primary function seems to be “saving face” when we violate social norms. To this suggestion, I have two replies. First, if embarrassment is dedicated to norm acquisition, it is not the sole means of norm acquisition, and, therefore, it would be an exaggeration to call embarrassment the norm acquisition mechanism. Instead, we would have to say that norms are acquired through domain-specific and domain-general mechanisms. Second, embarrassment could not aid in norm acquisition if its sole function were registering norm violations. Consider how that story would have to go. To teach a child a norm, a caregiver would have to induce embarrassment in the child. But, if embarrassment could be induced only by norm violations, then embarrassment could be induced only if the child already thought that her behavior violated a norm. In other words, the child would already have to have the norm allegedly being acquired (or, at any rate, some related norm applicable in the same situation). Here is an alternative story. Suppose that embarrassment is not primarily a response to norm violation, but rather a response to unwanted attention; e.g., embarrassment arises when someone stares at you in a situation when you don’t want to be stared at. We are embarrassed to give public speeches or to receive awards and gifts even though no norms have been violated. On this model, embarrassment can be used to acquire norms because people will not want to engage in conduct that attracts unsolicited attention. Embarrassment is still a social emotion on this picture, but its utility in norm acquisition is a by-product of its primary function, which is withdrawal from unwanted attention. Perhaps this emotion evolved because of its role in facilitating norm acquisition, but it may have evolved for some other purpose.
It is rarely a good thing to receive attention when you don’t want it. These remarks are far from conclusive. With Sripada and Stich, I think there is great interest in trying to determine the mechanisms that allow us to acquire norms, and I applaud them for making explicit proposals and for acknowledging how much we still don’t know about these phenomena. With them, I also think there are mechanisms involved in norm acquisition (trivially so, since some norms are acquired), and some of these mechanisms may be domain-specific. At present, however, it remains an open possibility that norm acquisition relies heavily on more general mechanisms. This possibility is consistent with what Sripada and Stich have to say, but I think the boxological reification of a norm acquisition mechanism prematurely biases inquiry, by adopting the vocabulary of nativist accounts. Cultural variation in norms should lead us to explore the possibility that the mechanisms of norm acquisition, like the contents of our norms, are not innate. More accurately, the innate mechanisms that contribute to norm acquisition may not be dedicated to that purpose.
Appendix: Moral Anti-nativism and Moral Relativism I have argued that morality is not innate. It is a by-product of other capacities. In making this case, I suggested that moral rules vary significantly across the globe. This has implications for moral relativism. It suggests that, as a matter of fact, there is cultural
variation in moral norms. It also suggests that such cultural differences may be impossible to rationally resolve. Moral norms are products of nonrational enculturation, not deliberation and deduction from shared first principles. People moralize different things because they are inculcated into different value systems – systems that have emerged through cultural evolution under the pressure of social and ecological conditions that may be specific to a particular group. If we tried to convince a member of the Gahuku-Gama that he should not kill someone in the next village, he would reply that he has no obligation to anyone in that village. If we tried to convince a Bedouin that she shouldn’t marry her first cousin, she would express her disgust at the thought of marrying someone who isn’t even related. To make progress in a moral debate, there must be a bedrock of shared moral principles. For example, if you and I both agree on when life begins and on the principle that all humans have a right to life (both issues have a moral dimension), we can agree about abortion. Otherwise, not. Across cultures, there is no shared moral bedrock. Therefore, the biological underpinnings of morality cannot be used to adjudicate between competing values. This amounts to a strong form of descriptive moral relativism. As a matter of descriptive fact, there are cultural differences in morality that cannot be resolved by appeal to shared moral norms. Some philosophers think that moral disputes can be resolved by the power of cool reason (i.e., reason without evaluative premises). I will not address that contention here. For the record, I think cool reason is fundamentally incapable of resolving debates about basic moral values. Cool reason can no more do that than it can resolve debates about whether Brigitte Bardot is better looking than Marilyn Monroe. If I am right about that, then cross-cultural moral disputes cannot be rationally adjudicated.
This conclusion has bearing on metaethical moral relativism. Metaethical moral relativism is the view that there is not a single true morality. Different moral codes can have equal claim to being right. If we have no way to resolve moral disputes by appeal to a shared innate code or the dictates of reason, then there are two possibilities. One is that there is a single true morality whose status as such is fundamentally unknowable. Such a morality would be useless to us because it could not guide action when moral disagreements arise. This possibility is not just silly; it is demonstrably false. If I am right in believing that moral convictions are affect-laden and learned, then there is nothing in virtue of which one unique set of moral convictions could be true. The truthmaker for moral claims lies not in Plato’s heaven, Kant’s deductions, or Darwin’s descent, but in us. This brings us to the second possibility: there is not a single true morality. Moral convictions are cultivated by human societies, and moral facts are determined by these convictions. This means that morality is a work in progress. Perhaps we can play a role in revising morality to suit various nonmoral needs.
References Allison, A. (2000) Permitted and Prohibited Desires: Mothers, Comics and Censorship in Japan. Berkeley, CA: University of California Press.
Bennett, R., Motulsky, A. G., Bittles, A. H., Hudgins, L., Uhrich, S., Lochner Doyle, D., Silvey, K., Scott, R. C., Cheng, E., McGillivray, B., Steiner, R. D. and Olson, D. (2002) “Genetic Counseling and Screening of Consanguineous Couples and Their Offspring: Recommendations of the National Society of Genetic Counselors.” Journal of Genetic Counseling 11: 97–119. Blair, R. J. (1995) “A Cognitive Developmental Approach to Morality: Investigating the Psychopath.” Cognition 57: 1–29. Broude, G. J. (1995) Growing Up: A Cross-Cultural Encyclopedia. Santa Barbara, CA: ABC-Clio. Chagnon, N. A. (1968) Yanomamö: The Fierce People. New York: Holt, Rinehart and Winston. de Waal, F. B. M. and Lanting, F. (1997) Bonobo: The Forgotten Ape. Berkeley, CA: University of California Press. Finkelhor, D. (1983) The Dark Side of Families: Current Family Violence Research. Newbury Park, CA: Sage Publications. Fortes, M. (1949) The Web of Kinship Among the Tallensi. Oxford: Oxford University Press. Fox, R. (1980) The Red Lamp of Incest. New York: E. P. Dutton. Goodall, J. (1986) The Chimpanzees of Gombe: Patterns of Behavior. Cambridge, MA: Harvard University Press. Goody, J. (1983) The Development of the Family and Marriage in Europe. Cambridge: Cambridge University Press. Gould, S. J. and Lewontin, R. C. (1979) “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme.” Proceedings of The Royal Society of London, Series B, 205, 581–98. Greene, J. and Haidt, J. (2002) “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences 6: 517–23. Harris, M. (1977) Cannibals and Kings: The Origins of Culture. New York: Vintage. Harris, M. (1985) Good to Eat: Riddles of Food and Culture. New York: Simon and Schuster. Junod, Henri A. (1962) The Life of a South African Tribe, Vol. II, Mental Life. New York: University Books. Kilham, W. and Mann, L. 
(1974) “Level of Destructive Obedience as a Function of Transmitter and Executant Roles in the Milgram Obedience Paradigm.” Journal of Personality and Social Psychology 29: 696–702. Leavitt, G. C. (1990) “Sociobiological Explanations of Incest Avoidance: A Critical Review of Evidential Claims.” American Anthropologist 92: 971–93. Malinowski, B. (1927) Sex and Repression in Savage Society. New York: Harcourt, Brace. Mantell, D. M. (1971) “The Potential for Violence in Germany.” Journal of Social Issues 27: 101–12. Milgram, S. (1974) Obedience to Authority. New York: Harper & Row. Nichols, S. (2002) “On the Genealogy of Norms: A Case for the Role of Emotion in Cultural Evolution.” Philosophy of Science 69: 234–55. Prinz, J. J. (2007) The Emotional Construction of Morals. Oxford: Oxford University Press. Read, K. E. (1955) “Morality and the Conception of the Person among the Gahuku-Gama.” Oceania 25: 233–82. Rosaldo, R. (1980) Ilongot Headhunting 1883–1974. Stanford: Stanford University Press. Rozin, P. (1999) “The Process of Moralization.” Psychological Science 10: 218–21. Rozin, P., Lowery, L., Imada, S., and Haidt, J. (1999) “The CAD Triad Hypothesis: A Mapping Between Three Moral Emotions (Contempt, Anger, Disgust) and Three Moral Codes (Community, Autonomy, Divinity).” Journal of Personality and Social Psychology 76: 574–86. Scheidel, W. (1996) “Brother–Sister and Parent–Child Marriage outside Royal Families in Ancient Egypt and Iran: A Challenge to the Sociobiological View of Incest Avoidance?” Ethology and Sociobiology 17: 319–40.
Shaw, B. (1992) “Explaining Incest: Brother-Sister Marriage in Graeco-Roman Egypt.” Man 27: 267–99. Shepher, J. (1971) “Mate-Selection among Second-Generation Kibbutz Adolescents and Adults: Incest-Avoidance and Negative Imprinting.” Archives of Sexual Behavior 1: 293–307. Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., and Sherif, C. W. (1954) Experimental Study of Positive and Negative Intergroup Attitudes between Experimentally Produced Groups: Robbers Cave Study. Norman, OK: University of Oklahoma Press. Slotkin, J. S. (1947) “On a Possible Lack of Incest Regulations in Old Iran.” American Anthropologist 49: 612–17. Smith, M. N. (2004) Property Rights, Social Norms and the Law. Doctoral thesis, University of North Carolina, Department of Philosophy. Sripada, C. and Stich, S. (2006) “A Framework for the Psychology of Norms,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 2: Culture and Cognition. Oxford: Oxford University Press. Stich, S. (ed.) (1975) Innate Ideas. Berkeley, CA: University of California Press. Thornhill, N. W. (1991) “An Evolutionary Analysis of Rules Regulating Human Inbreeding and Marriage.” Behavioral and Brain Sciences 14: 247–93. Turnbull, C. (1972) The Mountain People. New York: Simon and Schuster. Westermarck, E. (1891) A History of Human Marriage. New York: Macmillan. Wolf, A. (1970) “Childhood Association and Sexual Attraction: A Further Test of the Westermarck Hypothesis.” American Anthropologist 72: 503–15.
11 Replies STEPHEN STICH
I was both delighted and flattered when Mike Bishop and Dominic Murphy first told me that this volume was in the works. And I was quite thrilled when I got the list of distinguished philosophers who had agreed to contribute, though I quickly came to realize that the job of preparing responses to such an outstanding group of critics would be a daunting one indeed. Before plunging in, I want to express my gratitude to Mike and Dominic, to all the contributors, and to the series editor, Ernie Lepore. Thanks are also due to Boris Yakubchik, who helped assemble the references for my replies.
Reply to Devitt and Jackson Both Devitt and Jackson focus on my critique of ontological arguments that invoke the strategy of semantic ascent and appeal to a theory of reference. Though they agree that the critique raises some important problems, they disagree sharply on how to react to those problems. Devitt shares my view that we should reject the arguments, though he thinks the reasons I offer are misguided, and offers some reasons of his own. Jackson, by contrast, thinks that the arguments can be retained provided that they invoke the right theory of reference, and he offers a criterion for choosing the right theory – “the one which is of interest when we do ontology” (p. 68). In this reply, I’ll discuss Devitt’s chapter first, then turn to Jackson’s.
Devitt The reasons I set out for rejecting ontological arguments that use the strategy of semantic ascent and appeal to claims about reference rely heavily on my discussion of what the theory of reference is trying to do, and on this issue Devitt and I agree on some points and disagree on many others. To focus in on this complex pattern of agreement and disagreement, I’ll sketch the path that led me to an exploration of the foundations of the theory of reference. The ontological debate that first raised these issues for me was, of course, the debate over eliminativism.
The story starts with the observation that while arguments for eliminativism vary significantly in detail, they all have the same structure. The first premise of the argument maintains that commonsense mental states like beliefs and desires can be viewed as posits of a widely shared commonsense theory – “folk psychology” – which underlies our everyday discourse about mental states and processes, and that terms like ‘belief’ and ‘desire’ can be viewed as theoretical terms of this folk theory. The second premise maintains that folk psychology is a seriously mistaken theory. This premise has been defended in many ways, with different authors focusing on different putative defects. The conclusion that eliminativists draw from these two premises is that beliefs, desires, and the other posits of folk psychology do not exist. But, as Bill Lycan (1988) pointed out with characteristic verve and clarity, the conclusion does not follow from these premises taken alone. To fill the gap, Lycan noted, most eliminativists either explicitly or tacitly rely on some version of the description theory of reference for theoretical terms. Description theories are not the only game in town, however. Inspired by the influential work of Putnam (1975) and Kripke (1972), many philosophers, including Devitt himself (Devitt 1981; Devitt and Sterelny 1999), have embraced one or another version of the causal-historical theory of reference. And if a theory in that vicinity provides the correct account of reference for the theoretical terms of a commonsense theory, then the argument for eliminativism fails even if the premises are true. With a single caveat, I think Devitt is in agreement with all this. The caveat is that description theories and causal-historical theories do not exhaust the options. Information-theoretic theories like Dretske’s (1981) and teleological theories of the sort developed by Millikan (1984) and Papineau (1987) are also in the running. Devitt is quite right about this. 
So thus far we are in complete agreement. If eliminativists’ arguments explicitly – or more often tacitly – invoke an assumption about the reference of theoretical terms in a folk theory, then even if we grant that the first two premises of the eliminativists’ argument are true, we will have to determine which theory of reference provides the correct account for these terms in order to assess the soundness of the argument. But prior to plunging into the debate over the merits of competing theories of reference, it is important to be clear about just what counts as getting a theory of reference right. What facts is a theory of reference attempting to describe or account for? What job is the theory expected to do? It is curious and puzzling that contemporary analytic philosophers, some of whom have offered detailed and insightful accounts of the goals, methods, and explanatory strategies in other disciplines, have been notably unreflective about their own. While there may be some exceptions to this – some areas of philosophy in which philosophers have been concerned with their own methodology – the theory of reference clearly was not one of them. At the time Deconstructing was written, one could not go to the library and read what theorists in this area had said about their goals since, to a good first approximation, they hadn’t said anything at all. So I set out to describe some of the projects that people might have in mind when they put forward a theory of reference or debate the virtues of competing theories. At this point, Devitt is still on board, I think. Like me, he believes that it is important to get straight on what the goals of a theory of reference are, and he has offered his own account of what theories of reference should and should not be trying to do.
In Deconstructing, I distinguish two families of answers to the question, “What is a theory of reference trying to accomplish?” The first family of answers – which I call the folk semantics family – aims to come to grips with the fact that appeal to intuitions plays a central role in debates about the virtues of competing theories of reference; more often than not these intuitions concern what various terms would denote in bizarrely counterfactual situations. So, for example, in Naming and Necessity, Kripke (1972) asks us to imagine that the incompleteness theorem was actually proved by a man called ‘Schmitt’ and that after some skullduggery the theorem was attributed to Gödel. On a description theory, an “ordinary man” whose only interesting belief about Gödel is that he is the discoverer of the incompleteness theorem would be referring to Schmitt when he uses the name ‘Gödel’. But our intuitions tell us otherwise, and this, it was widely agreed, is a major embarrassment for the description theory for proper names. Similarly, Putnam introduced us to the imaginary planet of Twin Earth where the liquid that falls as rain and fills the lakes is XYZ not H2O, and this spawned an enormous literature exploring our intuitions about the reference of ‘water’ when used by Earthlings, Twin Earthlings and a bewildering variety of travelers who are magically transported from one planet to the other (Pessin and Goldberg 1996). The correct theory of reference, it is widely assumed, must accommodate most of these intuitions. What sort of project could make sense of all this intuition-mongering? What goal could be served by it? One obvious idea turns on the notion of a tacit theory that has played a large role in cognitive science. People, cognitive scientists tell us, have a significant number of tacit theories – bodies of mentally represented information (or misinformation) which they cannot explicitly state but which manifest themselves in their intuitive judgments and behavior. 
No doubt the most widely discussed of these is grammar, a tacit theory which, according to Chomsky and his many followers, is exploited in the production and comprehension of natural language and manifests itself in people’s intuitive judgments about grammatical properties of utterances. Folk psychology and folk physics are also widely thought to be tacit theories. It is plausible to hypothesize that there is also a tacit theory, call it folk semantics, which guides people’s intuitive judgments about what terms refer to. If the goal of a theory of reference is to describe this tacit theory, then collecting intuitions about a wide range of cases, and insisting that the right theory of reference must be compatible with most of these intuitions, would be a perfectly sensible methodology. This gives us one account of what it is to get a theory of reference right. The right theory is the one that correctly describes the tacit theory that (inter alia) guides reference intuitions. That’s not the end of the story about the folk semantics account, however. In motivating the idea that reference intuitions might be subserved by a tacit theory, I mentioned three suggestive analogies: folk psychology, folk physics, and grammar. But there is an important difference between grammar and the other two. This difference emerges when we ask whether the hypothesized tacit theory is true. It is important to keep in mind that in each of these three cases, there are actually two theories under discussion. On the one hand, there is the theory constructed by a psychologist or linguist or philosopher that purports to describe a tacit theory. That theory is true if it correctly describes the tacit theory represented in the relevant people’s heads. But what about the tacit theory itself?
What determines whether or not that theory is true? In the case of folk physics and folk psychology, the basic outlines of the answer are straightforward enough. Folk physics is a theory about the principles governing the behavior of middle-sized physical objects, and folk psychology is a theory about states and processes underlying behavior. So the tacit theory is correct if it correctly describes those principles or processes. In the case of folk physics, we know from the work of McCloskey (1983), Clement (1983) and others that important parts of the theory are false. In the case of folk psychology, the jury is still out; whether or not folk psychology is largely true is a central issue dividing eliminativists and their opponents. But now what about grammar? Here the dominant view – the one advanced by Chomsky and his followers – is that a sentence is grammatical in a dialect if and only if it is classified as grammatical by the grammar represented in the minds of speakers of that dialect. On this account, while a linguist can certainly be mistaken in her characterization of the grammar of a dialect, the grammar itself – the tacit theory she is trying to describe – cannot be wrong in what it entails about the grammaticality of sentences, since what makes a sentence grammatical in a dialect is simply the fact that the grammar tacitly known by speakers of that dialect entails that it is grammatical. With this distinction in hand, let’s return to the theory of reference. On the folk semantics account, the goal of a theory of reference is to correctly describe the tacit folk semantic theory represented in the heads of some group of speakers. But what is the status of the claims made by that tacit theory? Here, it seems, there are two possible answers. If we pursue the analogy with grammar, then folk semantics is constitutive for reference: a term (in a dialect) refers to an object if and only if the folk semantic theory entails that the term refers to the object. 
So there is an important sense in which folk semantics can’t be wrong. On the other hand, if we pursue the analogy with folk physics or folk psychology, then folk semantics might well be mistaken. Whether or not it is mistaken depends on how well it matches up with the facts about reference, and folk semantics itself does not create these facts. So to know whether or not folk semantics is correct, we’ll have to pursue some other inquiry. Just as the science of physics is the standard by which we decide whether folk physics is correct, and the science of psychology is the standard by which we decide whether folk psychology is correct, it will be the job of some appropriate branch of science to tell us whether folk semantics is correct. Now if the job of a theory of reference is to describe a tacit folk semantic theory, then so long as we are focused on accomplishing that job it matters little which of these options is correct. The issue is important, though, when the theory of reference is pressed into service in metaphysical or ontological arguments like the argument for eliminativism. For what we need to know in assessing that argument is whether or not terms in a seriously mistaken theory refer. On the grammar analogy, the pronouncements of an internally represented folk semantics settle the matter. But on the folk physics and folk psychology analogy, the pronouncements of folk semantics might be quite mistaken. It’s not a folk theory but an appropriate branch of science that tells us about the reference of various sorts of terms. So if we are concerned to assess the eliminativists’ argument, then we’re left with only two options: adopt the grammar analogy, or forget about folk semantics and explore what the appropriate science has to tell us about reference. Having reached this point in Deconstructing, I went on to argue
stephen stich
that neither of these options is viable, and thus that the eliminativists’ argument is fatally flawed. Devitt has a much more sanguine view of the second option, and that is, I think, by far our most interesting and important disagreement. I’ll turn to that disagreement shortly. But first, let me say where I think Devitt and I agree and disagree with respect to the idea that the goal of a theory of reference is to describe a folk semantic theory. The pattern of agreement and disagreement here is rather complicated. Let’s start with the following question: What do philosophers who debate the virtues of competing theories of reference think that they are up to? What do they aim to accomplish? In Deconstructing, I speculated that, if pressed, most philosophers would take themselves to be engaged in some version of the folk semantics project; they are trying to “capture the details of a commonsense theory about the link between words and the world.” It is particularly gratifying that Devitt agrees about this, since, while I dabble in the philosophy of language from time to time, Devitt is a pro who knows both the literature and the folks who produce the literature far better than I. Another point on which Devitt and I agree is that if a theory of reference is pursuing the folk semantics project, and if the analogy with folk physics is the right one, then there is no reason to think that the results of the project will give us the right account of reference. So while it might be of psychological interest, the project is of no help in evaluating the eliminativist argument. What about the other analogy, the one that takes its inspiration from the Chomskian account of grammar? What’s Devitt’s view on that option? Here, I confess, I am not entirely sure.
In recent years, Devitt has been a trenchant critic of the Chomskian account of grammar, and has argued that “the idea that the grammar is represented in the minds of speakers is implausible and unsupported.”1 This is not the place to debate the merits of Devitt’s challenging and controversial critique of Chomsky. But even if he is right, it is not clear that it would tell us much about the version of the folk semantics project that opts for the Chomskian analogy. For even if there is no grammar mentally represented in the minds of speakers, it might well be the case that speakers have a tacit folk theory of reference. People do, after all, have reference intuitions of the sort that Kripke, Putnam, and many other philosophers explore and exploit. And, as Frank Jackson often says about intuitions of this sort, surely they are not random. So something is guiding people’s judgments when they offer intuitions about reference, and a tacit theory is certainly a prime candidate. Does Devitt disagree? I don’t know the answer to that. The idea that there is a tacit theory of reference subserving reference intuitions is common ground between the linguistics analogy and the folk physics analogy. What is distinctive about the linguistics analogy is the contention that a person’s tacit theory of reference can’t be wrong because the word–world relation specified by that tacit theory is constitutive of reference for that person. This, I’m sure, is an idea that Devitt would reject, since it leads pretty directly to the claim that knowledge of some semantic facts is a priori, and Devitt thinks that no knowledge is a priori. But I am not much inclined to follow him here. For it seems entirely possible that an internally represented theory or an internally represented body of rules could sustain a sort of a priori knowledge. Suppose, for example, that I invent a variant on the standard game of chess by adding a few new rules. I dub the new game ‘Stich-chess’ and I memorize the rules. 
One of the new rules
replies
specifies a new way of check-mating your opponent: if a capture leaves your opponent with only three pieces on the board, then he has been check-mated. It seems entirely plausible to say that the new rules are partly constitutive of what it is to check-mate someone in Stich-chess, and that I can know a priori that a move which leaves an opponent with only three pieces on the board is a check-mate in this game. If this story is coherent, and surely it is, then I can’t see why a theorist could not adopt a similar view about the way in which an internalized theory of reference leads to a priori knowledge about reference. In Deconstructing, I did not argue that there was anything incoherent or implausible about the folk semantic project when paired with the linguistics analogy. What I did argue is that if that’s what the theory of reference is up to, then the results would be of no help in assessing the soundness of the eliminativists’ argument, or in resolving other sorts of ontological issues. The argument I offered was long, complicated and, many readers have told me, more than a bit obscure. A few years after the book was published, Mike Bishop and I came up with a much shorter and more elegant way of making the point (Bishop and Stich 1998). To use a theory of reference and the strategy of semantic ascent to draw ontological conclusions, one must assume that the word–world relation, R, that the theory characterizes satisfies a “disquotation” principle like one of the following:

(1) (x) Fx iff ‘F_’ stands in the R relation to x.
(2) ¬(∃x) ‘F_’ stands in the R relation to x → Fs do not exist.2

But that is not an assumption that one gets for free. There are, after all, endlessly many word–world relations that don’t satisfy these disquotation principles. So some argument is needed to justify the claim that the relationship defined by an internalized folk semantics does satisfy them.
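For readers who prefer modern quantifier notation, the two disquotation principles can be set out in a display. This is only a restatement of (1) and (2) above: the original’s ‘(x)’ is rendered as ∀x, and R is the word–world relation the theory of reference characterizes.

```latex
% A restatement of disquotation principles (1) and (2) in modern notation.
% R is the word-world relation characterized by the theory of reference;
% `F_' is a schematic predicate letter, as in the text.
\begin{align*}
\text{(1)}\quad & \forall x \bigl( Fx \leftrightarrow \text{`}F\_\text{'} \text{ stands in the } R \text{ relation to } x \bigr) \\
\text{(2)}\quad & \neg \exists x \bigl( \text{`}F\_\text{'} \text{ stands in the } R \text{ relation to } x \bigr) \rightarrow F\text{s do not exist}
\end{align*}
```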
I have no idea how to produce that argument and, as far as I know, no one else does either. And without such an argument, the folk semantics project is of no help at all in determining whether the eliminativists’ argument is sound. Let’s turn now to the other family of answers to the question: “What is a theory of reference trying to accomplish?” – the family that invokes some version of what I’ve called the “proto-science” project. Here the central idea is that just as we look to physics to tell us what matter and energy are, and we look to biology to tell us what genes are, perhaps we can look to science to tell us what reference is. The problem, of course, is that there is no science to turn to. No empirical inquiry has offered us anything like the sort of account of reference that would be of use in assessing the eliminativist argument. However, taking a cue from Cummins (1989), we might try to get some indirect help from the sciences. While no existing science tells us what reference is, some branches of science seem to invoke reference, or at least some reference-like word–world relation, in setting out their explanations and theories. Linguistics is the most obvious candidate, though cognitive psychology, anthropology or even history might be thought to make use of such a relation. If they do, then we can try to do for those sciences what Cummins, Egan, Ramsey, and others have tried to do for representation – the head–world relation invoked in various branches of cognitive science.3 That is, we can try to describe in detail
the sort of relation that is presupposed by these sciences, to say what features that relation must have for the theories and explanations that invoke the relation to be convincing or at least plausible. In Deconstructing, I expressed skepticism “that any of these areas of inquiry make genuinely explanatory use of a reference-like word–world relationship” (44). But in light of Devitt’s comments, I’m inclined to back off on this point. While Devitt has not convinced me that linguistic semantics invokes a substantive (as opposed to a deflationary) notion of reference, he has convinced me that the issues involved are contested and complex. A second point I made in my brusque dismissal of the proto-science project in Deconstructing was one suggested by Cummins’ work on representation. It is naive, Cummins insists, to suppose that each branch of cognitive science invokes the same notion of representation. Indeed, if Cummins is right, it is worse than naive, it is false. Similarly, if linguistics, cognitive science, anthropology and history all invoke a reference-like word–world relation, it would be naive to assume that they all invoke the same relation. And if, as the proto-scientific project progresses, it turns out that different relations are in play in these different disciplines, then it looks like the philosopher who wants to use the results of the proto-scientific project to assess the eliminativists’ argument is beset with an embarrassment of riches. For if there are different reference-like word–world relations used in different sciences, which one should the philosopher rely on in assessing the tacit third premise of the eliminativists’ argument? Without some well-motivated answer to this question, the proto-science project may turn out to be of no help at all. Devitt proposes a different way of understanding the proto-science project, a way which, he thinks, sidesteps this problem. 
Before turning to Devitt’s version of the protoscience project, I want to note an important point that I missed in Deconstructing. Suppose that the embarrassment of riches problem can be addressed in some principled way; suppose, for example, that Devitt is right and that only linguistic semantics invokes a reference-like word–world relation. So there is only one candidate for a Cummins-style proto-scientific explication. That by itself would not show that the word–world relation invoked by linguistic semantics is the one to be used in assessing the plausibility of the eliminativists’ arguments and other, similar, ontological arguments. For as Bishop and I argued, to be useful in ontological arguments, a word–world relation must satisfy disquotation principles like (1) and (2). And neither the fact that a word–world relation is the one exploited in linguistic semantics, nor the fact that researchers in that area call the relation “reference,” provides any obvious reason to suppose that the relation satisfies the disquotation principles. Here, just as with the word–world relation favored by folk semantics, we need an argument. And as far as I can see, no one has the faintest idea how to provide that argument. In introducing his version of the proto-science project, Devitt begins with what he thinks is an obvious answer to the question that has been center-stage in our exchange: “What makes a theory of reference true or false? Well, the nature of the reference relation does. What is a theory of reference supposed to do? Well, characterize that nature” (p. 48). As he notes a bit later, however, “the simple and obvious claim that the task of the theory of reference is to characterize reference strikes people as naive because there is thought to be a special problem about identifying reference. Which relation is it
the task to characterize?” (p. 50). Devitt thinks there is no problem in specifying which relation to characterize, or at least there is no special problem here. All we need do is identify some obvious and uncontroversial examples. And to do that, it is perfectly OK to rely on our intuition, with the clear understanding that later theorizing can lead us to reclassify some of the examples we thought were intuitively obvious. Here’s how Devitt develops the idea:

The first stage of a theory of any property F or relation R involves identifying some apparently uncontroversial examples where F or R is instantiated and some apparently uncontroversial examples where it is not instantiated. These examples can then be examined in the second stage to discover what is common and peculiar to F or R in the hope of determining its nature. In that first stage we should consult those most expert at identifying cases of F or R. If we are concerned with, say, being a gene, being an echidna, or being an isotope of, then we can look to scientists for the identification. But when our concern is with being referred to by, it is doubtful that anyone is more expert at identification than the folk. So these most basic folk intuitions about reference, intuitions that identify paradigm cases of it, play a role in the first stage of the theory of reference . . . Even these basic intuitions are not infallible. Theorizing at the second stage can lead to the rejection of results at the first stage: apparently uncontroversial examples turn out to be controversial; whales are not fish after all; tomatoes are not vegetables; unacceptable strings of words turn out to be grammatical. There is even less reason to think that any richer folk intuitions or theories about the nature of reference must be true. (pp. 49–50)
So there is, Devitt thinks, no need to worry about which relation a theory of reference is trying to characterize, and no need to be concerned that different branches of science might invoke different word–world relations. The sciences have no role to play in the first stage of the proto-science project, as Devitt construes it, though they do come in at the third stage where the task is to show “that the word–world relation that is reference is scientifically useful” (p. 50). Is this account of what a theory of reference is trying to do naive? Far from it. Rather, I think, it is straightforward, sensible, and initially quite appealing. But I also think it is seriously under-described, and that when we start asking about some of the missing details much of the appeal of the idea dissolves. Let’s begin by focusing on the “apparently uncontroversial examples” that we identify by relying on “intuitions that identify paradigm cases.” Which intuitions are these? Devitt mentions the relation between ‘Jemima’ and Jemima and between ‘cat’ and cats as paradigm cases of the reference relation, and the relation between ‘Jemima’ and Fido and between ‘cat’ and dogs as clear cases of non-reference. So presumably the intuitions that give rise to these judgments are among the “basic folk intuitions” with which the project begins. But what about the sorts of intuitions that Kripke and Putnam relied on in their critique of description theories of reference – intuitions about Gödel and Schmidt or about Twin Earth? Do they count as “basic folk intuitions”? There is good reason for Devitt to say yes here. For it was intuitions like these that enabled philosophers to tease apart the description theory and the causal-historical theory – to see clearly that there were cases where the theories had quite different implications – and to argue that description theories were problematic for
a range of cases. So if intuitions like these are excluded in the first stage of Devitt’s project, then there is a real risk that the second stage of the project will not be feasible. The job of that second stage, recall, is to “discover what is common and peculiar” to all the examples picked out in the first stage. But if in the first stage we are restricted to intuitions like those about the relation between ‘cat’ and cats (in the actual world but not, for example, in worlds where there are animals just like cats with very different evolutionary histories), then we are likely to find that there are lots of relations that all the examples instantiate. For any finite set of examples, there will, of course, be an infinite number of relations that they instantiate. I assume that Devitt would appeal to methodological criteria – criteria which are hard to state but easier to use – in order to rule out most of these. However, those criteria surely will not rule out either relations characterized by standard versions of the description theory or relations characterized by standard versions of the causal-historical theory. We’ll need intuitions like those about Gödel and Schmidt and those about Twin Earth to rule out these alternatives. Without them, the second stage simply isn’t viable. So it looks like Devitt will have to say that Gödel/Schmidt intuitions and Twin Earth intuitions will count as “basic folk intuitions” in the first stage of his project.4 That, however, poses problems of a different sort. For there is now good reason to think that, despite the important role they played in the emergence of the causal-historical theory, the folk are hardly in agreement on these “basic folk intuitions.” Rather, it seems, the intuitions are culturally local. Machery et al. (2004) gave a pair of vignettes closely modeled on Kripke’s Gödel/Schmidt case to two groups of fluent English speakers.
One group were Americans whose cultural background was European; the other group were Hong Kong Chinese at the University of Hong Kong. The majority of Americans in this study, like the overwhelming majority of analytic philosophers (whose cultural background is also European!), had intuitions compatible with the causal-historical theory and incompatible with the description theory. The majority of Chinese, however, had intuitions compatible with the description theory and incompatible with the causal-historical theory. This is, to be sure, only a single study, and much more work remains to be done. But if these results are robust and generalizable, then Devitt’s project is in trouble. Without Gödel/Schmidt intuitions and their ilk, he won’t have enough cases to pick out a unique relation whose nature will be studied in the second stage of the project. But if he wants to include Gödel/Schmidt intuitions, he’ll have to offer some principled reason for excluding the intuitions offered by one cultural group or the other. And it is, to put it mildly, less than clear what that reason might be. A second way in which Devitt’s project is under-described emerges when we ask what the theorist is supposed to do if it turns out that the “basic folk intuitions” pick out quite different word–world relations for different sorts of terms. This is hardly an unexpected outcome. Indeed, Devitt himself conjectures that description theories will turn out to be correct for some words (including ‘bachelor’ and ‘vixen’ [p. 52]) and he has argued at length that causal-historical theories are correct for many other terms (Devitt 1981). At first blush, it might seem that there is no real problem here. If reference turns out to be two relations (or three or four), so be it. Isn’t that entirely analogous to what happened in the case of that old philosophical chestnut, jade? In the first stage of the inquiry into the
nature of jade, the experts (jewelers and gem cutters, I suppose) got to pick out paradigm cases of jade. But when scientists inquired into the nature of the stones picked out, it turned out that there were two quite different minerals in the sample: jadeite and nephrite. So there are two kinds of jade, but this creates no particular problem. If it turns out that there are two (or more) kinds of reference, the analogy suggests that this will be equally unproblematic. The analogy breaks down, however, when we ask how the theorist is supposed to deal with contested examples. If the jewelers and gem cutters disagree about a particular stone, we can give it to the relevant scientists and let them determine what it is. If it’s jadeite or nephrite, then it’s jade. If it is some other mineral, it’s not jade. But now what are we supposed to do about contested cases of reference? Some people think that many terms used in past theories do refer, and that “a causal theory (or perhaps a part-causal part-description theory) of reference is often more plausible for the words of past theories” (p. 47). Others think that some version of the description theory is the right one for these terms, and that the terms do not refer at all. Given the controversy, obviously intuitions about the reference of these terms can’t be used to pick out “paradigm cases.” So presumably we’ll have to let the theory decide, just as we did in the case of contested cases of jade. But how, exactly, is the theory supposed to help us? In these cases, the term in question typically does have the sort of causal (or causal-historical) links to things that really exist that were found in uncontested cases. Historical tokens of ‘phlogiston’, as Philip Kitcher points out, had a rich web of causal relations with oxygen (Kitcher 1993).
However, it is also the case that these terms are associated with descriptions of the sort that play a central role in fixing reference in other uncontested cases (cases like Devitt’s ‘bachelor’ and ‘vixen’ perhaps). But those descriptions aren’t satisfied (or even approximately satisfied) by anything. So if the causal or causal-historical relation is the one which determines reference for these terms – if that’s the kind of reference they have, so to speak – then they do refer. If the description-theoretic relation is the kind of reference they have, then they don’t refer. But which kind of reference do they have? As far as I can see, Devitt’s account doesn’t even begin to address that question. And without an answer, the project Devitt describes is of no help at all in characterizing “the nature of the reference relation” in just those cases where such a characterization was thought to be philosophically important. As we saw earlier, Devitt agrees that most people actually engaged in debates over which theory of reference is the right one seem to be doing folk semantics. The three-stage project he describes is not the one he thinks these people are engaged in, but the one he thinks they should be engaged in. It’s the one that will tell us “the nature of the reference relation.” Here we disagree sharply. I don’t think the project he has described has any serious prospect of telling us “the nature of the reference relation” and I doubt that there is any way of modifying the account that will enable it to do that. Moreover, I’m inclined to agree with deflationists like Field (1986, 1994) and Horwich (1990) who maintain that the quest for the nature of the reference relation is misguided, since reference is not a substantive relation at all. Obviously this is not the place to offer a defense of that view and, truth be told, I have little to add beyond what Horwich and Field have already said.
While Devitt and I have very different views about reference, we agree that using the strategy of semantic ascent and then appealing to a theory of
reference is not a good way to make progress in the eliminativism debate or in debates about other ontological issues. Here we both disagree with Frank Jackson. So it is time to take a look at Jackson’s view.
Jackson

Jackson endorses my account of how the eliminativism debate became “embroiled” in the theory of reference. (I wish I had thought of using that word!) He also endorses the conclusion I once wanted to draw: “It matters for the truth of eliminativism which theory of reference is correct. Ergo, if that question has no one ‘correct’ answer, there is no one ‘correct’ answer to whether or not eliminativism is true or false” (p. 64). Moreover, as Jackson was instrumental in persuading me years ago, the problem is not restricted to eliminativism: “if [the] argument works for the propositional attitudes, it works for the big bang, cures for AIDS, Venus, and so on” (p. 62). For the reasons I was groping toward in Deconstructing, and managed to express more clearly (I hope) in Bishop and Stich (1998), I am no longer inclined to think that the truth of eliminativism, or of any other ontological claim, is linked in any interesting way to the quest for the correct theory of reference. But here Jackson remains unconvinced. He still maintains that the two projects are importantly connected. Jackson thinks “that the best policy is to be easy-going about what counts as representation” (p. 65), and he counsels the same attitude toward debates about which theory of reference is correct, as these debates have been pursued in much of the philosophical literature. “Be the theory of reference proto-science, or the theory that best captures reflective intuitions, or a bit of both, there is no reason to hold that there is one theory of reference” (p. 66). But this casual attitude does not extend to the notion of reference used in debates over ontology. “Perhaps . . . there are many notions of reference, but there had better be a preferred notion in play for the purposes of raising ontological questions” (p. 67). And in the last third of his chapter, Jackson tells us how to find that theory.
He doesn’t actually offer a theory, since “that’s a job for a book”; what he does offer is “a criterion for choosing among a range of candidate word–world relations, the one which is of interest when we do ontology” (p. 68). The picture of language that is central to Jackson’s story is inspired by Lewis, Grice and ultimately by John Locke. And the version that Jackson sketches is, as one would expect, subtle, sophisticated, and richly informed by the ongoing philosophical literature on these matters. The “fundamental point” is that “language is a learnt, convention-based system of representation” and that “our possession of language is how we are able to make public and communicate the contents of our mental states” (p. 69). “Very roughly,” Jackson tells us, “we start with mental states that represent how things are, most especially beliefs and thoughts that things are thus and so, and somehow hit on an implicit understanding that certain noises and marks are to be given the task of making public these representational contents” (p. 69, italics added). This requires a coding system, and “a term’s reference is a key part of this coding system” (p. 69). Jackson then sketches how we can use facts about this coding system to zero in on the right reference
relation. To put the matter very crudely indeed, the right reference relation is the one that will enable the coding system to do its job. There is much to admire in Jackson’s account, and much to debate. But, as I see it, even if Jackson is right, his account will not address the problem that motivated the project in the first place, for his account does not help at all in trying to decide whether the eliminativists’ ontological conclusion follows from their two explicit premises. Rather, the bump in the rug just moves to another place. Perhaps the easiest way to see the point is to note that if one accepts Jackson’s general view of the relation between language and thought, then there is no need to say or write anything at all in posing the eliminativists’ argument. One could simply run through the argument in thought – in foro interno as Ernie Sosa might say. I am painfully aware that this is possible, since I have done it myself many, many times. Let me suppose that beliefs and desires are the posits of a tacit, folk-psychological theory, I think to myself, and that the theory is seriously mistaken for the following reasons . . . Would it follow that beliefs do not exist? Or should I conclude that beliefs do exist, but that folk psychology is wrong about many of the claims it makes about them? Having a detailed account of the word–world relation picked out by Jackson’s criterion would do me no good at all in attempting to answer this question, since that account tells me about words in a public language – “a learnt, convention-based system of representation” that we use “to make public and communicate the contents of our mental states.” It tells me what “certain shapes on paper or sound wave patterns in the air” (p. 69) refer to. But I haven’t scribbled any shapes on paper or emitted any sound wave patterns. I’m sitting quietly, thinking about eliminativism. At this point I can imagine someone protesting that I’m being unfair to Jackson.
To be sure, I can think about the eliminativists’ argument without saying or writing something, the critic concedes, but I am still using words in a public language; I am using them in thought. There is no way of thinking the italicized thoughts in the previous paragraph without those words (or the words of some other natural language) running through my mind. So I can use Jackson’s account of the reference of public language words to determine the reference of my thoughts about beliefs. And that will enable me to determine which conclusion follows from the premises. Though the issues raised by this protest are complex and controversial, I am inclined to think the critic is right. Indeed, one can’t think that beliefs and desires are posits of a tacit, folk-psychological theory unless one thinks it in some natural language or other. But the crucial point here is that while I can, and do, agree with the critic, Jackson can’t. On the Locke–Grice–Lewis picture of language that Jackson embraces, “we start with mental states [most especially beliefs and thoughts that things are thus and so] that represent how things are” (p. 69). These thoughts “have (reasonably) determinate contents which we can report using sentences,” where contents are taken to be “states of affairs or propositions” (fn. 18). It is the content of our thoughts about beliefs that determines what the natural language term ‘belief’ refers to. So if Jackson were to appeal to his account of the reference of the term ‘belief’ to determine the content of thoughts about beliefs, he’d be trying to pull himself up by his own bootstraps. To oversimplify a bit, on Jackson’s account, the reference of terms in natural language is determined by the reference of thoughts. But if he is right about this, it does not help us at all, since we now
need to know what determines the reference of thoughts. And about this, Jackson’s theory of reference tells us exactly nothing.
Reply to Egan

Egan calls her rich and interesting paper “Is There a Role for Representational Content in Scientific Psychology?” and she tells us that “the question that forms the title of this paper is . . . addressed directly to Stich.” It’s an important question, certainly, and an appropriate one since, as she notes, my current position on the matter is less than clear. In From Folk Psychology to Cognitive Science I offered an account of our ordinary practice of ascribing content to mental states like beliefs. On that account, content ascriptions have what Egan dubs “the R properties” – they are vague, context-sensitive and observer-relative; they are influenced by the ideological similarity between the attributor and the target and by the similarity between the reference of the terms that the attributor uses to express the belief and the reference of the terms that the target would use to express it. I went on to argue that a taxonomy of mental states which relies on commonsense content attributions is ill suited for use in a scientific theory whose ultimate aim is to explain behavior. If that’s right, I concluded, then content has no role to play in scientific psychology. But in Deconstructing the Mind, published 13 years later, my view was much more guarded. “The jury,” I said, “is still out on the question of whether successful science can be constructed using intentional categories” (p. 199). What prompts Egan’s question is not that I changed my mind, but that I haven’t offered any clear and focused explanation of why I changed my mind. I haven’t said why I am no longer convinced by the arguments in Folk Psychology aimed at showing that the answer to Egan’s title question is no.
Some philosophers might reject those arguments because they believe that Folk Psychology mischaracterized the commonsense concept of content, and that on a correct account a content-based taxonomy would not have the R properties.5 That’s a concern that I think should be taken seriously; I’ll return to the topic later. But it is not a concern that Egan shares. She thinks that my arguments aimed at showing that a content taxonomy will have the R properties are persuasive (p. 16) and that Folk Psychology “established” the point (p. 19). She also says that the case against content in Folk Psychology is a “tour de force” (p. 16). (Wow! A tour de force! Thanks, Frankie.) But one of the many valuable things I’ve learned from Egan is that “tour de force” is not a success term. Egan thinks that the arguments aimed at showing that content has no role to play in scientific psychology fail, and that content does play a crucial role in the explanatory strategy of scientific psychology. To make the case, Egan focuses on computational psychology and sketches an account of the explanatory strategy of that central branch of psychology in which content does play an important role. On her account, content does not play an individuative role in computational cognitive theories. Rather, content or semantic interpretation is necessary to explain how, in a given context, the “abstractly characterized” processes of a computational theory “constitutes the exercise of a cognitive capacity” (p. 21, emphasis in the
original). “The semantic interpretation forms a bridge between the intentionally characterized explananda of the theory and the abstract, mathematical characterization of the mechanism that constitutes the explanatory core of a computational theory” (p. 22). Though she does not make the point explicitly, I think it is clear where Egan thinks that the anti-content arguments of Folk Psychology fail. Those arguments were aimed at showing that the laws or principles invoked in the explanatory core of a computational theory should not be couched in terms of content, and on this Egan agrees. But what the Folk Psychology account failed to see, Egan suggests, is that appeal to content is necessary if computational psychology is to offer a satisfying account of the link between those abstractly characterized processes and “the questions that define a psychological theory’s domain” – questions that are “typically couched in intentional terms” (p. 21). Egan’s account of the explanatory strategy of computational cognitive theories, which she elaborates at much greater length elsewhere (Egan 1992, 1995, 1999, 2003), is an important contribution to the philosophy of psychology. It is well informed, insightful, carefully defended, and vastly more sophisticated than the story about computational psychology that I told in Folk Psychology. It isn’t the only game in town, however. Other writers, most notably Cummins (1989) and Ramsey (2007) have offered competing accounts that are also far more sophisticated than anything to be found in Folk Psychology. However, I can’t appeal to these competing accounts to buffer me from Egan’s critique, for while Cummins and Ramsey differ with Egan on many important details, they agree that content has an important role to play in the explanatory strategy of computational psychology. 
So one might be tempted to conclude that if any of the accounts in this vicinity are right, then one of the core conclusions of Folk Psychology is wrong: contrary to what I argued there, intentional categories do have an important role to play in cognitive science. But I think that this would be too hasty. The issues here are rather more complicated. To see why, it will be useful to take a closer look at Cummins (1989). In that book, as I’ve noted, Cummins gives an account of the explanatory strategy of the computational theory of cognition, an account in which content plays an important role. However, Cummins does not assume that the notion of content that is of use in computational psychology is the same as the one invoked in commonsense psychology. Indeed, in the first chapter of the book he warns against that assumption. To suppose that “commonsense psychology” (“folk psychology”), orthodox computationalism, connectionism, neuroscience, and so on all make use of the same notion of representation seems naive. Moreover, to understand the notion of mental representation that grounds some particular theoretical framework, one must understand the explanatory role that framework assigns to mental representation. It is precisely because mental representation has different explanatory roles in “folk psychology,” orthodox computationalism, connectionism, and neuroscience that it is naive to suppose that each makes use of the same notion of mental representation. (Cummins 1989: 13–14)
Later in the book, Cummins argues that the assumption is not just naive, it is false: “The kind of meaning required by the CTC [the computational theory of cognition] is, I think, not Intentional Content [the kind of content invoked in commonsense psychology]
any more than entropy is history. There is a connection, of course, but at bottom representation in the CTC is very different from intentionality” (Cummins 1989: 88). If Cummins is right, then neither Egan’s account of the explanatory strategy of computational psychology nor competing accounts like Cummins’ and Ramsey’s pose a challenge to the arguments in Folk Psychology that were aimed at showing that content had no role to play in scientific psychology. For those arguments were quite explicitly focused on the sort of content that plays a role in commonsense psychology, not on some perhaps related but importantly different notion of content invoked in computational theories of cognition. So if Cummins is right, the question that Egan poses in her title is ambiguous. On one reading it is asking about the sort of representational content invoked in commonsense psychology, and on the other it is asking about an importantly different sort of content. It is entirely possible that the answer to her question is no on the first reading and yes on the second. Of course this irenic outcome presupposes that Cummins is right about the difference between representational content as it is invoked in commonsense psychology and representational content as it is invoked in computational psychology. Egan might not grant that the two notions of content are different, but I am inclined to think she should. “A semantic interpretation of a computational system,” she tells us, “is given by an interpretation function that specifies a mapping between equivalence classes of physical states of the system and elements of some represented domain. . . . In computational models of perception, the content ascribed to internal states of the device will be determined by the distal properties tracked by these internal states” (pp. 21–2).
On this account, the ascription of content to states in the visual system will not have some of the central features that, I argued in Folk Psychology, characterize the ascription of content in commonsense psychology. Consider, for example, the role of what I called “ideological similarity.” In Folk Psychology, I proposed a thought experiment inspired by the real case of Mrs T, a woman suffering from a degenerative brain disease (Stich 1983: 53–60). For much of her life, Mrs T, who was born in 1880, had an avid interest in politics and was well informed on the topic. She was deeply shocked by the assassination of President William McKinley in 1901. However, in her seventies her illness began to manifest itself. To make the point as clearly as possible, I assume that the only effect of the disease was a progressive loss of memory. In the early stages of her illness, Mrs T had trouble remembering recent events like who had been elected in a senate race she had been following or where she had left her knitting. As the affliction got worse, she had trouble remembering who the president was, who George Washington was, or where the White House was located. Shortly before her death, she was asked “What happened to McKinley?” and immediately responded, “He was assassinated.” But she could not say whether assassinated people die, or what death is, nor even whether she herself was dead. I maintained that commonsense psychology would no longer attribute the content McKinley was assassinated to the mental state underlying Mrs T’s ability to say “McKinley was assassinated,” and argued that this was because commonsense content attribution is sensitive, in a more or less holistic way, to similarity between the attributor’s entire stock of beliefs and the target’s entire stock. Not everyone agrees with this interpretation of the thought experiment, but Egan is not among the dissenters. She believes that the
arguments in Folk Psychology have established that content ascriptions are sensitive to ideological similarity. What makes this important for current purposes is that, on Egan’s account, attribution of content to internal states in computational models of perception is not ideologically sensitive. Recall that, according to Egan, in these models “the content ascribed to internal states of the device will be determined by the distal properties tracked by these internal states.” And Mrs T’s loss of memory would have little or no effect on that tracking relationship. Thus a computational modeler of Mrs T’s visual system would characterize the content of states in that system in exactly the same way before the onset of her illness and after it had become very severe. So it looks like the irenic response to Egan’s title question is indeed a live option. The answer is no if the representational content in question is the sort presupposed by commonsense psychology and yes if it is a notion of representational content like the ones sketched by Egan or Cummins. I don’t think it would be wise to leave the matter here, however, since there is another line of argument in Egan’s paper that can’t be escaped by invoking the ambiguity of “representational content.” Thus far we’ve focused on Egan’s account of computational psychology, which occupies much of her paper. In her final section she argues that, while beliefs, desires, and the other propositional attitudes that loom large in commonsense psychology do not now, and likely never will, play any role in computational psychology, “other branches of scientific psychology do invoke beliefs and desires” (p. 24, italics in the original). If that’s right, then it seems that the answer to Egan’s title question is yes, even when “representational content” is understood as the kind of content that plays a role in commonsense psychology. 
The two branches of psychology that Egan mentions are attribution theory and the part of developmental psychology that “attempts to characterize the commitments that infra-linguistic infants bring to their interactions with the world” (p. 25). I don’t think that either of these is a good example to make her point. Attribution theory has hardly been flourishing in recent years and, while developmental psychology most certainly has been flourishing, Egan’s claim that “developmental theories attribute to infants beliefs . . . about how objects move in space” is hardly uncontroversial. But I’m not inclined to quibble here since, if Egan’s examples are not optimal, there are lots of other examples that she might invoke. In cognitive social psychology, work in the “heuristics and biases” tradition has uncovered startling and important facts about the ways people form beliefs and preferences;6 in developmental and abnormal psychology, work on how normal children acquire an understanding of mental states, and why children with disorders like autism don’t, is up to its ears in talk of beliefs and desires;7 the list would be easy to expand. I knew relatively little about this work when I wrote Folk Psychology – indeed much of it had not yet been done – and I rather naively assumed that all of “serious, scientific psychology” would gradually evolve into computational theories. But clearly that has not happened, and I see no reason at all to suppose that it will. Where does all this leave us? Well, if the relevant parts of cognitive social psychology, developmental psychology and abnormal psychology count as scientific psychology – and I most emphatically maintain that they do – and if, as seems to be the case, these branches of psychology invoke representational content, then on either reading of Egan’s title question, the answer is yes.
Let me close my reply to Egan’s paper with a caveat, a puzzle, and a concern. It certainly seems to be the case that various flourishing branches of psychology invoke familiar mental states like beliefs and desires, and exploit the common sense representational content of these states in a variety of ways. But appearances can be deceiving. One of the lessons of Cummins’ penetrating analysis of the computational theory of cognition is that, while it might seem that these theories are invoking the commonsense notion of content, they are actually relying on an importantly different notion. Might something similar be true in cognitive social psychology, developmental psychology, etc.? It strikes me as unlikely, but without the sort of careful analysis of the explanatory structure of these sciences – analyses of the sort that Cummins, Egan and Ramsey have given us for computational psychology – it is hard to be sure. Unfortunately, while philosophers have been avid consumers of work in cognitive social psychology and developmental psychology, there has been almost no work on the explanatory structure of these sciences. These are important projects for a variety of reasons, and it is to be hoped that philosophers will soon pursue them. But until that work is done, it would, I think, be wise to hedge our bets on whether these sciences are making substantive use of the commonsense notion of representational content. That’s the caveat. The puzzle is posed by a pair of facts. First, a number of branches of scientific psychology that apparently invoke commonsense content are clearly flourishing. Second, if the account in Folk Psychology is even roughly on the right track, commonsense content attribution has properties that would seem to pose serious obstacles to a fruitful science. It is vague, observer-relative and context-sensitive. 
Moreover, folk practice sometimes classifies together mental states that differ dramatically, attributing the same content to psychological states in Einstein’s head, in a brain-damaged person’s head and in a dog’s head.8 Somehow the sciences in question succeed in sidestepping the problems that a content-based taxonomy of mental states might engender. How do they do it? As I note in my reply to Godfrey-Smith, his work offers some suggestive hints. But we won’t have a completely satisfying answer until these parts of psychology attract the sort of detailed and insightful philosophical analyses that Cummins, Egan, and Ramsey have provided for computational psychology. Finally, the concern. Throughout this response I have been assuming, along with Egan, that the analysis of commonsense content attribution offered in Folk Psychology is more or less on the right track. But I think that’s a dangerous assumption. In that analysis I used a philosophical method which was standard at the time and had been widely used in philosophy since Plato. I conjured imaginary scenarios, asked what “we” would say about them, and relied on my own intuitions (and the intuitions of a handful of colleagues and graduate students) as the sole source of evidence. In recent years I have become increasingly skeptical of that method. A growing body of literature suggests that the intuitions of mostly white, mostly Western, mostly male academics, all of whom have survived the curious selection practices that lead to graduate school fellowships and jobs at universities, often do not coincide with the intuitions of people who are not members of the philosophers’ club.9 If our project is to understand how ordinary folk go about the process of content attribution, then these findings raise a serious concern: Why should we think philosophers’ intuitions are a good source of evidence?
Reply to Cowie

In a memorable passage in Quine’s Word and Object – one of many – Quine reminds us that, in “the immortal words of Adolf Meyer, where it doesn’t itch don’t scratch” (Quine 1960: 160). For many years, I succeeded in following Meyer’s advice in dealing with the concept of innateness. I was aware, of course, of the difficulties that philosophers, cognitive scientists, and biologists were encountering in their efforts to give an account of innateness. And I had considerable sympathy with Paul Griffiths’s suggestion that the notion of innateness was a muddled bit of folk biology that should be banished from serious science. Indeed, on two occasions I invited Griffiths to make the case in my Rutgers graduate seminar, which he did, with great learning and enthusiasm. At the same time, as Cowie notes, I acted as though I was “quite committed to the concept’s remaining part of the discourse of cognitive science” (p. 75) – sponsoring conferences on nativism, editing volumes on The Innate Mind (Carruthers, Laurence and Stich 2005, 2006, 2007), and writing books and papers in which innate mental mechanisms were invoked to explain important features of mindreading (Nichols and Stich 2003) and moral cognition (Sripada and Stich 2006). There was clearly a tension there, and now and again I worried about it a bit. But it wasn’t until I read Cowie’s rich and interesting paper that the tension rose to the level of an itch. Fortunately, Cowie is a full-service philosophical interlocutor; in addition to generating an uncomfortable itch, she also provides a comforting scratch. By examining the recent history of the gene concept and the not so recent history of the concepts invoked in classificatory systems in organic chemistry and its predecessors, she makes a compelling case for the conclusion that “good things can come from bad concepts” (p. 97).
She goes on to argue, convincingly I think, that modern discussions of innate traits look a lot like eighteenth-century discussions of plant materials . . . And just as late eighteenth-century chemists knew that their taxonomic principles were wrong ones, yet couldn’t burn the boat they were fishing from, we can know that the innate/not innate distinction is not quite the right one to make, yet keep on making it nonetheless. (p. 96)
Philosophical prudishness, she urges, should give way to “vulgar pragmatism” since “premature elimination can be disastrous” (p. 97). So, with the itch scratched, I’ll go on invoking the notion of innateness since, as Cowie eloquently argues, “you can’t investigate something if you don’t have a way of thinking about it,” and “the concept of innateness enables us to think about developmental phenomena that we don’t yet fully comprehend” (p. 96). The vulgar pragmatist strategy that Cowie advocates can, of course, be used to deflate eliminativist worries focused on other concepts, including those of intentional psychology. On the last page of Deconstructing the Mind, I argued that “what ‘legitimates’ certain properties (or predicates, if you prefer) and makes others scientifically suspect is that the former, but not the latter, are invoked in successful scientific theories.” I went on to say that “the jury is still out on the question of whether successful science can be constructed
using intentional categories” (Stich 1996: 199). But that was then, and this is now. As I note in my reply to Egan, it now strikes me as undeniable that a number of different branches of psychology are producing exciting and important discoveries that we have no idea how to describe or explain without invoking intentional categories. So, to respond to my former eliminativist self, and to those who are still tempted by views of that sort, I am delighted to borrow a sentence from Cowie. “It’s simply not possible to design, conduct and interpret the very experiments one needs to perform in order to show the way forward without using some concepts – and if the bad old concepts are the only ones available, well: it’s put up or shut up” (p. 96).
Reply to Goldman

In a number of publications, Shaun Nichols and I have argued that the term ‘simulation’, as it is used in the literature on mindreading, picks out no natural or theoretically interesting category, and we’ve urged that the term be retired (Stich and Nichols 1997; Nichols and Stich 1998, 2003). The main aim of Goldman’s paper is to respond to this challenge by providing a “defense of the naturalness, robustness, and theoretical interest of simulation” (p. 138). But he also has a secondary aim, which is to argue that there is “a serious lacuna” in the Nichols and Stich book, Mindreading (2003). We have, according to Goldman, failed to discuss the literature in cognitive neuroscience, though “this is where the best evidence for simulation, including simulation-based mindreading, resides. It is a major omission of Nichols and Stich (2003) to neglect cognitive neuroscience” (p. 138). Goldman’s contention that simulation is a natural, robust, and theoretically interesting category raises important issues, and most of this reply will be devoted to examining Goldman’s defense of that claim. But before getting on to that, I need to say something about Goldman’s accusation that Nichols and I neglected important evidence. From the edgy tone of Goldman’s remarks about our “grudging appreciation of simulationism’s virtues” in our “seemingly endless stream of articles” (p. 137), the reader might perhaps infer that Nichols and I had intentionally ignored relevant neuroscience literature, or worse that, like Jerry Fodor in some of his moods, we believed that neuroscience could be of no value in the study of mindreading, or in addressing broader issues about how the mind works. Nothing could be further from the truth. Here is what we say on the issue:

On our view . . . findings about the structure and functioning of the brain can, and ultimately will, impose strong constraints on theories of mindreading of the sort we will be offering. For the moment, however, this . . . is of little practical importance, since as far as we have been able to discover, there are few findings about the brain that offer much guidance in constructing the sort of cognitive account of mindreading that we’ll be presenting. . . . In principle, our stance regarding evidence is completely eclectic. We would have no reservations at all about using evidence from neuroscience or from history, anthropology or any of the other social sciences to constrain and test our theory, though we have found relatively little in these domains that bears on the questions we’ll be considering. (Mindreading, pp. 11–12, emphasis added)
Clearly, Nichols and I have no principled objection to using evidence from neuroscience in deciding among alternative accounts of mindreading. Were we, then, just slipshod scholars who failed to realize that the many interesting and important studies Goldman cites were there to learn from? Here again, the charge is without foundation. In Goldman’s recent book on mindreading (Goldman 2006), he draws the useful distinction between “low-level mindreading” and “high-level mindreading”. Roughly speaking, high-level mindreading is the formation of beliefs about propositional attitudes like beliefs, desires, intentions, and decisions, while low-level mindreading is the formation of beliefs about sensations, like feelings of pain, and about emotions, like fear, disgust, and anger.10 Face-based emotion recognition, which Goldman discusses in his chapter in this volume, occupies almost the entire chapter on low-level mindreading in Goldman’s book. But, as Goldman notes, cases of low-level mindreading are “a somewhat atypical sector of mindreading, different from the cases usually treated in the literature, especially the philosophical literature. The stock examples in the literature are attributions of garden-variety propositional attitudes: belief, desire, intention, and so forth” (p. 145, italics added).11 And as Goldman argues in his book, “there seem to be different mechanisms for mindreading [emotions and feelings] . . . than for mindreading the attitudes, more primitive and automatic mechanisms” (Goldman 2006: 20). In our book, and in our “seemingly endless stream of articles” on mindreading, Nichols and I followed the lead of the philosophical literature and focused on high-level mindreading. The theory we defend in the book is an attempt to characterize the psychological mechanisms underlying high-level mindreading. We do not even try to offer an account of the quite different mechanisms underlying low-level mindreading.
To be sure, low-level mindreading is an interesting and important phenomenon. But it is not what our book was about. Now here are some striking facts about Goldman’s chapter:
1 The longest section, “Simulation and Motor Cognition,” is devoted to “simulation in a certain type of non-mindreading activity” (p. 141, emphasis added).
2 The second longest section, “Simulation and Face-based Emotion Attribution,” is devoted to low-level mindreading.
3 There is not a single sentence aimed at showing that neuroscientific results are relevant to assessing theories of high-level mindreading of the sort that Nichols and I proposed.
Of course, there were limits imposed on the length of contributions to this volume. But even in the chapter devoted to high-level mindreading in his book, where presumably length constraints were not a major concern, I have been unable to find a single reference to a finding in neuroscience – published prior to mid-2002, when the Nichols and Stich volume went to press – that would be relevant to assessing the sort of theory that Nichols and I develop in our book.12 In light of all this, the only reasonable verdict on Goldman’s accusation that our failure to discuss evidence from cognitive neuroscience is a “major omission” is that it is completely unwarranted. If the charge is that we have some principled antipathy to neuroscience, my rebuttal could not be clearer: we say quite clearly that neuroscience
“can and ultimately will impose strong constraints on a theory of mindreading.” If the charge is that we failed to take account of relevant research in neuroscience, my reply is that, when our book went off to the publisher, there was no relevant published research in neuroscience to report. So much for Goldman’s canard about there being a serious lacuna in our book. Let me turn, now, to the more interesting and substantive issue that Goldman raises. Is simulation a natural and theoretically interesting category? To start us off, a bit of history will be helpful. Prior to 1986, most philosophers of mind assumed that mindreading – the process in which we attribute mental states to people, predict future mental states, and predict behavior on the basis of these attributions – was subserved by a commonsense psychological theory, often called “folk psychology.” That assumption played an important role in many philosophical debates. Functionalists maintained that folk psychology determined the meaning of commonsense mental state terms. Eliminativists agreed, though they went on to argue that folk psychology was radically false and thus that commonsense mental state terms did not denote anything. In a pair of important papers, published in 1986, Robert Gordon and Jane Heal suggested another way in which some predictions of other people’s mental states might be made. Rather than use a theory to predict what someone will decide to do, we could exploit the fact that we have a decision-making system that is similar to theirs. So all we need to do is pretend to be in the target’s situation – having her beliefs and her desires rather than our own – and then make a decision about what to do in that situation. Of course, we don’t go on to act on that decision. Rather we predict that that is what the target will decide. Since we use our own decision-making mechanism to simulate the mechanism used by the target, “simulation theory” seemed a natural label for the account.
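The contrast between the two prediction routines described above can be caricatured in a few lines of Python. This is entirely my illustration, not anything from Stich, Goldman, or the simulation literature: the function names, the dictionary format for beliefs and desires, and the toy decision rule are all invented for exposition.

```python
# Illustrative sketch of simulation theory vs. theory-theory for
# predicting a target's decision. All names and data structures here
# are invented for illustration only.

def my_decision_system(beliefs, desires):
    """My own practical-reasoning mechanism: picks the most preferred
    action that is believed to be available."""
    available = beliefs["available_actions"]
    for action in desires["ranked_preferences"]:
        if action in available:
            return action
    return None

def predict_by_simulation(target_beliefs, target_desires):
    """Simulation theory: feed *pretend* beliefs and desires (the
    target's, not my own) into my own decision-making system, and take
    the output 'offline' as a prediction rather than acting on it."""
    return my_decision_system(target_beliefs, target_desires)

def predict_by_theory(target_beliefs, target_desires, folk_psych_laws):
    """Theory-theory: apply internally represented folk-psychological
    generalizations to the target's states, without ever running my
    own decision mechanism on pretend inputs."""
    for law in folk_psych_laws:
        prediction = law(target_beliefs, target_desires)
        if prediction is not None:
            return prediction
    return None

# Pretend inputs: the target prefers tea to coffee and believes both
# are available.
target_beliefs = {"available_actions": ["coffee", "tea"]}
target_desires = {"ranked_preferences": ["tea", "coffee"]}
print(predict_by_simulation(target_beliefs, target_desires))  # prints: tea
```

The point the sketch makes vivid is that in `predict_by_simulation` the predictor reuses the very mechanism (`my_decision_system`) that drives its own behavior, whereas in `predict_by_theory` the work is done by stored generalizations about agents; this is also why the boundary blurs once one asks how similar a candidate process must be to the paradigm for it to count as "simulation."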
The alternative account, which maintained that predictions like this were subserved by a folk psychological theory, came to be known as the “theory-theory.” In 1989, Goldman published a widely discussed paper in which he endorsed and clarified simulation theory, and set out a number of arguments for it and against the theory-theory (Goldman 1989). Soon after, Nichols and I joined the debate with a paper that criticized many of the arguments for simulation theory offered by Gordon and Goldman (Stich and Nichols 1992). That paper also offered the “boxological” sketch of simulation-based decision and behavior-prediction reproduced in Figure 11.1. The sketch apparently did a good job at capturing what advocates of simulation theory had in mind since a number of them reproduced it in their own writings. Goldman himself has reproduced it three times (Goldman 1992, 1993; Gallese and Goldman 1998). In the early years of the debate over simulation theory, many writers – Nichols and myself included – viewed the debate as a two-sided battle which either simulation theory or theory-theory would “win.” But for two reasons this turned out to be a misleading way to think about mindreading. First, it ignored the possibility that the correct account might be a hybrid theory in which some aspects of mindreading are subserved by simulation and others are subserved by a folk psychological theory. As research progressed, many people, including both Goldman and Nichols and I, began to converge on the view that the right account of mindreading is indeed a hybrid account – though there is still plenty of debate about the details. Second, and more directly relevant to the issue at hand, it was far from clear what it would take for one side or the other to “win,” since
[Figure 11.1 is a box-and-arrow diagram whose boxes are labeled: Perceptual Processes, Body Monitoring System, Inference Mechanisms, Beliefs, Desires, Pretend Belief & Desire Generator, Decision-making (Practical Reasoning) System, Behavior Predicting & Explaining System, Action Control, and BEHAVIOR.]
Figure 11.1 A sketch illustrating simulation-based decision and behavior prediction (Stich, S. and Nichols, S. (1992) “Folk Psychology: Simulation or Tacit Theory,” Mind and Language 7: 35–71. Printed with permission from Blackwell Publishing)
there was no clear and agreed on characterization of what to count as a simulation process or mechanism. For the most part, Nichols and I relied on paradigm cases of simulation mechanisms like the one depicted in Figure 11.1. If the correct theory for some aspect of mindreading relied on something similar to those paradigms, we counted it as a victory for simulation theory. This was at best a rough and ready strategy, since we never tried to specify how similar a candidate has to be, or in what respects it has to be similar. But it did enable us to say that some proposed processes were clear examples of simulation while others clearly were not.13 But it gradually became apparent that others were not using the term “simulation” in this way. Rather, the growing group of advocates of simulation argued that just about any process or phenomenon or mechanism that could plausibly be described as a “simulation” in some sense or other counted as a victory for their side, although many of these bore no obvious similarity to mechanisms like the one depicted in Figure 11.1, and had little or nothing in common with each other.14 Frustrated by a situation that made it impossible to have a well-focused argument about the merits of simulation theory – since there was no consensus at all on what simulation is – Nichols and I wrote the passage that Goldman quotes toward the beginning of his paper, in which we urged that the term “simulation” be retired because “the diversity among the theories, processes and mechanisms to which advocates of simulation theory have attached the label ‘simulation’ is so great that the term itself has become quite useless. It picks out no natural or theoretically interesting category” (Stich and Nichols 1997: 299).
212
stephen stich
Goldman apparently has some sympathy with our frustration; he does “not want to defend every application of the term ‘simulation’ that anybody has ever proposed” (p. 138).15 Nonetheless, he thinks there is a natural and theoretically interesting category in this vicinity, and if he is right, then “simulation” would surely be a reasonable name for that category. Giving an account of this category is “a complex and delicate matter” which, Goldman tells us, he “won’t try to cover thoroughly” (p. 139) in his chapter in this volume. For a more detailed account he refers us to the extended discussion in his book. Since, as is often the case, the devil is in the details, I will focus on the account in Goldman’s book. In the book, Goldman carefully sets out a series of increasingly specific and detailed definitions, starting with a first pass at defining “Generic Simulation” and ending with a definition of “Attempted Mental Simulation.” The core idea, as he explains in his chapter in this volume, is that “one process successfully simulates another . . . only if the first process copies, replicates, or resembles the target process, at least in relevant respects” (p. 139). This idea leads to his first definition: “Generic Simulation (initial): Process P is a simulation of another process P′ =df. P duplicates, replicates, or resembles P′ in some significant respects (significant relative to the purposes of the task)” (Goldman 2006: 36). In discussing this definition, Goldman notes that the target activity, P′, “can be merely hypothetical rather than actual” – as when an episode in a flight simulator simulates a crash that never really happens. But this initial definition needs refinement because similarity is a symmetrical relation while simulation is not. An actual crash does not simulate what happens in the flight simulator.
One way to patch the problem, Goldman notes, would be to “require that the simulating process occur out of the purpose, or intention, to replicate the simulated process . . . This won’t quite work, however, because it is doubtful that all simulation is purposeful. Some simulation may be automatic and nonpurposeful” (Goldman 2006: 37). To remedy the problem, Goldman suggests, we can require that one phenomenon counts as a simulation of another “if it is the function of the former to duplicate or resemble the other.” But how, exactly, are we to understand this invocation of the tricky and contested notion of function? To his credit, Goldman admits that he has no answer. “I lack a theory of functions to provide backing for this approach, but I shall nonetheless avail myself of the notion” (Goldman 2006: 37). Which he does, in the following revised definition of Generic Simulation.

Generic Simulation (revised): Process P is a simulation of another process P′ =df.
1 P duplicates, replicates, or resembles P′ in some significant respects (significant relative to the purposes of the task), and
2 in its (significant) duplication of P′, P fulfills one of its purposes or functions. (Goldman 2006: 37)

Generic simulation applies to both mental and nonmental processes. Goldman’s next definition focuses in on simulations that are restricted to mental processes.

Mental Simulation: Process P is a mental simulation of another process P′ =df. Both P and P′ are mental processes (though P′ might be purely hypothetical), and P
and P′ exemplify the relation of generic simulation previously defined. (Goldman 2006: 37–8)

But one more refinement is still required, since the definition of Mental Simulation requires that the simulation actually resemble the target, and “a reasonable version of ST [simulation theory] would not hold that the mental processes of mindreaders always match, or even approximately match, those of their targets. ST, like any plausible theory of mindreading, should tolerate highly inaccurate specimens of mindreading . . . What ST essentially maintains is that mindreading (substantially) consists of either successful or attempted mental simulations” (Goldman 2006: 38). To capture the idea of attempted mental simulations, Goldman offers two more definitions.

Attempted Generic Simulation: Process P is an attempted generic simulation of process P′ =df. P is executed with the aim of duplicating or matching P′ in some significant respects.

Attempted Mental Simulation: Process P is an attempted mental simulation of process P′ =df. Both P and P′ are mental processes, and P is executed with the aim of duplicating or matching P′ in some significant respects. (Goldman 2006: 38, emphasis in the original)

Finally, Goldman tells us, “[t]he term aim in these definitions includes covert or implicit aims, not consciously available to the simulator” (Goldman 2006: 39, emphasis in the original). The appeal to an aim or purpose looms large in Goldman’s account of high-level mindreading. However, as we noted earlier, Goldman thinks low-level mindreading is subserved by “more primitive and automatic mechanisms” (Goldman 2006: 20) like those responsible for face-based emotion attribution, discussed in his article in this volume, and his “case for low-level mindreading . . . was predicated on genuine resemblances between states of the attributor and the target” (Goldman 2006: 150).
His “case for high-level mindreading, by contrast, rests on the ostensible purpose or function of E-imagination, not on the regular achievement of faithful reproduction” (Goldman 2006: 150). I maintain that Goldman’s definitions fail to pick out a natural category either in the case of low-level mindreading or in the case of high-level mindreading. In both cases, the problems are generated by cases of mistaken mindreading, many of which, on Goldman’s account, will not count as episodes of simulation at all. The problem is easiest to see in the case of low-level mindreading. In Goldman’s article he recounts a number of fascinating studies in which brain-damaged patients failed in low-level mindreading tasks at a dramatically higher rate than normal subjects. But even the normal control subjects make mistakes in these experiments, as do we all when we attribute fear, disgust, and anger to other people on the basis of their facial expressions. Now consider three hypothetical cases. In the first case, the target is angry and after a relatively brief glimpse at her face I come to believe that she is angry. The process is subserved by the sort of primitive, automatic mechanism that Goldman posits. In the second case, the target is not really
angry, she is just pretending. Here too, after a relatively brief glimpse of her face I come to believe that she is angry, using the same primitive, automatic process. In the third case, the target is not angry, she is fearful. However, because of the shadow cast on her face by a nearby tree, her facial expression looks more like an anger face than like a fear face. Once again, after a brief glimpse, the same primitive, automatic process leads me to believe that she is angry. On Goldman’s account, the first episode counts as a case of mental simulation. But in the second and third cases there is no “genuine resemblance between the states of the attributor and the target.” So, according to Goldman’s definition, it is not a case of mental simulation at all. Goldman is, of course, free to define terms as he sees fit. However, the issue at hand is whether simulation is a natural and theoretically interesting category, and it is hard to see how a category which includes episodes of accurate face-based emotion recognition but excludes nearly identical episodes of mistaken face-based emotion recognition is either natural or theoretically interesting. In the case of high-level mindreading, Goldman has more wiggle room to accommodate mistaken mental state attributions, since to count as an “attempted mental simulation” a process underlying high-level mindreading need only be “executed with the aim of duplicating or matching” the mental state of the target. Successful matching is not required. Nonetheless, it seems that in many cases of high-level mindreading, there is no reason at all to think that the mindreader has any such aim or purpose. To see the point, let’s once again consider some hypothetical examples. Case 1: You are sitting at a restaurant, wanting to get the check after your meal. As the waiter passes, you glance at your watch, thinking that this will be a discreet way of indicating your desire to him. 
This leads the waiter to form the belief that you want your check. Case 2: You are sitting at the restaurant after your meal, enjoying a pleasant conversation with your dinner companions. You have no desire at all to get the check any time soon. But as the waiter passes you happen to glance at your watch, and as before this leads the waiter to form the belief that you want your check. On the account Goldman proposes in his book (2006: 183–5), this sort of mindreading is subserved by a “generate-and-test” strategy.

The “generate” stage produces states or state combinations that might be responsible for the observed . . . evidence. Hypothesis generation is presumably generated by non-simulative methods. The “test” stage consists of trying out one or more of the hypothesized state combinations to see if it would yield the observed evidence. This stage might well employ simulation. One E-imagines being in the hypothesized combination of states, lets an appropriate mechanism operate on them, and sees whether the generated upshot matches the observed upshot. (Goldman 2006: 184, emphasis in the original)
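Read purely as control flow, this generate-and-test strategy can be given a toy rendering. Everything in the sketch below is invented for illustration – the candidate hypotheses and the lookup table standing in for E-imagination – so it shows the bare shape of the strategy, not anything Goldman proposes in detail:

```python
# Toy sketch of a generate-and-test mindreader. The hypotheses and the
# stand-in for E-imagination are invented for illustration only.

def generate_hypotheses(evidence):
    """Generate stage: propose mental-state combinations that might be
    responsible for the observed evidence (by non-simulative methods)."""
    return [
        {"desire": "get the check", "belief": "a watch-glance will signal this"},
        {"desire": "know the time", "belief": "the watch shows the time"},
    ]

def e_imagine_upshot(hypothesis):
    """Test stage stand-in: E-imagine being in the hypothesized states and
    return the behavior an appropriate mechanism would produce from them."""
    if hypothesis["desire"] == "get the check":
        return "glances at watch as the waiter passes"
    return "glances at watch"

def mindread(evidence):
    """Attribute the first hypothesized state combination whose imagined
    upshot matches the observed evidence; return None if none matches."""
    for hypothesis in generate_hypotheses(evidence):
        if e_imagine_upshot(hypothesis) == evidence:
            return hypothesis
    return None

print(mindread("glances at watch as the waiter passes"))
# → {'desire': 'get the check', 'belief': 'a watch-glance will signal this'}
```

Note that in Case 2 exactly the same procedure runs on exactly the same evidence and yields exactly the same, now mistaken, attribution: nothing in the mechanics distinguishes the successful episode from the unsuccessful one.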
Goldman acknowledges that he “knows of no theoretical analysis or experimental evidence that bears directly on simulation’s role” (Goldman 2006: 184) in this sort of mental state attribution. But, for argument’s sake, let’s assume that his generate-and-test account is right. If all of this generating and testing is going on in the waiter, it is clear that, in most cases at least, the waiter is not consciously aware of it. That’s not a problem for Goldman, since he states very clearly that “a great deal of mindreading, even high-level mindreading, is nonconscious or minimally conscious, so we should allow simulational processes to include E-imaginative states even when the latter are entirely nonconscious” (Goldman
2006: 151). So, if Goldman’s generate-and-test hypothesis is correct, Case 1 is a clear example of successful simulation, since the process the waiter goes through resembles the process the target went through, at least “in some significant respects.” But now what about Case 2? Here, we can assume, the processes going on in the waiter, both conscious and unconscious, are identical with those in Case 1. But in this case the process is not a successful mental simulation, since the waiter gets it wrong. The target never went through a process that resembles the process the waiter went through, since the target doesn’t desire to get his check and never intended to convey anything to the waiter. So is this a case of mental simulation? If it is, then it must be a case of attempted mental simulation, and in order for that to be the case, the process must be executed with the aim or purpose of duplicating or matching the mental state of the target. Does the waiter have any such aim? Surely he is not conscious of having it. Is it, then, one of those “covert or implicit aims, not consciously available to the simulator” (Goldman 2006: 39)? Well, perhaps. But why should we think the waiter has such an aim, even unconsciously? It is hardly needed for the rest of the generate-and-test process to do its job (or fail to), since that process could be triggered automatically, much as low-level mindreading processes are on Goldman’s account. Nor, to the best of my knowledge, is there a shred of evidence suggesting that mindreaders have unconscious aims in cases like this. Certainly, Goldman offers us none. But if there are no unconscious aims – if the process is simply triggered automatically – then the waiter’s mistaken mindreading is not a case of simulation at all. Moreover, this is hardly a special case. Goldman is right that a great deal of high-level mindreading is unconscious. 
And in all cases of unconscious high-level mindreading, the underlying processes might well be triggered automatically, without any unconscious aim or purpose playing a role. If this is how these processes work, then for each sort of mindreading in which successful episodes are subserved by a mental simulation, unsuccessful episodes will not count as simulations at all, even though they are produced by the same mental mechanisms. And if that is how things turn out, then once again it is hard to see why we should regard simulation as a natural or theoretically interesting category. To make a plausible case that mental simulation – as he defines it – is a theoretically interesting category in the study of high-level mindreading, Goldman must give us a convincing reason to suppose that unconscious aims and purposes abound in the processes subserving this sort of mindreading. Perhaps he can do this. But I don’t recommend holding your breath until he does.
Reply to Sterelny

Sterelny’s characteristically inventive and engaging chapter discusses the theory that Shaun Nichols and I develop in Mindreading. However, his goal is not to evaluate our theory but to “parasitize” it by arguing that it strengthens the case for an anti-nativist account of mindreading of the sort he favors (p. 156). As Sterelny notes, the exercise would be of no interest if our theory were hopelessly implausible; it’s worth pursuing only if one agrees with Sterelny’s assessment that our theory is “broadly correct.” Though I had never realized it before, being parasitized is flattering!
In making his anti-nativist case, Sterelny concentrates on two sorts of innateness claims. The first, concept innateness, maintains that intentional concepts, like the concepts of belief and desire, are innate. The second, information (or knowledge) innateness, maintains that “information needed for mindreading is innate” (p. 156). In our book, Nichols and I argued that a number of the mental mechanisms that contribute to mindreading are innate, and that some of these mechanisms require a fair amount of innate information to get the job done. But we were quite reluctant to make any claims about the innateness of the concepts invoked in mindreading. One reason for our reticence was that we knew of no evidence or argument that made a convincing case either for or against the innateness of a specific concept. A second reason was that there is a great deal of controversy about the nature of concepts and also about how innateness claims should be understood, particularly when what is claimed to be innate is a concept or some similar mental state.16 As a result, debates about the innateness of concepts are typically more than a bit obscure. Those who claim that some concept is (or is not) innate rarely tell us what they think concepts are or what they mean by ‘innate’, so what they are claiming is far from clear. Because of this, Nichols and I adopted the policy of avoiding the issue whenever possible. Sterelny is a more intrepid scholar who boldly slogs into the swamp that Nichols and I were hesitant to approach. Though I am full of admiration for his courage, and would, of course, be delighted if it turned out that our theory could make a substantive contribution to the resolution of debates about innateness in this area, I am far from convinced that Sterelny has made much progress in his defense of anti-nativism either about intentional concepts or about the information invoked in mindreading. My reasons for skepticism are quite different in the two cases. 
In the case of concept nativism, it is the obscurity of the issues that leaves me unconvinced. In the case of information nativism, I think that Sterelny may have taken aim at a straw man. It is not entirely clear what conclusion Sterelny is defending in his discussion of concept nativism. In the section headed “The Poverty of the Stimulus,” he ends his remarks on concept nativism with a relatively modest claim: “I doubt there is a special, intractable problem of learning intentional concepts, despite the unobservability of intentional states” (p. 158). Perhaps the only conclusion Sterelny wants to draw is that poverty-of-the-stimulus arguments, which attempt to show that intentional concepts could not be learned from the available evidence, and thus must be innate, are not convincing. Indeed, it is hard to see how such arguments could be convincing in light of the fact that Sterelny notes at the beginning of the sentence I’ve just quoted: “We lack a good general theory of the nature and acquisition of concepts.” A few pages earlier he tells us, quite correctly, that “the whole issue of concepts and their possession is deeply opaque” (p. 157). If there is no agreement on what concepts are, no agreement about the mechanisms or processes that account for the acquisition of concepts, and little agreement about which sorts of acquisition mechanisms or processes would count as nativist or as anti-nativist (or empiricist), then surely it is simply premature to debate whether intentional concepts are innate, since we have no serious idea what we are debating. So if all that Sterelny wants to claim is that poverty-of-the-stimulus arguments about intentional concepts are not convincing, he will get no grief from me.
However, Sterelny’s brief comments about “iceberg concepts” – “concepts which name a syndrome that includes both a mental state and its distinctive behavioral manifestation” (p. 157) – might be interpreted as an attempt to provide at least a sketch of an anti-nativist account of how the concept of belief and concepts for other “more cryptic mental states” can be learned. Here’s how the story goes: With iceberg concepts, “the intuitive gap between observable activity and our concepts for the mental causes of that activity are less wide. You can point to the distinctive manifestation of an itch. Moreover, an agent who has mastered the concept of an itch has mastered the concept of an internal cause of action” (p. 157). Once iceberg concepts have been mastered, they can “facilitate the acquisition of concepts for less overt states. For they prime an agent for the possibility of internal causes of action” (p. 158). All of this may be true. But I don’t think it gets us very far. Though the “intuitive gap” may be less wide in the case of iceberg concepts, it is a gap all the same. We need an account of the mechanism or process that succeeds in crossing this gap by taking the observation of activity as input and producing a concept as output. We also need a motivated way of deciding whether proposed mechanisms and processes count as empiricist (because what they accomplish counts as learning) or nativist (because it doesn’t). We also need an account of the mechanism or process that enables someone who is “primed”17 for the possibility of internal causes of action to acquire18 the concept of belief and of other cryptic states. Since Sterelny proposes no mechanisms or processes and offers no way of deciding which mechanisms are nativist and which are empiricist, I don’t think he’s made much progress in providing an anti-nativist theory of concept acquisition. 
Let’s turn, now, to Sterelny’s discussion of information innateness, “the idea that we have innate information about intentional states and their roles.” It is there, Sterelny argues, that “the Stich–Nichols model helps greatly.”

The Stich–Nichols model reinforces a non-nativist conception of the development of mindreading by decomposing the construction of doxastic and desire worlds into subcomponents each of which can be both acquired and improved more or less independently of the others. Even on their hybrid picture, interpreting others is an information-rich task. But the information needed is of a kind that can be acquired and upgraded. (p. 158)
To illustrate what he has in mind, Sterelny sketches what Nichols and I call “default attribution” – a crucial aspect of mindreading in which the mindreader attributes many of her own beliefs to the target. Early on in development, we maintain, children attribute almost all of their beliefs to other people because they have only a few strategies available for detecting “discrepant” beliefs – beliefs that they do not share with the target. One of the first strategies of discrepant belief detection to emerge relies on an innate Perception Detection Mechanism (PDM) which uses information about a target and her environment to produce beliefs about the target’s perceptions – beliefs like (1) target did not see the chocolate being put into the box. As Nichols and I note, “obviously, a mechanism that can pull this off must be able to draw on a fair amount of information about the links between environmental situations and perceptual states” (Nichols and Stich 2003: 88). By the time normal children are about 3½, they can use beliefs like (1) to infer beliefs
like (2) target does not believe that the chocolate was in the box, which enable them to start building models of the target’s beliefs in which they do not attribute all of their own beliefs to the target. Nichols and I don’t offer any details on how the inference from (1) to (2) is made, though an obvious hypothesis is that it invokes an innate principle that says (roughly) if x didn’t see that p, then x doesn’t know that p, and that this principle gets hedged or modified in various ways as the child gets older. It might seem a bit odd that Sterelny chooses the default attribution process to illustrate how the Nichols and Stich model lends support to his anti-nativist view, since on our account the early emerging parts of the default attribution system must have access to a fair amount of innate information. However, apparently this is compatible with the “non-nativist conception of the development of mindreading” that Sterelny advocates, since that conception includes the idea that “the development of mindreading is stabilized by quasi-perceptual mechanisms (by ‘shallow’ modules)” which do a variety of jobs including playing “an important role in estimating the doxastic world of other agents by factoring-in the role of differences in perceptual points of view” (p. 156). It is in the later stages of the development of the default attribution system, as Nichols and I characterize it, that Sterelny finds support for his anti-nativism. As children mature, we maintain, they acquire an increasingly sophisticated bag of tricks for adding discrepant beliefs to their model of the target’s beliefs. Language facilitates many of these tricks. There is evidence that, by the age of three, children attribute the belief that p to a target when the target asserts that p, even when the child believes that p is false. They also attribute discrepant beliefs as the result of third-person reports about what the target believes (Nichols and Stich 2003: 91–2). 
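The early default-attribution system described above has a simple computational shape, which a toy sketch can bring out. The representations and names below are invented for illustration and are not Nichols and Stich’s model: each of the mindreader’s beliefs is tagged with the event that gave rise to it, and a belief is withheld from the target’s model when the target did not see that event – a crude stand-in for the PDM plus the if-x-didn’t-see-that-p principle.

```python
# Toy sketch of default belief attribution with discrepant-belief detection.
# Beliefs map a proposition to the event that gave rise to it (None when no
# single observable event did). All representations are invented.

def attribute_beliefs(my_beliefs, events_target_saw):
    """Attribute each of my beliefs to the target by default, but withhold
    any belief whose source event the target did not see (roughly: if x
    didn't see that p, then x doesn't know that p)."""
    return {
        proposition: source
        for proposition, source in my_beliefs.items()
        if source is None or source in events_target_saw
    }

my_beliefs = {
    "the chocolate is in the box": "chocolate placed in box",
    "grass is green": None,  # no single originating event
}

# The target was out of the room when the chocolate was placed in the box:
model_of_target = attribute_beliefs(my_beliefs, events_target_saw=set())
print("the chocolate is in the box" in model_of_target)  # → False
print("grass is green" in model_of_target)               # → True
```

The point of the sketch is only that the default copy and the discrepancy filter are separable components, each of which could be acquired or refined independently – which is just the feature of the model that Sterelny exploits.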
As children get older, they learn that not all assertions and third-person reports can be trusted and they gradually develop a body of beliefs and skills that enable them to fine-tune discrepant belief attribution. When Sterelny says that much of the information utilized in mindreading is not innate and can be learned piecemeal, it is that sort of information he has in mind. Also, as Sterelny notes, people gradually learn to exploit social cues in discrepant belief attribution. If you are knowledgeable about rugby and others treat a target as an expert on that sport, then you will attribute most of your beliefs about rugby to the target. But if you know that the target comes from a country where rugby is not popular, you will not attribute your beliefs about rugby to him. “The crucial point,” Sterelny tells us, “is that these capacities can be built one by one: they are not a package deal. Moreover, each component can be improved gradually” (p. 158). There is no need to posit innate information to explain the extensive fine-tuning of the default attribution system that goes on from age three into adulthood; familiar processes of learning are all that is required. I think it is clear that Sterelny is right about this. What is less clear is who Sterelny takes his opponent to be in this debate. His comment about the capacities that fine-tune default attribution not being a “package deal” suggests that the nativist opponent he has in mind thinks that mindreading is a unitary innate capacity. This interpretation is reinforced by his characterization of the view he is criticizing as “modular nativism” in which mindreading is “assimilated to the language model, and regarded as the result of an innate module” (p. 155). Later he tells us again that, on the view he is opposing,
mindreading “is . . . language-like in having its cognitive basis in a module” (p. 160). If this “modular nativist” is indeed the opponent that Sterelny has in mind, there can be little doubt that Sterelny has his opponent on the ropes. That’s the good news. The bad news is that Sterelny’s opponent may be a straw man. Sterelny says very little about who he thinks actually advocates modular nativism. But there is some reason to think that one of the people he has in mind is my Rutgers colleague, Alan Leslie, who is widely regarded as one of the main defenders of the nativist approach to mindreading, and as the main defender of a modular theory of mindreading. Leslie is also the only major figure in mindreading research who is quoted in Sterelny’s article. However, as Leslie makes clear in many places (including the Scholl and Leslie paper that Sterelny quotes), he does not think that mindreading is a package deal, nor does he think that all of mindreading is the result of an innate module. On Leslie’s theory, adult mindreading is subserved by three distinct systems, only one of which is clearly modular. The modular component, which Leslie and his collaborators sometimes call “ToMM” (the Theory of Mind Mechanism), is a module that is responsible for the mindreading skills that emerge early in development. “ToMM . . . is essentially a module which spontaneously and post-perceptually attends to behaviors and infers (i.e. computes) the mental states which contributed to them. . . . As a result, ToMM will provide the child with early intentional insight into the behaviors of others” (Scholl and Leslie 1999: 147). One of the important roles that ToMM plays is to subserve the sort of unrestricted default belief attribution that is typical of children under three. In attributing beliefs to a target, “ToMM always makes the current situation [i.e.
what the mindreader believes the current situation to be] available as a possible and even preferred content” (Scholl and Leslie 1999: 147). So it is ToMM that is responsible for the poor performance of young children on the false belief task. Somewhat later in development, Leslie maintains, another system, the Selection Processor (SP), comes on line. The job of SP, which Scholl and Leslie tell us “may be non-modular” (Scholl and Leslie 1999: 147), is to determine the correct content to attribute when a target’s belief is in conflict with the mindreader’s, and to override ToMM’s inclination to attribute the mindreader’s own beliefs. Finally, still later in development, the mindreading abilities that ToMM and SP make available “are recruited by higher cognitive processes for more complex tasks, and the resulting higher-order [mindreading] activities may well interact (in a non-modular way) with other cognitive processes, and may not be uniform across individuals or cultures” (Scholl and Leslie 1999: 140). What is striking about this account, for our purposes, is how different it is from the “modular nativism” that Sterelny is criticizing, and how similar it is to the account Sterelny defends. On Leslie’s theory there is no innate module that subserves all mindreading skills. Moreover, ToMM, the one component of the mindreading system that Leslie insists is both innate and modular, could well be described in the same words that Sterelny uses to describe the innate modules posited by his account: it is a “quasi-perceptual shallow module.” What is more, ToMM’s job is more modest than the jobs that Sterelny’s shallow modules perform.19 The “higher-order” mindreading activities that emerge in the third stage of Leslie’s theory presumably exploit lots of information that is not innate since, like much of what is learned, it varies across individuals and
cultures. So Leslie has no trouble accommodating the ways in which beliefs about expertise in rugby, or other beliefs that are acquired by empiricist learning strategies, can influence mindreading. The upshot of all of this is that Leslie does not advocate the sort of modular nativism that Sterelny is concerned to refute. Nor, to the best of my knowledge, does anyone else who is worth refuting.
Reply to Prinz

Prinz’s engaging and fact-filled chapter focuses on moral nativism, a view – or more accurately a rather tangled cluster of views – that has become increasingly prominent in recent, empirically informed discussions of moral psychology (Dwyer 1999, 2006; Harman 1999; Hauser 2006; Mikhail in press). His paper is primarily aimed at sketching and defending his own provocative and important thesis that morality is “an accident” – “a by-product of capacities that were evolved for other purposes” (p. 168). Toward the end of his paper, Prinz suggests that, to a considerable degree, his account is compatible with the theory that Sripada and I developed in “A Framework for the Psychology of Norms” (S&S 2006). Here is what he says:

The general outlook defended in this discussion closely parallels ideas defended by Sripada and Stich (2006). Like those authors, I have argued that moral judgments are not universal across cultures, despite some similarities, and I have argued that emotions play an important role in acquisition and implementation of moral norms. Like them, I have also explored these ideas with an interest in explaining how moral norms are acquired. Sripada and Stich are agnostic about how moral norms differ from other norms, and they think we are not yet in a position to determine how much innate machinery we need to explain the acquisition of moral norms. I have been less agnostic, arguing explicitly against moral nativism. Even if I am right, Sripada and Stich raise an interesting question in their discussion. Supposing there is no innate mechanism for moralization, might there be a more general mechanism for the acquisition of norms? Sripada and Stich suppose there is . . . I think the postulation of such a mechanism is premature and methodologically risky. (p. 184)
Is this an accurate summary of the points on which Sripada and Stich (S&S) and Prinz agree and disagree? I am not convinced that it is. The problem is not that Prinz misdescribes the S&S view – he is far too careful a scholar for that.20 Rather, what concerns me is that I am not at all clear about what Prinz means when he makes claims about morality, moral judgments, moral norms, and the like. I’ll devote most of my reply to elaborating on this theme, since I think it is a manifestation of a much larger and more serious problem that besets a great deal of recent discussion in empirically informed moral psychology. “Moral norms,” Prinz tells us, “are found in almost every recorded human society . . . [Morality] seems to be a human universal.” This leads some to conclude that “morality is an evolved capacity” and that “morality is innate” (p. 168). These are the views that Prinz is arguing against. Before one plunges into this debate, there are two clusters of questions that cry out for answers. In the first cluster are questions like: What is it for a capacity to be innate? and What is it for a capacity to be an evolved capacity? In the
replies
221
second cluster are questions like: What is morality? Which capacities are moral capacities? Most of the philosophers and psychologists involved in recent debates over moral nativism are aware that there is an extensive and sophisticated philosophical literature aimed at answering the first cluster of questions. And many of these authors offer at least a brief account of how they propose to answer them.21 Until quite recently, however, most of these philosophers and psychologists seemed to assume that the answers to questions in the second cluster were obvious. They indicated almost no awareness that there is a large and contentious philosophical literature aimed at answering these questions. Indeed, more than 50 years ago, Alasdair MacIntyre (1957) began an article called “What Morality is Not” with the following sentence: “The central task to which contemporary moral philosophers have addressed themselves is that of listing the distinctive characteristics of moral utterances.” MacIntyre went on to argue that two of the most widely endorsed “distinctive characteristics” – that they are “universalizable” and that they are “prescriptive” – are not, in fact, necessary features of moral utterances. In 1970, MacIntyre’s article was reprinted in a valuable anthology called The Definition of Morality (Wallace and Walker 1970) which also reprinted a dozen other papers by such leading figures as Elizabeth Anscombe, Kurt Baier, Philippa Foot, William Frankena and Peter Strawson, all of which, in one way or another, tackled the question of how morality is best defined. As one might expect from this distinguished array of authors, many of the arguments to be found in the volume are careful and sophisticated. And as one might expect in just about any group of 13 philosophers, no consensus was reached. 
Nor has there been any convergence on the issue in the decades since then.22 In addition to debating how the notions of moral utterance, moral rule, and moral norm are to be defined, some of the contributors to the Definition of Morality volume, as well as some more recent authors, have discussed a cluster of meta-philosophical questions including: What is a definition of morality supposed to do? And what counts as getting the definition right?23 Though no consensus has emerged on these questions either, two sorts of answers are particularly important for our purposes. The first is that, in seeking a definition of morality, philosophers are engaged in the venerable endeavor of linguistic or conceptual analysis. They are trying to give an account of how the term ‘moral’ (or the expressions ‘moral utterance’, ‘moral rule’, etc.) is used by ordinary English speakers – or perhaps by some subset of speakers, for example those with a modicum of philosophical sophistication. A successful definition would have to comport with the intuitions of the relevant group of speakers on a wide range of actual and hypothetical cases. The second answer is that the goal of the project is to discover what Taylor (1978) calls “the essence of morality.” Philosophers who pursue this project believe that moral utterances or moral rules constitute a natural kind, and their goal is to discover the nature of this kind – the property or cluster of properties in virtue of which utterances or rules are members of the kind. The methodology, for those pursuing this project, is akin to the one sketched by Devitt in his contribution to this volume. We rely on intuition to pick out some clear and obvious members of the kind in question and some clear and obvious cases that are not members of the kind. We then turn to science to determine the
222
stephen stich
property or properties that all (or at least almost all) of the intuitively clear members share and that all (or almost all) of the intuitively clear non-members do not. Not just any property or cluster of properties will do, however. To constitute a natural kind, there will have to be some theoretically interesting nomological generalizations in which the properties are invoked. Though it relies on intuition in its initial stages, this project is ultimately much less beholden to intuition. Once we have succeeded in determining the essential feature(s) shared by most of the intuitively obvious members of the kind, we can simply reject the urgings of intuition on lots of other cases, in much the same way that biologists rejected people’s intuitive judgments that whales and dolphins are fish. There is, of course, no guarantee that either the conceptual analysis project or the quest for the essence of morality will proceed smoothly. As Jerry Fodor has noted, “it seems . . . to be among the most important findings of philosophical and psychological research over the last several hundred years . . . that attempts at conceptual analysis almost always fail” (Fodor 1981). And while history has been kinder to the quest for natural kinds, it is sometimes the case that more than one natural kind is to be found among the intuitively clear examples. When there are just a few, as in the case of jade, there is often no motivated way of deciding which kind merits the pre-existing term. When there are many, as in the case of earth (the putative “element,” not the planet), the most natural conclusion is that the intuitive category does not pick out a natural kind at all. With this as background, let’s turn to Prinz’s account of morality. Here is what he tells us: . . . Moral norms are a subset of norms, distinguished by their moral character. There are various theories of what moral character consists in . . . 
According to some theories, moral norms are distinguished by their subject matter; according to other theories, they are distinguished by the procedures by which they are discovered or the reasons by which they are justified; according to a third class of theories, moral norms are distinguished by the particular way in which they are psychologically internalized and enforced. I subscribe to a theory of this last kind. I think a moral norm is a norm that is enforced by certain emotions (see Prinz 2007). This view has been known historically as sentimentalism. Roughly, a person regards something as morally wrong (impermissible) if, on careful consideration, she would feel emotions of disapproval towards those who did the thing in question. . . . There are a number of different emotions of disapproval. Core examples include guilt, shame, disappointment, resentment, anger, indignation, contempt, and disgust. There is evidence that different kinds of moral rules recruit different emotions of disapproval (Rozin et al. 1999). We feel anger towards those who harm others, contempt towards those who disrespect members of other social ranks, and disgust towards those who commit incest. If we harm another person, we feel guilty, and if we violate norms of rank or incest, we feel ashamed. I cannot defend these claims here (see Prinz 2007). I will merely point out three relevant facts. First, emotion structures in the brain are active when people make moral judgments (Greene and Haidt 2002). Second, people who are profoundly deficient in emotions (psychopaths) never develop a true comprehension of morality (Blair 1995). Third, if we encountered a person who claimed to find killing (or stealing, or incest, etc.) morally wrong but took remorseless delight in killing and in hearing tales of other people killing, we could rightfully accuse him of speaking disingenuously. All this suggests that emotional response
is essential to moral cognition. Norms that are not implemented by emotions of disapproval are not moral norms. (p. 179, italics added)
Prinz does not tell us whether he intends his account as a conceptual analysis or as a hypothesis about the essential features of a natural kind. The first of the two passages I have italicized toward the end of the long quote might be taken as a bit of evidence for the first interpretation. But I think that would be an uncharitable reading, since Prinz’s version of sentimentalism has many consequences which fly in the face of intuition – or at least my intuition. Many violations of etiquette norms evoke disgust on the part of observers and shame on the part of the transgressor. And many violations of religious norms evoke both anger and disgust in co-religionists and guilt in the transgressor. Even when they are not closely tied to religion, transgressions of food taboos and norms for disposing of dead bodies can evoke very strong emotions of disapproval. However, for me and the four other upper middle-class white males that I’ve consulted (adhering rigorously to the standard method of philosophical conceptual analysis) neither etiquette norms nor religious norms are intuitive examples of moral norms, nor are food taboos or burial norms. The second passage I’ve italicized suggests that Prinz intends to be characterizing a natural kind whose extension overlaps significantly with the intuitive extension of ‘moral norm’ and that in so doing he has characterized the essential features of moral norms. However, if that’s his intention, another problem looms, since even if he has succeeded in capturing the essential features of a natural kind, there may well be several other natural kinds in this vicinity. By far the best known candidate is to be found in the work of Elliott Turiel and his associates. 
Inspired by some of the philosophical literature aimed at providing a definition of morality, Turiel proposed a definition according to which moral rules are those that are authority independent, universally applicable and justified by appeal to harm, justice, or rights (Turiel 1979, 1983; Turiel et al. 1987). With this definition in hand, he went on to design an experimental paradigm (the “moral/conventional task”) which has been administered to a wide variety of subjects differing in age, religion, and nationality. On one interpretation, the results of Turiel’s experiments show that moral rules, as he defines them, constitute a natural kind, since the properties invoked in the definition form a robust nomological cluster.24 I am quite skeptical about all this because I think there is abundant evidence that the putative nomological cluster is far from robust; when researchers have looked at norms and transgressions outside the narrow range of schoolyard examples that have been the focus of Turiel and his associates, the elements of the cluster come apart.25 But it is not clear that Prinz shares my skepticism, since he endorses Blair’s claim that psychopaths “never develop a true comprehension of morality,” and Blair uses the moral/conventional task to assay whether his subjects have a true comprehension of morality. If I am mistaken and Turiel’s work does indeed pick out a natural kind that overlaps substantially with the intuitive extension of ‘moral norm’, and if Prinz maintains that his sentimentalist account captures the essential features of moral norms, then Prinz owes us some additional argument to justify the claim that his account rather than Turiel’s tells us what is “essential to moral cognition.”
Of course, if I am right that Turiel’s account does not succeed in picking out a natural kind, this problem disappears. However, there is another competitor that I am less inclined to dismiss. In “A Framework for the Psychology of Norms,” Sripada and I use the term ‘norm’ for what we argue is “a theoretically important natural kind in the social sciences” (p. 281). We make it clear that our account of norms “is not intended as a conceptual analysis or as an account of what the term ‘norm’ means to ordinary speakers” (p. 281). Rather, our strategy is to give a rough and ready characterization of the kind that will enable us to pick out clear cases, and then to offer a first pass at an empirically informed theory about a psychological mechanism that can explain some of the more striking features of these cases. If the theory is on the right track, “a better account of the crucial features of norms can be expected to emerge as that theory is elaborated” (p. 281). One of the components in our theory is a norm database, and it is the job of the theory to tell us what can and cannot end up in that database. In so doing, the theory will give us an increasingly informative account of the natural kind that we call ‘norms’. A natural question to ask about the S&S theory is: what is the relation between norms, as we characterize them, and the intuitive category of moral norms? Here is what we say on the matter. It . . . strikes us as quite likely that the intuitive category of moral norms is not co-extensive with the class of norms that can end up in the norm database posited by our theory. Perhaps the most obvious mismatch is that the norm database, for many people in many cultures, will include lots of rules governing what food can be eaten, how to dispose of the dead, how to show deference to high-ranking people, and a host of other matters which our commonsense intuition does not count as moral. (p. 291, emphasis in the original)
But as I noted earlier, there is a roughly parallel mismatch between the intuitive category of moral norms and Prinz’s sentimentalist account of moral norms. If this mismatch does not prevent Prinz’s account from being a proposal about the essential features of moral norms, then the S&S account can also be construed as a proposal about the essential features of moral norms.26 But if it is construed in this way, Prinz is mistaken in claiming that “[S&S] are agnostic about how moral norms differ from other norms, and they think we are not yet in a position to determine how much innate machinery we need to explain the acquisition of moral norms.” Rather, the S&S account should be read as claiming that moral norms are a natural kind which is identical with the norms as characterized by our theory. And since our theory posits a fair amount of innate machinery, we have a clear disagreement with Prinz about how much innate machinery is required to explain the acquisition of moral norms. Do we now have an accurate account of the relation between the S&S theory and Prinz’s sentimentalist theory? I am still not confident that we do. The discussion in the last two paragraphs began with the assumption “that Prinz intends to be characterizing a natural kind whose extension overlaps significantly with the intuitive extension of ‘moral norm’ and that in so doing he has characterized the essential features of moral norms.” But it is far from clear that this is correct. Recall that, for Prinz, morality is “an accident”. Moral transgressions are just actions which, when performed by others, happen to trigger one or more of a grab bag of emotions – disappointment, resentment,
anger, indignation, contempt, or disgust – and when performed by oneself trigger one or more of another grab bag of emotions – primarily guilt or shame. There are, of course, lots of actions which trigger emotions in the first cluster when someone else does them, but which do not trigger guilt or shame when we do them ourselves.27 And, though it is less clear cut, there are probably lots of actions which trigger guilt or shame when we do them, but don’t trigger emotions in the first cluster when others do them. These sorts of dissociations should not be surprising on Prinz’s account, since he thinks that “moralization takes place under cultural pressure . . . Our moral educators tell us that we should feel bad when we hurt each other or take things that aren’t ours. They teach us by example to get angry at those who violate these norms, even when we are not directly involved. Moralization inculcates emotions of disapproval” (p. 180). If this is right, then dissociations will occur whenever an individual is taught to feel anger in response to an action, but is not taught to feel guilt, or vice versa. Moreover, the teaching itself can occur in lots of ways. “We receive a lot of moral instruction through explicit rules, sanctions, story telling, role models, and overt attitudes expressed by members of our communities” (p. 182). In light of the heterogeneity in the kinds of emotions involved, the many different ways in which they can be linked to categories of action, and the fact that a type of action can trigger emotions in one cluster without triggering emotions in the other, it is hard to see how moral norms, as Prinz characterizes them, could be natural kinds. Moral norms, on Prinz’s account, don’t seem to be suitable candidates for being invoked in nomological generalizations. If that’s right, then Prinz faces a menu of unpalatable alternatives. 
If his sentimentalist account of moral norms is intended to characterize a natural kind whose extension overlaps with the intuitive extension of ‘moral norm’, then there is reason to think he has failed. Accidents don’t make good natural kinds. If his sentimentalist account was intended to capture the intuitive extension of ‘moral norm’, then again there is reason to think he has failed, since there are lots of cases, including food taboos, etiquette norms, fashion norms, and burial norms which intuition (or at least my intuition) does not classify as moral norms though Prinz’s sentimentalist account does. Of course, it may be that Prinz does not intend his sentimentalist theory to be either a conceptual analysis or an account of a natural kind. If that’s the case, then he owes us some account of what he is trying to do. It is hard to see how the merits of his theory can be assessed without some guidance on what the theory is supposed to do or what counts as getting it right. And, to return to the concern with which we began, it is also hard to know whether and where his theory and the S&S theory disagree. The problems I’ve posed for Prinz’s view can all be traced to the fact that he has not told us enough about how he is using terms like “moral norm” and “moral rule” and that makes it all but impossible to evaluate his claims about morality. I have focused on this issue because, as I noted earlier, I think that much the same problem lies behind many debates in moral psychology. Consider, for example, the following provocative claim in the Introduction to a recent paper by Jonathan Haidt and Craig Joseph.

The psychological study of morality, like psychology itself . . . , has been dominated by politically liberal researchers (who include us). The lack of moral and political diversity among researchers has led to an inappropriate narrowing of the moral domain to issues
of harm/care and fairness/reciprocity/justice. Morality in most cultures (and for social conservatives in Western cultures) is in fact much broader, including issues of ingroup/loyalty, authority/respect, and purity/sanctity . . . This chapter is about how morality might be partially innate . . . We begin by arguing for a broader conception of morality and suggesting that most of the discussion of innateness to date has not been about morality per se; it has been whether the psychology of harm and fairness is innate. (Haidt and Joseph 2007: 367)
To make their case for a broader conception of morality, Haidt and Joseph offer a brief overview of norms that prevail in cultures other than our own which include “rules about clothing, gender roles, food, and forms of address” (Haidt and Joseph 2007: 371) and a host of other matters as well. They emphasize that people in these cultures care deeply about whether or not others follow these rules. But this is an odd way to proceed. For surely Haidt and Joseph don’t think that the “politically liberal researchers” responsible for the “inappropriate narrowing” of the moral domain are unaware that rules governing these matters are widespread in other cultures. The issue in dispute is not whether rules like these exist or whether people care about them. What is in dispute is whether these rules are moral rules. To resolve that dispute, we need an answer to the question that is center stage in the philosophical literature on the definition of morality – we need an account of what it is for a rule to be a moral rule. And if the dispute between Haidt and Joseph and those they criticize is substantive, then not just any account will do; it has to be a correct account. But what counts as getting such an account right? That is a question that loomed large in my critique of Prinz, and it is, I suggest, a question that needs to be addressed by just about everyone who makes claims about moral nativism.
Reply to Godfrey-Smith

The aim of Godfrey-Smith’s paper is “to cast representationalism within a different overall philosophical framework, supplied by recent philosophy of science” (p. 40) – more specifically by work aimed at understanding model-based scientific theorizing. In much of his paper, Godfrey-Smith lets “what remains of [his] hair down a little,” and presents some ideas that seem to him to be “promising, even though they are unorthodox and in some places disconcerting” (pp. 31–2). I am not in the least disconcerted by his ideas. Indeed, I find many of them quite congenial. However, given the constraints on length in this volume and the need to cover a lot of material in order to put the pieces of his new framework in place, Godfrey-Smith had no choice but to paint with a very broad brush. Since many important points are discussed very briefly, I think that an in-depth response to his innovative ideas would best be postponed until the view has been set out in more detail. What I can offer instead is an endorsement of one of the main conclusions that Godfrey-Smith defends and an expression of hope that, when more fully developed, an intriguing part of his framework might help to resolve the puzzle I raise at the end of my reply to Egan. The conclusion that I am happy to embrace is that “[f]ifty years from now . . . most of the late twentieth-century literature on mental representation will appear to have tried to
lay an overly simple and regimented framework onto a very complicated and mixed reality” (p. 42). Cummins (1989), whose work is discussed in my reply to Egan, hinted at much the same prediction when he suggested that it is naive to suppose that folk psychology, orthodox computationalism, connectionism, and neuroscience all make use of the same notion of representation. At the end of my reply to Egan, I expressed the hope that cognitive social psychology, developmental psychology, and other branches of psychology get the same sort of careful and informed scrutiny that Egan and others have lavished on computational psychology, and I raised the possibility that these parts of psychology might turn out not to be using the commonsense notion of representational content. What Godfrey-Smith would likely add – and I would concur – is that it might also turn out that each of these branches of psychology is invoking a different notion of representation. Or as he might prefer to put it, they might each be emphasizing different features of his “basic representationalist model.” In Deconstructing the Mind, where my focus was not on representation but on the closely related notion of reference, I suggested that linguistics, anthropology, evolutionary biology, and the history and sociology of science might each require a somewhat different notion of reference, and that in some of these areas of inquiry it might turn out that two or more distinct kinds of reference are explanatorily useful (Deconstructing, p. 45). So while Godfrey-Smith and I may have gotten there via rather different routes, I share with enthusiasm his pluralism about scientifically useful notions in the representation family. The puzzle that I raised at the end of my reply to Egan began with the observation that a number of branches of contemporary psychology that are clearly flourishing appear to invoke the commonsense notions of belief and desire and the commonsense notion of representational content.
However, if I am right about commonsense content attribution, it is vague, context-sensitive, observer-relative, and attributes the same content to a very heterogeneous collection of beliefs and a very heterogeneous collection of desires. One might think that these features would pose a major obstacle to the construction of a fruitful science. What is puzzling is that they don’t. Perhaps Godfrey-Smith’s idea of viewing representationalism as an instance of model-based theorizing in science can point the way toward a solution to this puzzle. According to Godfrey-Smith, This kind of scientific work operates by constructing and exploring hypothetical, usually simple, systems that are intended to have some relevant resemblance relation to a real “target” system that we are trying to understand. All the quirkiness, vagueness, and context-sensitivity of the notion of “resemblance” are supposed to be in play here. Part of the point of model-based work in science is that one can try to develop a model system that has useable similarities to a target system while being unclear, indefinite, and changeable about exactly which features of the model are supposed to resemble features of the target, and unclear or changeable about the degree and kind of resemblance intended. (pp. 32–3, emphasis in the original)
So perhaps it is just a mistake to think that the features of commonsense content that I used to generate the puzzle are an obstacle to productive scientific theorizing. If the quirkiness, vagueness, and context-sensitivity that are endemic to model-based theorizing usually do not pose a problem when models are used in physics, chemistry, and biology,
there is no reason to suppose they will pose a problem in psychology either. Godfrey-Smith repeatedly stresses the flexibility of models in science and touts it as one of their main virtues. Perhaps that flexibility is facilitating, not hindering, the impressive progress witnessed in those parts of psychology that appear to invoke commonsense content. If Godfrey-Smith is right, science (or at least model-based science) is a lot messier and a lot less explicit than many philosophers of science have supposed, and scientists are much more adept at coping with the mess – indeed flourishing in it – than I supposed when I wrote From Folk Psychology to Cognitive Science.
Reply to Sosa

Sosa’s topic is the use of intuitions in philosophy. Much of what I have written on the issue has been critical of appeals to intuition in epistemology, though in recent years I have become increasingly skeptical of the use of intuitions in ethics and in semantic theory as well. In the first half of his chapter, Sosa discusses my critique of analytic epistemology, which I use as a technical term for epistemological projects in which conceptual or linguistic analysis is taken to be the ultimate court of appeal for many disputes in epistemology, and intuitions are used to support or challenge the conceptual analysis. I maintain that projects of this sort are widespread in philosophy, and in several publications I have cited parts of Alvin Goldman’s Epistemology and Cognition (1986) as an important example of the sort of project I have in mind. Here is the relevant passage from Stich (1988) – the paper on which Sosa focuses.

Goldman notes that one of the major projects of both classical and contemporary epistemology has been to develop a theory of epistemic justification. The ultimate job of such a theory is to say which cognitive states are epistemically justified and which are not. Thus, a fundamental step in constructing a theory of justification will be to articulate a system of rules evaluating the justificatory status of beliefs and other cognitive states. These rules (Goldman calls them justificational rules or J-rules) will specify permissible ways in which a cognitive agent may go about the business of forming or updating his cognitive states. They “permit or prohibit beliefs, directly or indirectly, as a function of some states, relations, or processes of the cognizer” (Goldman 1986, p. 60). Of course, different theorists may have different views on which beliefs are justified or which cognitive processes yield justified beliefs, and thus they may urge different and incompatible sets of J-rules.
It may be that there is more than one right system of justification rules, but it is surely not the case that all systems are correct. So in order to decide whether a proposed system of J-rules is right, we must appeal to a higher criterion, which Goldman calls a “criterion of rightness.” This criterion will specify a “set of conditions that are necessary and sufficient for a set of J-rules to be right” (Goldman 1986, p. 64). But now the theoretical disputes emerge at a higher level, for different theorists have suggested very different criteria of rightness . . . How are we to go about deciding among these various criteria of rightness? Or, to ask an even more basic question, just what does the correctness of a criterion of rightness come to; what makes a criterion right or wrong? On this point Goldman is not as explicit as one might wish. However, much of what he says
suggests that, on his view, conceptual analysis or conceptual explication is the proper way to decide among competing criteria of rightness. The correct criterion of rightness is the one that comports with the conception of justifiedness that is “embraced by everyday thought or language” (Goldman, 1986, p. 58). To test a criterion we explore the judgments it would entail about specific cases, and we test these against our “pretheoretic intuition.” “A criterion is supported to the extent that implied judgments accord with such intuitions, and weakened to the extent that they do not” (Goldman, 1986, p. 66). Goldman is careful to note that there may be a certain amount of vagueness in our commonsense notion of justifiedness, and thus there may be no unique best criterion of rightness. But despite the vagueness, “there seems to be a common core idea of justifiedness” embedded in everyday thought and language, and it is this common core idea that Goldman tells us he is trying to capture in his own epistemological theorizing (Goldman, 1986, pp. 58–9). I propose to use the term analytic epistemology to denote any epistemological project that takes the choice between competing justificational rules or competing criteria of rightness to turn on conceptual or linguistic analysis. (Stich 1988: 105–6)
I’ve taken a dim view of projects like this, arguing that they lead to an unwelcome xenophobia in epistemology. The analytic epistemologist’s effort is designed to determine whether our cognitive states and processes accord with our commonsense notion of justification (or some other commonsense concept of epistemic evaluation). Yet surely the evaluative epistemic concepts embedded in everyday thought and language are every bit as likely as the cognitive processes they evaluate to be culturally acquired and to vary from culture to culture. Moreover, the analytic epistemologist offers us no reason whatever to think that the notions of evaluation prevailing in our own language and culture are any better than the alternative evaluative notions that might or do prevail in other cultures. But in the absence of any reason to think that the locally prevailing notions of epistemic evaluation are superior to the alternatives, why should we care one whit whether the cognitive processes we use are sanctioned by those evaluative concepts? How can the fact that our cognitive processes are approved by the evaluative notions embraced in our culture alleviate the worry that our cognitive processes are no better than those of exotic folk, if we have no reason to believe that our evaluative notions are any better than alternative evaluative notions . . . It’s my contention that this project is of no help whatever in confronting the problem of cognitive diversity unless one is an epistemic xenophobe. (Stich 1988: 107 and 109)
It is clear that Sosa is not happy with this line of thought, though I confess that I am much less clear about what, exactly, his objection is. One might be concerned that analytic epistemology, as I have characterized it, is a straw man – that no one really does take conceptual analysis, supported by appeal to intuition, to be the final court of appeal in many epistemological disputes. But I doubt this is Sosa’s view, since there is nothing in his paper that challenges my interpretation of Goldman or my contention that Nelson Goodman and Peter Strawson can also plausibly be read as practitioners of this sort of analytic epistemology.28 Moreover, the view that many epistemologists proceed in this way is hardly idiosyncratic. A number of other authors have also commented on the pivotal role of appeals to intuition-based conceptual analysis in justifying epistemological
230
stephen stich
theses.29 Rather, my best guess is that Sosa believes that epistemologists should not try to invoke conceptual or linguistic analysis as the final arbiter in resolving disputes about competing sets of justification rules, competing criteria of rightness and the like, because to do so they would have to endorse something like the reasoning set out in Sosa’s (a)–(e), which “requires controversial claims or assumptions” (p. 103). I am far from confident of this reading, however, because Sosa does not make clear how much of (a)–(e) is offered as an explication of the reasoning required to sustain the analytic epistemologist’s appeal to conceptual analysis, and how much (if any) needs to be assumed only by someone who wants to criticize analytic epistemology along the lines sketched in the previous paragraph. Also, I am far from convinced that either the analytic epistemologist or the critic would have to appeal to “some such reasoning.” The argument in (a)–(e) goes far beyond anything I, or the philosophers I criticize, have endorsed, and Sosa offers no reason to suppose that their project or my critique can only be elaborated by invoking this problematic argument. I think these concerns can safely be set to one side, however, since, while our reasons may be quite different, Sosa and I agree that the strategy of resolving debates about justificational rules, criteria of rightness, and the like by appealing to conceptual analyses supported by intuition is deeply problematic. Sosa’s main motive in criticizing that strategy, I think, is to set it aside so that we can focus on another way in which intuitions are used in epistemology (and in other parts of philosophy) which makes no appeal to conceptual analysis.30 Philosophers can and do rely on “intuition as a source of data for philosophical reflection” (p. 105) without any attempt to vindicate the practice by appealing to the analysis of meanings or concepts.
“At least since Plato,” Sosa notes,

philosophical analysis has relied on thought experiments as a way to test hypotheses about the nature and conditions of human knowledge, and other rational desiderata, such as justice, happiness, and the rest. Any such practice gives prime importance to intuitions concerning not only hypothetical cases but also principles in their own right. The objective is to make coherent sense of the contents that we intuit by adopting general accounts that will best comport with those intuitions and explain their truth. (pp. 103–4)
I think Sosa is clearly right that the practice he describes is widespread in philosophy, and has been since antiquity. He’s right, too, in insisting that many who pursue philosophy in this way do not take themselves to be engaged in conceptual or linguistic analysis. Where Sosa and I differ is that he thinks this is an entirely reasonable way for philosophers to proceed, while I think it is a method that philosophers should abandon. One of the arguments that I have used against this way of doing philosophy begins with the contention that philosophical intuitions certainly could and probably do differ in different cultural groups. The “certainly could” part of this – the claim that cross-cultural differences in philosophically important intuitions are logically possible – is often conceded and then quickly dismissed as a way of discrediting intuition-based philosophy. Granted, the critics argue, it is logically possible that people with different cultural backgrounds have quite different philosophical intuitions; but it is also logically possible
that people with different cultural backgrounds have quite different perceptual experiences even in identical environments. So the possibility argument gives us no more reason to be skeptical of intuition-based philosophy than to be skeptical of beliefs based on perception. At best, the critics continue, the possibility argument is just a special case of a quite general argument for skepticism.31 I am inclined to think that the critics are right about this. The claim that people in different cultural groups probably do have significantly different intuitions about philosophically important matters, and the empirical studies offered in support of this claim, have provoked much more elaborate responses from the defenders of intuition-based philosophy. Sosa’s responses, in his chapter in this volume and in several other recent papers, are among the best informed and most acute of these. Though Sosa and I disagree sharply about the role that appeal to intuition should play in philosophy, there are many points on which I think we are in complete agreement. First, and most important, Sosa and I both think that if it is true that people in different cultural groups disagree in their intuitive judgments about philosophically important cases, this would pose a major problem for philosophers who use intuition as a source of data in the way that Sosa recounts. In the paper in this volume, Sosa hedges his bets a bit, saying only that this sort of disagreement would “allegedly pose a ‘serious problem’ ” (p. 106). But in other papers (published earlier, though written later), he is less guarded: One main objection [posed by “those who reject philosophical intuition as useless”] derives from alleged disagreements in philosophical intuitions, ones due in large measure to cultural or socioeconomic or other situational differences. This sort of objection is particularly important and persuasive . . . (Sosa 2007c, p. 
60)

There will definitely be a prima facie problem for the appeal to intuitions in philosophy if surveys show that there is extensive enough disagreement on the subject matter supposedly open to intuitive access. (Sosa 2007a: 102)
In one of these papers, Sosa goes into some detail on why disagreement in intuition would be problematic.

When we rely on intuitions in philosophy, then, in my view we manifest a competence that enables us to get it right on a certain subject matter, by basing our beliefs on the sheer understanding of their contents. How might survey results create a problem for us? Suppose a subgroup clashes with another on some supposed truth, and suppose they all ostensibly affirm as they do based on the sheer understanding of the content affirmed. We then have a prima facie problem. Suppose half of them affirm
[. . .] constitution of the misled, doubt will surely cloud the claim to competence by those who ex hypothesi are getting it right. (Sosa 2007a: 102)
I think that this diagnosis of the problem posed by disagreement in intuitions is both accurate and incisive. The only observation I’d add is that if the intuitions being studied are about “the nature and conditions of human knowledge, . . . justice, happiness, and the rest,” and if the two groups are, say, East Asians and Westerners, then producing a plausible “theory of error” will be, to put it mildly, no easy task.32 It is worth emphasizing the enormous importance of this point, on which Sosa and I apparently agree. For 2,500 years, philosophers have been relying on appeals to intuition. But the plausibility of this entire tradition rests on an unsubstantiated, and until recently unacknowledged, empirical hypothesis – the hypothesis that the philosophical intuitions of people in different cultural groups do not disagree. Those philosophers who rely on intuition are betting that the hypothesis is true. If they lose their bet, and if I am right that the prospects are very dim indeed for producing a convincing theory of error which explains why a substantial part of the world’s population has false intuitions about knowledge, justice, happiness, and the like, then a great deal of what goes on in contemporary philosophy, and a great deal of what has gone on in the past, belongs in the rubbish bin. I think it is too early to say with any assurance who is going to win this bet – though if I were a practitioner of intuition-based philosophy I’d be getting pretty nervous. What is clear is that the stakes are very high, and this underscores the importance of cross-cultural empirical work aimed at studying philosophical intuitions and understanding the psychological mechanisms that give rise to them. 
A second point on which Sosa and I are in complete agreement is that currently available evidence does not show “beyond reasonable doubt that there really are philosophically important disagreements [in intuition] rooted in cultural or socio-economic differences” (Sosa 2007a: 103). Nor do I have any quarrel with two of the reasons Sosa offers for denying that the experimental results he cites establish disagreement beyond a reasonable doubt. The first of these is that the subjects in the experiments might “import different background beliefs as to the trustworthiness of American corporations or zoos, or different background assumptions about how likely it is that an American who has long owned an American car will continue to own a car . . .” (p. 108). The second is that the results might have been quite different if subjects had been given a third choice, like “we are not told enough in the description of the example to be able to tell whether the subject knows or only believes” (p. 108). Though Sosa very graciously describes the experimental work that my collaborators and I have done on epistemic intuitions as “extensive” (p. 106), the truth is that to date there have been only a handful of rather unsophisticated studies. More and better studies are needed, including experiments that address the concerns that Sosa raises, and a variety of other concerns as well. It is still very early days in the empirical exploration of philosophical intuitions, and no one working in the area would claim that anything has been demonstrated beyond reasonable doubt. That’s a very high standard to set for empirical work in the social sciences. Nonetheless, I am inclined to think that Sosa should be rather more worried than he appears to be. While new evidence certainly might undermine the conclusions about cross-cultural diversity in intuition that my collaborators and I have drawn from existing
studies, Sosa has given us no reason to think that it will. Until it does, these studies stand as noteworthy straws in the wind, and most of the straws seem to be blowing in the wrong direction for those who champion intuition-based philosophy. Part of the explanation for Sosa’s nonchalance emerges when we turn to his third reason for doubting that the experimental results indicate genuine intuitive disagreement. The results would pose no challenge at all to intuition-based philosophy if the term ‘knowledge’ picks out somewhat different concepts for the two groups, for then “we fail to have disagreement on the very same proposition” (p. 108). Here again, Sosa is surely right when he says that this might be the case, but is there any reason to think that it really is? Though Sosa does not address the question directly, some of his remarks suggest that the smart money should bet on ambiguity, because covert ambiguity of the sort he is concerned about is very easy to generate. If East Asians are more sensitive to communitarian factors in deciding whether to apply the term ‘knowledge’ to particular cases, while Westerners are more sensitive to individualistic factors, that by itself, Sosa seems to suggest, might be enough to show that the term ‘knowledge’ picks out different concepts in the two groups. But if this is what Sosa thinks, it is far from clear that he is right. There is a vast literature on concepts in philosophy and in psychology (Margolis and Laurence 1999; Murphy 2002; Machery forthcoming), and the question of how to individuate concepts is one of the most hotly debated issues in that literature. While it is widely agreed that for two concept tokens to be of the same type they must have the same content, there is a wide diversity of views on what is required for this condition to be met. On some theories, the sort of covert ambiguity that Sosa is betting on can be expected to be fairly common, while on others covert ambiguity is much harder to generate.
For Fodor, for example, the fact that an East Asian pays more attention to communitarian factors while a Westerner emphasizes individualistic factors in applying the term ‘knowledge’ would be no reason at all to think that the concepts linked to their use of the term ‘knowledge’ have different contents (Fodor 1998). For theorists like Frank Jackson, by contrast, if two people have different intuitions about some Gettier cases, and if neither of them is confused about the details of the example, that’s enough to show that they have different concepts (Jackson 1998: 32). So on Jackson’s account, empirical studies like those that Sosa discusses, no matter how well designed and carefully controlled, could not possibly show that people’s intuitions disagree, since prima facie disagreement is conclusive evidence of ambiguity. Since Sosa grants that cross-cultural disagreement on philosophically important intuitions is a genuine empirical possibility, he can’t adopt Jackson’s account of content, though I suspect that Sosa favors an account that is more like Jackson’s than like Fodor’s – one on which covert ambiguity is easy to generate. But since Sosa does not tell us what theory of content he endorses, or why he thinks that the correct theory will make the sort of covert ambiguity that he envisions rather commonplace, there is not much that those of us who are skeptical about intuition-based philosophy can do to move the conversation forward. We can’t do empirical studies designed to test for the sort of ambiguity Sosa is worried about until he tells us more about what that sort of ambiguity is. Though I have never been very clear about the rules of burden-of-argument tennis, I am inclined to think that the ball is in his court.
In one of the papers on which Sosa focuses, my collaborators and I argue that, even if Jackson is right about concept individuation, findings like ours, which suggest that culture, SES, and philosophical training have an important influence on epistemic intuitions, would still pose a serious problem for intuition-based epistemology. For on Jackson’s theory (and on others that make it relatively easy to establish the existence of covert ambiguity), it is very likely that the term ‘knowledge’ picks out lots of different concepts when uttered by members of different groups. East Asians, Indians and High SES Westerners all have different concepts; High and Low SES Westerners have different concepts; people who have studied lots of philosophy and people who have studied no philosophy have different concepts. And that, no doubt, is just the tip of the iceberg. Moreover, these concepts don’t simply differ in intension, they differ in extension – they apply to different classes of actual and possible cases. (Nichols, Stich and Weinberg 2003: 245, emphasis in the original)
If that’s right, we ask, then how are we to understand traditional views like Plato’s claim that “wisdom and knowledge are the highest of human things,” or more recent epistemological theories which suggest that if S’s belief that p is an instance of knowledge, then, ceteris paribus, S ought to believe that p?33 If ‘knowledge’ picks out different things for different speakers, they can’t all be the highest of human things. And if S’s belief that p counts as an instance of knowledge, as that term is used by one speaker, but does not count as an instance of knowledge as the term is used by another speaker, ought S to believe p or not? Similar problems arise in interpreting more recent work, like Williamson’s (2000) contention that knowledge is the most general factive mental state and Hawthorne’s claim that “[t]he practice of assertion is constituted by the rule/requirement that one assert something only if one knows it” (Hawthorne 2004: 23). Obviously, if ‘knowledge’ picks out different things for different speakers, these can’t all be the most general factive mental state. And if Ann counts as knowing p, as ‘knowing’ is used by one speaker, but does not count as knowing p, as ‘knowing’ is used by another speaker, then what does Hawthorne’s claim entail about Ann if she asserts p? Has she violated the rule or hasn’t she? Of course, it would be easy enough to answer these questions by simply stipulating that ‘knowledge’ is to be understood as expressing the concept of some specific group – high SES white Western males with lots of philosophical training, for example. But while that would resolve the ambiguity, it is a move that cries out for some justification. Why that group? Why is their concept of knowledge better than all the others? This is a line of argument that Sosa finds baffling. Why do we need to choose between the “commodities” picked out by these various concepts of knowledge, he asks.
Why can’t we value them all – just as we might value owning river banks and money banks? Sosa doubts there is any conflict between cultural groups which use the term ‘knowledge’ to pick out different epistemic commodities. “[T]here seems no more reason to postulate such conflict than there would be when we compare someone who rates cars in respect of how economical they are with someone who rates them in respect of how fast they can go” (p. 110).
While Sosa is baffled by our argument, I am baffled by his bafflement, since the conflict whose existence he denies strikes me as clear and obvious. To make the point quite vividly, an analogy may be helpful. For theorists like Jackson, if two people have divergent intuitive judgments about whether some important cases are instances of X, and if the divergence can’t be attributed to mere confusion, then they are invoking different X-concepts. So, as we’ve seen, if two people have divergent intuitive judgments about Gettier cases, and neither is confused, then Jackson maintains that they are invoking different concepts of knowledge. Jackson makes it clear that he would say the same about cases in the moral domain. If a Yanomamö intuitively judges that it is morally permissible to kill men who are not members of his tribe, take their possessions, rape their wives, and enslave their children, while I intuitively judge that it is not morally permissible to do these things, and if the disagreement can’t be attributed to confusion, then the Yanomamö and I are invoking different concepts of moral permissibility. And if, as I maintain, this case is entirely parallel to the knowledge case, presumably Sosa would deny that there is any conflict here. He might even wonder why we shouldn’t learn to value the “commodities” that the Yanomamö label ‘morally permissible’ even though they are rather different from the commodities to which we apply the label ‘morally permissible’.34 Obviously, something has gone very wrong here, though it is no easy matter to diagnose the problem since there are a number of factors involved. One of them is that Sosa has chosen to express his bafflement by focusing on what we value in the epistemic domain. Norms of valuing do play a role in traditional epistemological debates, but they are not the only sorts of norms that epistemologists have considered. 
As we noted earlier, Goldman insists, quite correctly, that justification rules (or “J-rules”) play a central role in both classical and contemporary epistemology, and J-rules specify norms of permissibility, not norms of valuing. They “permit or prohibit beliefs, directly or indirectly, as a function of some states, relations, or processes of the cognizer” (Goldman 1986: 60). When we focus on these rules, the sort of pluralism that Sosa suggests is much harder to sustain. If a rule, like the one cited a few paragraphs back, says that ceteris paribus we ought to hold a belief if it is an instance of knowledge, and if ‘knowledge’ is interpreted in different ways by members of different groups, then Sosa’s pluralism leads to inconsistency. There will be some beliefs that we ought to hold on one interpretation of ‘knowledge’ but not on the other. Moreover, even in the case of norms of valuing, Sosa’s pluralism can lead to problems. Sosa is surely right to claim that someone who values owning money banks can also value owning river banks. But if there is one of each on offer and the person’s resources are limited, she will have to make a choice. Which one does she value more? Similar quandaries may confront the person who values both of the “commodities” picked out by ‘knowledge’ by the intuitions of two different groups. There will be occasions when she can have one or the other, but not both. So she must decide which she values more. Sosa gives us no guidance on how to go about making these choices, and I am inclined to think that this is not simply an oversight. Rather, I suspect, Sosa does not take these questions to be part of the purview of epistemology. One of the reasons Sosa and I find each other’s arguments baffling is that we have very different views on what
epistemology should be doing. For Sosa, epistemology is “a discipline . . . whose scope is the nature, conditions, and extent of knowledge” (p. 110). For me, by contrast, epistemology is a discipline that “focuses on the evaluation of methods of inquiry. It tries to say which ways of going about the quest for knowledge – which ways of building and rebuilding one’s doxastic house – are the good ones, which are the bad ones and why.” The quote is from the first page of The Fragmentation of Reason. In the paragraph that follows, I try to make the case that this conception of epistemology is widely shared.

There is no shortage of historical figures who have pursued this sort of epistemological investigation. Much of Francis Bacon’s epistemological writing is devoted to the project of evaluating and criticizing strategies of inquiry, as is a good deal of Descartes’s. Among more modern epistemological writers, those like Mill, Carnap, and Popper, who are concerned with the logic and methodology of science, have tended to emphasize this aspect of epistemological theory. From Bacon’s time to Popper’s, it has frequently been the case that those who work in this branch of epistemology are motivated, at least in part, by very practical concerns. They are convinced that defective reasoning and bad strategies of inquiry are widespread, and that these cognitive shortcomings are the cause of much mischief and misery. By developing their accounts of good reasoning and proper strategies of inquiry, and by explaining why these are better than the alternatives, they hope others will come to see the error of their cognitive ways. And, indeed, many of these philosophers have had a noticeable impact on the thinking of their contemporaries. (Fragmentation, pp. 1–2.)
If I understand him correctly, the sort of epistemology Sosa favors is not in this line of work. It does not even try to offer advice on how one should go about revising one’s beliefs – certainly not advice “all things considered” (Sosa, this volume: p. 111). Rather, the epistemologist’s aim is to characterize the phenomena picked out by terms like ‘knowledge’ and ‘justification’ as he uses those terms. If it turns out that people in other groups use these terms to pick out different phenomena, Sosa’s epistemologist might try to characterize those phenomena as well. But it is not the epistemologist’s job to tell us which of these phenomena is better, or which we ought to pursue, or why. One could, of course, engage in an entirely parallel project in the moral domain. A philosopher pursuing that project would try to characterize the phenomena picked out by terms like ‘morally permissible’ and ‘morally prohibited’, as she uses the terms. And if people in other groups use the terms to pick out different phenomena she might try to characterize those as well. But this Sosa-style moral philosopher would not tell us which characterization of the morally permissible was better, or which actions we should pursue, or which actions we should avoid. This is an interesting project, to be sure, and a valuable one. But by my lights, it is closer to ethnography than to moral philosophy. Much the same, I think, is true of Sosa-style epistemology. As Nichols, Weinberg and I noted, epistemologists who rely on intuitions “have chosen to be ethnographers; what they are doing is ethno-epistemology” (Nichols et al. 2003: 235; emphasis in the original). Moreover, if these philosophers are doing ethnography, then as Weinberg and I have pointed out, their methodology leaves much to be desired (Stich and Weinberg 2001). That is a theme I’ll take up in my reply to Bishop.35
Reply to Bishop

Bishop’s rich and challenging paper begins with a detailed, largely sympathetic and exceptionally well-informed overview of my work in epistemology. It ends with a section arguing that a modified version of Bishop and Trout’s “Strategic Reliabilism” is pragmatically preferable to the sort of epistemic pragmatism that I defended in Fragmentation. Though I have a quibble here and there, I think Bishop’s account of my work in epistemology is generally accurate. Indeed, on a number of topics he’s understood what I was up to better than I did. In the first section of my reply, I’ll elaborate on some of the points that he makes and put a rather different spin on them in a few places. In the second section, I’ll focus on his suggestion that there is a tension between my attitude toward the use of intuitions in epistemology in Fragmentation and in the papers written with Weinberg and Nichols. In the final section, I’ll explain why I’m not convinced by his pragmatic defense of Strategic Reliabilism.
1 Searching in the wrong place in the wrong way

“In a nutshell,” Bishop tells us, “the problem with analytic epistemology is that it searches for answers in the wrong place and in the wrong way” (p. 119). This is a great slogan for a cluster of views that Bishop and I share. But the slogan needs to be interpreted with care. In Fragmentation, and earlier in Stich (1988), I used “analytic epistemology” as a technical term for epistemological projects in which intuitions are used to support linguistic or conceptual analyses, and those analyses are taken to be the ultimate court of appeal in epistemological disputes. Though I did not make this as clear as I should have, the disputes I had in mind were disputes in normative epistemology.36 In the papers I wrote with Weinberg and Nichols some years later, the term “analytic epistemology” is used again. But its meaning is less clear. Though I don’t recall ever discussing the issue with my co-authors, my best guess as to what we had in mind was (something like) epistemological projects in the analytic tradition that use intuitions as an important source of data. The difference is an important one, since there are many projects that count as analytic epistemology on the latter interpretation but not on the former.37 Bishop’s slogan is consistent with my current view on the former interpretation of “analytic epistemology” but not on the more inclusive interpretation. To make this a bit clearer, it will help to set out a taxonomy of some of the epistemological projects in which intuitions might be used as a source of evidence.

i)
One project aims to “capture” our epistemic intuitions (or some subset of them, for example our intuitions about when a person does and does not have knowledge or justified belief) by producing a theory that will entail those intuitions – perhaps, as Bishop notes, with “some light revisions in the service of power or clarity” (p. 123). Of course, if different people or different groups of people have significantly different intuitions, then this project will fragment into a cluster of projects, each aimed at capturing the intuitions of a different group.
ii) A second project assumes that there is a tacit or implicit theory underlying people’s ability to produce epistemic intuitions. The goal of the project is to give an account of that implicit theory. Often this project is not clearly distinguished from (i), since it seems natural to suppose that the implicit theory underlying our intuitions just is the theory that captures those intuitions. However, I think it is important to keep these two distinct, since there can be lots of different ways of capturing a person’s epistemic intuitions, just as there can be lots of ways of capturing a person’s grammatical intuitions. But on one understanding of what an implicit theory is, at most one of these could be the implicit theory underlying the person’s intuitions.38

iii) A third project aims to analyze or characterize people’s epistemic concepts, like the concept of knowledge or the concept of epistemic justification. Here again, it is easy to ignore the difference between this project and the one that precedes it, since some theorists maintain that concepts just are the tacit theories that guide our application of the associated term. But others adamantly insist that this is not the case.39

iv) A fourth project is the one that is center stage in Sosa’s essay. The goal is to characterize “the nature and conditions of human knowledge and other rational desiderata” like justification. Many philosophers tend to conflate this project with the previous one, where the goal is conceptual analysis. But as Sosa rightly insists, this is a mistake. Analyzing our own (or someone else’s) concept of knowledge is quite distinct from characterizing the nature of knowledge, just as analyzing our concept of water or of disease is distinct from characterizing the nature of water or of disease.

v) So far, all the projects on my list are instances of what I think of as descriptive epistemology. They can all be pursued without making any explicitly normative claims.
But intuitions might also be used as evidence for a variety of theories in normative epistemology. As Bishop notes, the normative epistemological project that has been center stage in my work “focuses on the evaluation of methods of inquiry. It tries to say which ways of going about the quest for knowledge – which ways of building and rebuilding one’s doxastic house – are good ones, which are bad ones, and why.” Other normative projects aim at evaluating beliefs or other cognitive states. On my view, intuitions are an entirely appropriate source of evidence in the first three of these projects, provided that the cautionary note at the end of (i) is kept in mind. Other sorts of evidence may be of considerable importance in (ii) and (iii), though how that evidence is used will turn on the resolution of some vexed questions about the nature of concepts and implicit theories.40 In (iv) and (v), the use of intuitions is much more problematic. I think the best argument against the use of intuitions in (iv) and (v) relies on an empirical premise: different people (and different groups of people) have different epistemic intuitions. If this is right, then it is hard to see why we should trust our intuitions rather than those of some group whose intuitions disagree with ours.41 This problem does not arise for the first three projects, since in those cases, in contrast with (iv) and (v), the goal of the project is to learn something about the psychology of the
replies
239
individuals offering the intuitions. If different people have different intuitions, then they may well have different implicit theories or different epistemic concepts. But the fact that people’s intuitions differ does not pose a prima facie problem for the use of those intuitions as data.

Another widely discussed argument against using intuitions in projects like (iv) and (v) is the “calibration” argument developed by Robert Cummins (1998). This argument draws an analogy between intuition and instruments or procedures used to make observations or gather data in science. Before they are trusted, instruments and procedures need to be calibrated, and to do this “an invariable requirement . . . is that there be, in at least some cases, access to the target that is independent of the instrument or procedure to be calibrated” (Cummins 1998: 117). But in most cases where intuitions are used in philosophy, Cummins maintains, there is no independent access to the target, and in cases where there is independent access, the intuitions are typically superfluous. I’m much less impressed by this argument since, as a number of authors have noted, it threatens to generalize into a much more pervasive skepticism. Here is how Sosa makes the point: “The calibration objection, if effective against intuitions, will prove a skeptical quicksand that engulfs all knowledge, not just the intuitive. No source will then survive, since none can be calibrated without eventual self-dependence. That is so at least for sources broadly enough conceived: as, say, memory, introspection, and perception” (Sosa 2007b: 64). Moreover, even if there is some way of blocking the extension of the argument to perception and memory, it is likely that renouncing reliance on uncalibrated intuition would undermine large parts of mathematics. And that, I think, counts as a reductio of the calibration argument.
When Bishop says that analytic epistemology searches for answers in the wrong place, he means that analytic epistemologists rely on epistemic intuitions as a principal source of data. If the epistemologist’s project is (iv) or (v), and if intuitions differ in different demographic groups, then there is excellent reason to think that the epistemologist is looking in the wrong place. When Bishop says that analytic epistemologists look for answers in the wrong way, he means that analytic epistemologists are doing “bad science” (p. 119). Here there is no need to restrict the criticism to a subset of the epistemological projects listed above. Indeed, there is no need to restrict the criticism to analytic epistemology. Philosophers in the analytic tradition regularly make claims about “our” intuition and use these as data for claims about the “ordinary conception” of x or the “folk theory” of y. Moreover, these claims are often of central importance to the philosopher’s project. Here is a passage in which Frank Jackson makes the motivation for such claims admirably clear.

How . . . should we go about defining our subject qua metaphysicians when we ask about Ks for some K-kind of interest to us? It depends on what we are interested in doing. If I say that what I mean – never mind what others mean – by a free action is one such that the agent would have done otherwise if he or she had chosen to, then the existence of free actions so conceived will be secured, and so will the compatibility of free action with determinism. If I say what I mean – never mind what others mean – by ‘belief’ is any information-carrying state that causes subjects to utter sentences like ‘I believe that snow is white’, the existence of beliefs so conceived will be safe from the eliminativists’ arguments.
240
stephen stich
But in neither case will I have much of an audience. I have turned interesting philosophical debates into easy exercises in deduction from stipulative definitions together with accepted facts. What then are the interesting philosophical questions that we are seeking to address when we debate the existence of free action and its compatibility with determinism, or about eliminativism concerning intentional psychology? What we are seeking to address is whether free action according to our ordinary conception, or something suitably close to our ordinary conception, exists and is compatible with determinism, and whether intentional states according to our ordinary conception, or something suitably close to it, will survive what cognitive science reveals about the operations of our brains. (Jackson 1998: 31, emphasis in the first paragraph added; emphasis in the second paragraph is Jackson’s)
Obviously, claims about “our ordinary conception” of belief or free will are empirical claims. What sort of evidence does Jackson have for these claims? Here is what Jackson says:

I am sometimes asked – in a tone that suggests that the question is a major objection – why, if conceptual analysis is concerned to elucidate what governs our classificatory practice, don’t I advocate doing serious opinion polls on people’s responses to various cases? My answer is that I do – when it is necessary. Everyone who presents the Gettier cases to a class of students is doing their own bit of fieldwork, and we all know the answer they get in the vast majority of cases. But it is also true that often we know that our own case is typical and so can generalize from it to others. (Jackson 1998: 36–7)
I think the views that Jackson is expressing in this last quote are (or were until very recently) widely shared among philosophers in the analytic tradition. By my lights, they constitute a truly shocking indictment of that tradition. The “serious opinion polls” that Jackson and other analytic philosophers conduct to support their claims about the concepts or implicit theories underlying “our” classificatory practice violate just about every methodological standard that social scientists have adopted to avoid bias and distortion in survey research. The opinions solicited in this sort of “fieldwork” are typically those of students in elite universities, who are hardly a representative sample of “the folk.” Moreover, the students who take philosophy courses – particularly advanced courses where much of this fieldwork is done – are a self-selected group who are unlikely to be representative even of students at elite universities. We know very little about the factors that lead students to take advanced philosophy courses. But one possibility that surely must be taken seriously is that students whose intuitions do not match those of their teachers do not enjoy or do well in lower-level courses and do not continue on in philosophy.42 If one were looking for a textbook example to illustrate what social scientists mean by sample bias, the sort of “serious opinion poll” that Jackson has in mind would be a fine candidate. It would also serve as a fine example of experimenter bias, since the informal polls are conducted by an authority figure with a strong antecedent belief about how the “experiment” should turn out. As if all of this were not enough, the typical classroom opinion poll conducted by philosophers requires that students indicate their judgment in a very public way, and social psychologists have long known that procedures like this have a strong tendency to suppress dissenting opinions.43
The fact that disagreement in intuition in different demographic groups would pose a serious challenge for projects (iv) and (v) underscores the importance of doing carefully controlled, methodologically sophisticated studies that look for disagreement across demographic divides. But even if, like Jackson, one’s goal is conceptual analysis or the characterization of an implicit theory underlying classificatory practice, it is important to get out of the armchair and do well-designed, carefully controlled studies that look for systematic differences in intuitions. For without this sort of work, there is no way of knowing whether one’s own intuitions (or concepts or implicit theories) differ from those of others. And thus there is no way of addressing Jackson’s concern that one may have turned an interesting philosophical debate into an uninteresting exercise for which there may not be much of an audience.
2 The prima facie tension

At the end of his first section, Bishop suggests that there is a prima facie tension between my view about the use of intuition in epistemology in Fragmentation and some of the “hints” to be found in Weinberg, Nichols and Stich (2001). I think Bishop is right; there is some tension there. But the issues are rather more delicate and complex than Bishop suggests. In Fragmentation, I saw little use for epistemic intuitions or for the concepts of epistemic evaluation that they reflect because I assumed they were culturally local and idiosyncratic products of a process of cultural transmission that had very likely led to different concepts and different intuitions in other cultures. Why, I asked, should we think that our intuitions and our concepts of epistemic evaluation are any better than the many alternatives that were likely to exist in other groups? But, as Bishop rightly notes, when Fragmentation was written, there was no serious evidence for the claim that epistemic intuitions and concepts varied cross-culturally. It was little more than a guess.

A decade later, Weinberg, Nichols and I decided to explore the issue empirically. What we found offered a bit of evidence for the speculation in Fragmentation. However, the picture was more complicated than I had imagined. Yes, indeed, some epistemic intuitions – including Gettier intuitions! – did appear to be different in different demographic groups. But, at least in the small number of groups we looked at, there were also some intuitions that did not vary significantly. All of our subjects agreed that beliefs based only on a “special feeling” do not count as instances of knowledge. So perhaps there is a universal core to folk epistemology – a set of principles on which everyone agrees. We are, to put it mildly, very far from knowing that this is the case.
It will take a great deal more work in experimental philosophy, looking at many more cases in many more demographic groups, before we have any serious idea about the nature or even the existence of this universal core. Suppose it turns out that there is such a universal core. Would there be any normative implications? Could the discovery of a universal core be used to argue that experimental philosophy can play more than the “normatively modest” role that Bishop describes – “confirming or disconfirming the empirical assumptions or empirical implications of genuinely normative philosophical theories” but “incapable of offering up positive
normative theories or principles”? (p. 122). The answer is far from clear. On the one hand, the discovery of universal intuitions and core principles would undermine the demographic variation argument which, I maintain, is the best argument against the use of intuitions in epistemology – or, to be a bit more precise, it would undermine the argument when applied to those intuitions. On the other hand, the mere fact that a cluster of intuitions or a set of principles is universally shared surely does not entail that the intuitions can be relied on or that the principles are good ones. In deciding whether or not to rely on universally shared intuitions and principles, one would like to know much more about the source of these intuitions and principles. Why does everyone have them? How did they come to be universally shared? There are some answers to these questions that would cast serious doubt on the reliability of intuition and other answers that would encourage trust. In his critique of the use of intuitions in philosophy, Cummins (1998) argues that all the plausible hypotheses about the source of intuitions fall into the former category. I’m not sure he’s right about this. What I am sure of is that before we can say with any assurance whether or not experimental philosophy can play more than a normatively modest role in epistemology, we will have to know a great deal more about the etiology of epistemic intuitions and think much harder about the ways in which etiologies can sustain or undermine confidence in intuition. So I am in agreement with Bishop when he writes: “I don’t believe there is a case to make that our intuitions are in principle irrelevant evidence for or against a genuinely prescriptive epistemological theory. (And as far as I know, no one, including Stich, has made such a case)” (p. 124; emphasis in the original).
If there is a case to be made for or against an evidentiary role for intuitions in prescriptive epistemology, it will not be an “in principle” argument. Rather, it will be an argument in which both an empirically supported account of the extent to which epistemic intuitions are (or are not) demographically variable and an empirically supported account of the etiology of intuitions play a central role. Moreover, even if it does turn out that some intuitions are good prima facie evidence for normative epistemological claims, it is entirely possible that they may be overridden by other, better sources of evidence. Here again, I am in broad agreement with Bishop, who notes that “whether it is reasonable to ignore [intuitions] in our epistemological theorizing depends on whether we have an evidential source that is better than our intuition” (pp. 124–5).
3 Pragmatism vs. Strategic Reliabilism

In Chapter 5 of Fragmentation, I offered an account of what it is for a belief to be true, and argued that if that account is on the right track, then most people won’t find true belief to be either intrinsically or instrumentally valuable, once they have a clear view of what true belief is. I went on to argue that rather than evaluating belief-forming cognitive processes as a reliabilist would, by asking how well they do at producing true beliefs, we should instead evaluate them on pragmatic grounds, asking which system of cognitive processes would be most likely to achieve those things that we do find intrinsically valuable. In section 3.1 of his paper, Bishop does an admirable job of sketching my
arguments for the conclusion that, on reflection, most of us will recognize that we have no good reason to value true beliefs over true* beliefs – where most true* beliefs are also true, and those that are not are pragmatically preferable. Though these arguments are aimed at supporting what Bishop maintains is “one of the more shocking positions in the Stich oeuvre” (p. 125), he courageously endorses them: “These are powerful arguments. And properly understood, I do not want to challenge them” (p. 127).44 But having endorsed the shocking bit, Bishop resists taking the next step. Even though “we have no good reason to value true beliefs (intrinsically or instrumentally)” (p. 127), he nonetheless thinks that a version of reliabilism can be defended as a way of evaluating belief-forming strategies. The sort of reliabilism that Bishop has in mind is the sophisticated and innovative “Strategic Reliabilism” that he has developed in collaboration with J. D. Trout (Bishop and Trout 2005). The defense he proposes is one that I could hardly oppose in principle, for what he maintains is that Strategic Reliabilism is better than direct pragmatic assessment of belief-forming strategies on pragmatic grounds. Despite initial appearances, there is nothing paradoxical about this approach. A nuanced reliabilism may be pragmatically superior to the direct pragmatic assessment of cognitive strategies if the latter approach turns out to be psychologically unrealistic, recommending belief-forming strategies that are either psychologically impossible or so difficult that the costs would be prohibitive. Bishop contends that we have a strong tendency to value true beliefs and a strong inclination to believe p when we are convinced that p is true, and he thinks these psychological facts spell trouble for direct pragmatism. The cool-eyed pragmatist will be the first to insist that a theory of cognitive evaluation should not make demands on us that we can’t meet.
Our judgment and decision-making capacities are deeply imperfect, and the limits on our memory, computing power, time, energy, patience, and will are legion (Stich 1990: 149–58). I’m suggesting the pragmatist add one more imperfection to the list: We tend to value truth, even when, from a pragmatic perspective, we shouldn’t. Once we take this fact about ourselves to heart, the pragmatist is faced with a familiar challenge: What sorts of normative principles, theories or recommendations can we offer that will effectively guide our reasoning but that will clearly recognize and compensate for our built-in limitations and imperfections? Perhaps our regrettable attraction to true belief gives us pragmatic grounds for placing truth at the center of our epistemological theory. Not because truth is more valuable to us than truth* (or truth** or truth*** . . .) but just because we’re stuck valuing true belief. To compensate for our unfortunate attraction to true belief, two different strategies suggest themselves. A direct strategy, which Stich adopts in Fragmentation, places pragmatic virtues center stage. Normative claims about cognitive matters – generalizations about good reasoning as well as evaluations of particular cognitive states and processes – are framed directly in terms of what we intrinsically value. An indirect strategy would place truth (or some other non-pragmatic category) center stage but would find some way to license the adoption of false (or true*) beliefs when they serve our interest. What might such a theory look like? I suggest it might look something like Strategic Reliabilism (pp. 128–9, italics added).
Now I’m afraid that all of this goes by a bit too quickly for me. I’m not sure whether Bishop is claiming:
1 that in our role as epistemologists the (putative) fact that we’re stuck valuing true belief makes it impossible or very difficult to advocate belief-forming strategy A over belief-forming strategy B if we believe that B will lead to more true beliefs than A; or

2 that in our role as cognitive agents, we are so strongly attracted to true beliefs that we cannot adopt a belief-forming strategy that leads us to form what we take to be a false belief; or

3 that since (2) is the case, epistemologists ought not to advocate belief-forming strategies that urge cognitive agents to form beliefs they take to be false or resist forming beliefs they take to be true.

The first of these options strikes me as distinctly unpromising. When wearing my epistemologist’s hat, I don’t find it at all difficult to advocate the sort of belief-forming strategy that (1) claims it is all but impossible to advocate. Nor am I alone. In one of the more memorable sections of their book, Bishop and Trout (2005) take L. Jonathan Cohen to task for doing much the same thing.45 The most charitable interpretation, I suspect, is that Bishop is advocating both (2) and (3). But, for two reasons, I am unconvinced. The first reason is that, while Bishop asserts (2) on several occasions, he offers no serious evidence. Moreover, since Bishop has an admirable record of citing well-done studies to back up his empirical claims, there is good reason to think that this is not merely an oversight. I know of no studies indicating that “we are naturally drawn to true belief even when it is against our interests to do so” (p. 125), and I am quite confident that if Bishop knew of any he would have mentioned them prominently. The second reason is that, for the issue at hand, (2) is not really the relevant claim.
What we need to know is not whether people are naturally drawn to true belief, but whether this putative attraction would survive if people became convinced of (something like) my account of what true belief is. It is pretty clear that Bishop thinks that it would:

We value the truth, despite the results of our Stich-inspired deliberations on its idiosyncrasies and practical failings. The truth is like the prodigal son. We might realize that he does not deserve our love, our care, our energy; we might realize that we would be much better off committing those feelings and resources to a more deserving child. But despite what our heads say, we can’t help but embrace him. (p. 127)
He might be right, of course. But the trajectory of people’s values, preferences, and patterns of belief formation under counterfactual conditions – particularly counterfactual conditions that are quite different from anything we’ve observed – is notoriously hard to predict. And here again, Bishop offers no evidence; his prediction is simply speculation. Since his argument turns crucially on unsupported speculation, I think the verdict must be that Bishop has failed to make a convincing case for the pragmatic superiority of Strategic Reliabilism.46
Notes

1 Devitt’s arguments for this highly contrarian claim are set out at length in Devitt (2006). And he calls me an enfant terrible! The quote in the text is from section 2 of Devitt’s chapter in this volume.
2 (1) and (2) should be viewed as schemas which can be turned into sentences by substituting a suitable predicate for ‘F.’ A relation, R, satisfies a schema if each substitution instance is true.
3 For more on the work of Cummins, Egan and Ramsey, see my response to Egan in this volume.
4 There is also some textual evidence that suggests this is what Devitt would say. In his book, Coming to Our Senses, Devitt describes “a methodology for descriptive tasks in general” in which we are encouraged to attend to what the appropriate experts would say when presented with “descriptions of phenomena” set out in “thought experiments” (Devitt 1996: 72–6). The method that Devitt sketches in his chapter in this volume is clearly intended to be a special case of that general methodology.
5 See, for example, Fodor 1987, 1990, 1991.
6 See, for example, the essays collected in Kahneman, Slovic and Tversky (1982) and Gilovich, Griffin and Kahneman (2002).
7 Nichols and Stich (2003).
8 Deconstructing the Mind, pp. 26–7.
9 See, for example, Nichols, Stich and Weinberg (2003) and Swain, Alexander and Weinberg (forthcoming).
10 Though Goldman concedes that “a strict definition of high- versus low-level mindreading processes is lacking,” he offers the following somewhat more detailed characterization of high-level mindreading: “High-level mindreading is mindreading with one or more of the following features: (a) it targets mental states of a relatively complex nature, such as the propositional attitudes; (b) some components of the mindreading process are subject to voluntary control; (c) the process has some degree of accessibility to consciousness” (Goldman 2006: 147).
11 Goldman makes much the same observation at several places in his book. Here is an example: “The standard treatments of ‘theory of mind’ have primarily studied the locus of high-level mindreading” (p. 141). Cf. also p. 20. Of course, it is no accident that philosophers have focused on high-level mindreading, since that is the sort of mindreading that is most directly relevant to debates in the philosophy of mind. For a discussion of the links between issues in the philosophy of mind and theories of high-level mindreading, see Nichols and Stich (2003), pp. 5–9.
12 The chapter in question is called “High-Level Simulational Mindreading.” The first three substantive sections of the chapter (7.2–7.4) are devoted to addressing a potential problem for Goldman’s account of high-level mindreading. That account gives an important role to the process of enactment-imagining (or E-imagining) which Goldman characterizes (for the first time!) earlier in the book. The problem that Goldman seeks to address is: “Can E-imagining produce states that truly resemble their intended counterparts?” (p. 149). “How similar is E-imagined desire-that-p to genuine desire-that-p? How similar is E-imagined belief-that-p to genuine belief-that-p?” (p. 151). As Goldman notes, for his theory to be plausible the answer must be that they are often quite similar. But, Goldman laments, “detailed research on these topics, unfortunately, is sparse” (p. 151). What to do? Goldman’s strategy is to look at evidence regarding “two species of E-imagination” that have little or nothing to do with mindreading, viz. motor imagery and visual imagery, “in the hope that what we learn is more widely applicable” (p. 151). In the sections devoted to these topics (7.3 and 7.4), Goldman cites lots of evidence from neuroscience, some of which is also discussed in his chapter in this volume.
But, of course, it would be absurd to suggest that Nichols and I should have taken account of this evidence, since (i) it has no direct bearing on mindreading, and (ii) its indirect bearing, via the “hope” that the similarities discovered would be “more widely applicable,” emerged only after Goldman spelled out the role of E-imagination in his theory of high-level mindreading, in a book published three years after ours. The remaining sections of Goldman’s chapter (7.5–7.13) are devoted to issues that are more directly related to high-level mindreading. But in those sections there are only two references to neuroscientific studies that bear directly on high-level mindreading, one by Mitchell, Banaji and Macrae, the other by Samson et al., and both of these were published in 2005. So the “serious lacuna” about which Goldman complains must be that Nichols and I failed to take note of research that had not yet been done. Guilty as charged!
13 In Mindreading, we adopt this policy quite explicitly (cf. p. 134).
14 In Mindreading, pp. 132–4, we offer a list of ten examples ranging from predicting what the consequences would be if war broke out in Saudi Arabia to having a conception of people as “peepholes through which the world reappears, possibly transformed.” It would have been easy to assemble a much longer list.
15 Indeed, it appears that he no longer wants to defend every application of the term that he himself has proposed. The list mentioned in the previous footnote includes several examples of phenomena that Goldman once cited in support of simulation theory but which do not count as mental simulations on his current account, which we will examine below.
16 For the debate over the nature of concepts, see Margolis and Laurence (1999), Murphy (2002) and Machery (forthcoming). For the debate about how to interpret innateness claims, see Cowie (1999), Samuels (2002) and Khalidi (2007).
17 Whatever, exactly, that means.
18 Whatever, exactly, that means. Remember that “the whole issue of concepts and their possession is deeply opaque.”
19 Sterelny’s shallow modules “play an important role in estimating the doxastic world of other agents by factoring-in the role of differences in perceptual points of view” (p.···).
20 On Leslie’s theory this is presumably done not by ToMM, but by SP.
21 Though there is one minor slip in the quoted passage.
S&S did not endorse the view that emotions play an important role in the acquisition of norms, since we could find no persuasive evidence either for or against the view. For the record, here is what Sripada and I say on the matter: Though there is a large philosophical literature debating the best interpretation of innateness claims in psychology (Cowie, 1999; Griffiths, 2002; Samuels, 2002), for our purposes we can consider a normative rule to be innate if various genetic and developmental factors make it the case that the rule would emerge in the norm database in a wide range of environmental conditions, even if (as a result of some extraordinary set of circumstances) the child’s “cultural parents” – the people she encounters during the norm acquisition process – do not have the norm in their norm data base. If there were innate norms of this sort then they would almost certainly be cultural universals. Barring extraordinary circumstances, we should expect to find them in all human groups. (S&S: 299, emphasis in the original)
22 For a useful discussion of some of the literature, see Gert (2005).
23 For a particularly useful discussion, see Taylor (1978).
24 For further discussion of this interpretation of Turiel, see Kelly and Stich (2007).
25 Kelly et al. (2007); Kelly and Stich (2007).
26 Sripada and I made it clear that this is one way of understanding the relation between moral norms and norms as characterized by our theory. “One possibility is that moral rules might turn out to constitute a natural kind that is identical with the norms characterized by our theory. On this view, our intuitions about which rules are moral are sometimes simply mistaken, in much the same way that the folk intuition that whales are a kind of fish was mistaken” (S&S: 291).
27 For example, when a university official in charge of distributing research grants decides not to award me a grant, her act triggers disappointment in me. But when I am in charge of distributing research grants and I decide not to award one to a colleague, that action does not lead me to feel either guilt or shame. It would be easy enough to generate countless additional examples.
28 For some evidence in support of these interpretations, see Stich (1988): 97–9 (for Goodman), and fn. 15 (for Strawson).
29 See, for example, Cummins (1998) and Kornblith (2002), ch. 5.
30 In another paper on the use of intuitions in philosophy, Sosa writes: “It is often claimed that analytic philosophy appeals to armchair intuitions in the service of ‘conceptual analysis’. But this is deplorably misleading. The use of intuitions in philosophy should not be tied exclusively to conceptual analysis” (Sosa 2007a: 100). Sosa makes an almost identical comment in Sosa (forthcoming), Sec. V.
31 For recent arguments of this sort, see Pust (2000) and Williamson (2007).
32 In Weinberg et al. (2001), we speculate that the differences we find between the intuitions of East Asians and Westerners are linked to systematic differences in cognitive processing that Richard Nisbett and his associates have found between these two groups. In a book reviewing some of that work, Nisbett (2003) makes a persuasive case that, while East Asian and Western cognitive processing have different strengths and shortcomings, it is singularly implausible to think that one style of thought is generally superior to or more accurate than the other.
33 As noted in Weinberg et al. (2001: 431 and fn.
5), theories of this sort have been suggested by a number of leading epistemologists, including Chisholm (1977), BonJour (1985), and Pollock and Cruz (1999).
34 Compare: “The fact that we value one commodity, called ‘knowledge’ or ‘justification’ among us, is no obstacle to our also valuing a different commodity, valued by some other community under that same label. And it is also compatible with our learning to value that second commodity once we are brought to understand it, even if we previously had no opinion on the matter.” (p. 108)
35 I’m grateful to Michael Bishop, Edouard Machery, and Jonathan Weinberg for helpful comments on earlier drafts of this reply.
36 For some discussion of the distinction between normative and descriptive epistemology, see Weinberg, Nichols and Stich (2001).
37 As emerged clearly in my exchange with Sosa in this volume, some epistemologists, including Sosa himself, avoid the appeal to conceptual analysis, and take intuitions to be a source of data about the nature of knowledge. Also, some have little or no interest in normative questions; they are concerned only to analyze epistemic concepts or to characterize the nature of knowledge.
38 Though not directly relevant to our current concerns, the issues here are surprisingly murky and difficult, as Kelby Mason and I discovered to our dismay in a recent correspondence with Frank Jackson (Jackson, Mason and Stich forthcoming).
39 For further discussion of these debates, see Margolis and Laurence (1999) and Machery (forthcoming).
40 Bishop agrees that intuitions are “a legitimate source of evidence” (p. 124) for theories in category (i). He does not mention categories (ii)–(iv).
41 For further discussion, see my Reply to Sosa.
42 This possibility was noted, albeit briefly, in Nichols, Stich and Weinberg (2003: 232), where we present some preliminary data indicating that the epistemic intuitions of students who have taken little or no philosophy differ from those of students who have taken two or more courses.
43 Ross and Nisbett (1991): ch. 2.
44 In endorsing these arguments, Bishop has little company. Most commentators refuse to take them seriously. The interesting critique in DePaul (forthcoming) is a noteworthy exception.
45 Ch. 8, sec. 3. The strategy that Cohen recommends, I hasten to add, is very different from the one I would recommend.
46 My thanks to Jonathan Weinberg for his acute and enormously helpful comments on several earlier drafts of this reply.
References Bishop, M. and Stich, S. (1998) “The Flight to Reference, or How Not to Make Progress in the Philosophy of Science.” Philosophy of Science 65: 33–49. Bishop, M. and Trout, J. D. (2005) Epistemology and the Psychology of Human Judgment. New York: Oxford University Press. BonJour, L. (1985) The Structure of Empirical Knowledge. Cambridge, Mass.: Harvard University Press. Carruthers, P., Laurence, S. and Stich, S. (eds.) (2005) The Innate Mind: Structure and Contents. New York: Oxford University Press. Carruthers, P., Laurence, S. and Stich, S. (eds.) (2006) The Innate Mind, Vol. 2: Culture and Cognition. New York: Oxford University Press. Carruthers, P., Laurence, S. and Stich, S. (eds.) (2007) The Innate Mind, Vol. 3: Foundations and the Future. New York: Oxford University Press. Chisholm, R. (1977) Theory of Knowledge. Englewood Cliffs, NJ: Prentice Hall. Clement, J. (1983) “A Conceptual Model Discussed by Galileo and Used Intuitively by Physics Students,” in D. Gentner and A. Stevens (eds.), Mental Models, Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 325–39. Cowie, F. (1999) What’s Within? Nativism Reconsidered. New York: Oxford University Press. Cummins, R. (1989) Meaning and Mental Representation. Cambridge, Mass.: MIT Press. Cummins, R. (1998) “Reflection on Reflective Equilibrium,” in M. DePaul and W. Ramsey (eds.), Rethinking Intuition, Lanham, Maryland: Rowman and Littlefield, pp. 113–27. DePaul, M. (forthcoming) “Ugly Analyses and Value,” to appear in D. Pritchard, A. Millar and A. Haddock (eds.), Epistemic Value. Oxford: Oxford University Press. Devitt, M. (1981) Designation. New York: Columbia University Press. Devitt, M. (1996) Coming to Our Senses: A Naturalistic Program for Semantic Localism. New York: Cambridge University Press. Devitt, M. (2006) Ignorance of Language. New York: Oxford University Press. Devitt, M. and Sterelny, K. (1999) Language and Reality: An Introduction to the Philosophy of Language, 2nd edn. (1st edn. 1987). 
Oxford: Blackwell. Dretske, F. (1981) Knowledge and the Flow of Information. Cambridge, Mass.: MIT Press.
Dwyer, S. (1999) “Moral Competence,” in K. Murasugi and R. Stainton (eds.), Philosophy and Linguistics, Boulder, CO: Westview Press, pp. 169–90. Dwyer, S. (2006) “How Good is the Linguistic Analogy?” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 2, Culture and Cognition, Oxford: Oxford University Press, pp. 237–56. Egan, F. (1992) “Individualism, Computation, and Perceptual Content,” Mind 101: 443–59. Egan, F. (1995) “Content and Computation,” Philosophical Review 104: 181–203. Egan, F. (1999) “In Defense of Narrow Mindedness,” Mind and Language 14: 177–94. Egan, F. (2003) “Naturalistic Inquiry: Where Does Mental Representation Fit In?” in L. Antony and N. Hornstein (eds.), Chomsky and His Critics, Oxford: Blackwell, pp. 89–104. Field, H. (1986) “The Deflationary Concept of Truth,” in G. MacDonald and C. Wright (eds.), Fact, Science and Value, Oxford: Blackwell. Field, H. (1994) “Deflationist Views of Meaning and Content.” Mind 103: 249–85. Fodor, J. (1981) “The Present Status of the Innateness Controversy,” in Representations, Cambridge, Mass.: MIT Press, pp. 257–316. Fodor, J. (1987) Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, Mass.: MIT Press. Fodor, J. (1990) “A Theory of Content, I and II,” in A Theory of Content and Other Essays. Cambridge, Mass.: MIT Press, pp. 51–136. Fodor, J. (1991) “Reply to Stich,” in B. Loewer and G. Rey (eds.), Meaning in Mind: Fodor and His Critics, Cambridge, Mass.: Blackwell, pp. 310–12. Fodor, J. (1998) Concepts. Oxford: Oxford University Press. Gallese, V. and Goldman, A. (1998) “Mirror Neurons and the Simulation Theory of Mind-Reading.” Trends in Cognitive Sciences 2(12): 493–501. Gert, B. (2005) “The Definition of Morality,” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2005 edn.), http://plato.stanford.edu/archives/fall2005/entries/morality-definition/. Gilovich, T., Griffin, D., and Kahneman, D. (eds.)
(2002) Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press. Goldman, A. (1986) Epistemology and Cognition. Cambridge, Mass.: Harvard University Press. Goldman, A. (1989) “Interpretation Psychologized.” Mind and Language 4: 161–85. Goldman, A. (1992) “In Defense of the Simulation Theory.” Mind and Language 7: 104–19. Goldman, A. (1993) Philosophical Applications of Cognitive Science. Boulder, CO: Westview Press. Goldman, A. (2006) Simulating Minds: The Philosophy, Psychology, and Neuroscience of Mindreading. New York: Oxford University Press. Gordon, R. (1986) “Folk Psychology as Simulation.” Mind and Language 1: 158–70. Griffiths, P. (2002) “What is Innateness?” Monist 85: 70–85. Haidt, J. and Joseph, C. (2007) “The Moral Mind: How 5 Sets of Innate Moral Intuitions Guide the Development of Many Culture-Specific Virtues, and Perhaps Even Modules,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 3, Foundations and the Future, New York: Oxford University Press, pp. 367–91. Harman, G. (1999) “Moral Philosophy and Linguistics,” in K. Brinkmann (ed.), Proceedings of the 20th World Congress of Philosophy, Vol. I: Ethics, Bowling Green, Ohio: Philosophy Documentation Center, pp. 107–15. Hauser, M. (2006) Moral Minds. New York: Ecco Press. Hawthorne, J. (2004) Knowledge and Lotteries. Oxford: Oxford University Press. Heal, J. (1986) “Replication and Functionalism,” in J. Butterfield (ed.), Language, Mind and Logic, Cambridge: Cambridge University Press.
Horwich, P. (1990) Truth. Oxford: Blackwell. Jackson, F. (1998) From Metaphysics to Ethics: A Defence of Conceptual Analysis. Oxford: Oxford University Press. Jackson, F., Mason, K. and Stich, S. (forthcoming) “Implicit Knowledge and Folk Psychology,” to appear in D. Braddon-Mitchell and R. Nola (eds.), Conceptual Analysis and Philosophical Naturalism, Cambridge, Mass.: MIT Press. Kahneman, D., Slovic, P. and Tversky, A. (eds.) (1982) Judgment Under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press. Kelly, D. and Stich, S. (2007) “Two Theories About the Cognitive Architecture Underlying Morality,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 3, Foundations and the Future, New York: Oxford University Press, pp. 348–66. Kelly, D., Stich, S., Haley, K., Eng, S., and Fessler, D. (2007) “Harm, Affect and the Moral/Conventional Distinction.” Mind and Language 22 (April): 117–31. Khalidi, M. (2007) “Innate Cognitive Capacities.” Mind and Language 22: 92–115. Kitcher, P. (1993) The Advancement of Science. Oxford: Oxford University Press. Kornblith, H. (2002) Knowledge and its Place in Nature. Oxford: Oxford University Press. Kripke, S. (1972) “Naming and Necessity,” in D. Davidson and G. Harman (eds.), Semantics of Natural Language, Dordrecht, The Netherlands: Reidel, pp. 253–355. Lycan, W. (1988) Judgement and Justification. Cambridge: Cambridge University Press. Machery, E. (forthcoming) Doing Without Concepts. New York: Oxford University Press. Machery, E., Mallon, R., Nichols, S., and Stich, S. (2004) “Semantics, Cross-Cultural Style,” Cognition 92: B1–B12. MacIntyre, A. (1957) “What Morality is Not.” Philosophy 32: 325–35. Mikhail, J. (in press) Rawls’ Linguistic Analogy. Cambridge: Cambridge University Press. Margolis, E. and Laurence, S. (eds.) (1999) Concepts: Core Readings. Cambridge, Mass.: MIT Press. McCloskey, M. (1983) “Naive Theories of Motion,” in D. Gentner and A.
Stevens (eds.), Mental Models, Hillsdale, NJ: Lawrence Erlbaum Associates, pp. 299–324. Millikan, R. (1984) Language, Thought, and Other Biological Categories: New Foundations for Realism. Cambridge, Mass.: MIT Press. Mitchell, J., Banaji, M., and Macrae, C. (2005) “The Link Between Social Cognition and Self-Referential Thought in the Medial Prefrontal Cortex.” Journal of Cognitive Neuroscience 17: 1306–15. Murphy, G. (2002) The Big Book of Concepts. Cambridge, Mass.: MIT Press. Nichols, S. and Stich, S. (1998) “Rethinking Co-cognition.” Mind and Language 13: 499–512. Nichols, S. and Stich, S. (2003) Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds. Oxford: Oxford University Press. Nichols, S., Stich, S. and Weinberg, J. (2003) “Meta-Skepticism: Meditations on Ethno-Epistemology,” in S. Luper (ed.), The Skeptics: Contemporary Essays, Aldershot, UK: Ashgate Publishing, pp. 227–47. Nisbett, R. (2003) The Geography of Thought: How Asians and Westerners Think Differently . . . and Why. New York: The Free Press. Papineau, D. (1987) Reality and Representation. Oxford: Blackwell. Pessin, A. and Goldberg, S. (eds.) (1996) The Twin Earth Chronicles: Twenty Years of Reflection on Hilary Putnam’s “The Meaning of Meaning.” Armonk, NY: M. E. Sharpe, Inc. Pollock, J. and Cruz, J. (1999) Contemporary Theories of Knowledge. Lanham, Maryland: Rowman and Littlefield.
Pust, Joel (2000) Intuitions as Evidence. New York: Garland. Putnam, H. (1975) “The Meaning of ‘Meaning,’ ” in K. Gunderson (ed.), Language, Mind and Knowledge: Minnesota Studies in the Philosophy of Science, Vol. 7, Minneapolis: University of Minnesota Press. Quine, W. (1960) Word and Object, Cambridge, Mass.: MIT Press. Ramsey, W. (2007) Representation Reconsidered. Cambridge: Cambridge University Press. Ross, L. and Nisbett, R. (1991) The Person and the Situation. New York: McGraw-Hill. Samson, D., Apperly, I., Kathirgamanathan, U., and Humphreys, G. (2005) “Seeing It My Way: A Case of Selective Deficit in Inhibiting Self-Perspective.” Brain 128: 1102–11. Samuels, R. (2002) “Nativism in Cognitive Science.” Mind and Language 17: 233–65. Scholl, B. and Leslie, A. (1999) “Modularity, Development and ‘Theory of Mind.’ ” Mind and Language 14: 131–53. Sosa, E. (2007a) “Experimental Philosophy and Philosophical Intuition.” Philosophical Studies 132: 99–107. Sosa, E. (2007b) A Virtue Epistemology. New York: Oxford University Press. Sosa, E. (2007c) “Intuitions: Their Nature and Epistemic Efficacy,” Grazer Philosophische Studien 74(1): 51–67, special issue: Philosophical Knowledge – Its Possibility and Scope, ed. by C. Beyer and A. Burri, Amsterdam: Rodopi. Sripada, C. and Stich, S. (2006) “A Framework for the Psychology of Norms,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 2, Culture and Cognition, Oxford: Oxford University Press, pp. 280–301. Stich, S. (1983) From Folk Psychology to Cognitive Science: The Case Against Belief, Cambridge, Mass.: MIT Press. Stich, S. (1988) “Reflective Equilibrium, Analytic Epistemology and the Problem of Cognitive Diversity.” Synthese 74: 391–413. Reprinted in M. DePaul and W. Ramsey (eds.), Rethinking Intuition, Lanham, Maryland: Rowman and Littlefield, 1998, pp. 95–112. (Page references are to the DePaul and Ramsey volume.) Stich, S. (1990) The Fragmentation of Reason. Cambridge, Mass.: MIT Press. Stich, S.
(1996) Deconstructing the Mind. New York: Oxford University Press. Stich, S. and Nichols, S. (1992) “Folk Psychology: Simulation or Tacit Theory.” Mind and Language 7: 35–71. Stich, S. and Nichols, S. (1997) “Cognitive Penetrability, Rationality and Restricted Simulation.” Mind and Language 12: 297–326. Stich, S. and Weinberg, J. (2001) “Jackson’s Empirical Assumptions.” Philosophy and Phenomenological Research 62: 637–43. Swain, S., Alexander, J., and Weinberg, J. (forthcoming) “The Instability of Philosophical Intuitions: Running Hot and Cold on Truetemp,” to appear in Philosophy and Phenomenological Research. Taylor, P. (1978) “On Taking the Moral Point of View,” in P. French, T. Uehling and H. Wettstein (eds.), Midwest Studies in Philosophy 3, Studies in Ethical Theory, Morris, Minnesota: University of Minnesota. Turiel, E. (1979) “Distinct Conceptual and Developmental Domains: Social Convention and Morality,” in H. Howe, and C. Keasey (eds.), Nebraska Symposium on Motivation, 1977: Social Cognitive Development, Lincoln: University of Nebraska Press, Vol. 25, pp. 77–116. Turiel, E. (1983) The Development of Social Knowledge. Cambridge: Cambridge University Press. Turiel, E. and Nucci, L. (1978) “Social Interactions and the Development of Social Concepts in Preschool Children.” Child Development 49: 400–7.
Turiel, E., Killen, M., and Helwig, C. (1987) “Morality: Its Structure, Functions, and Vagaries,” in J. Kagan and S. Lamb (eds.), The Emergence of Morality in Young Children, Chicago: The University of Chicago Press. Wallace, G. and Walker, A. (eds.) (1970) The Definition of Morality. London: Methuen. Weinberg, J., Nichols, S. and Stich, S. (2001) “Normativity and Epistemic Intuitions.” Philosophical Topics 29: 429–60. Williamson, T. (2000) Knowledge and Its Limits. Oxford: Oxford University Press. Williamson, T. (2007) The Philosophy of Philosophy. Oxford: Blackwell.
List of Publications by Stephen Stich
Stephen Stich (1983) From Folk Psychology to Cognitive Science: The Case Against Belief, Cambridge, MA: Bradford Books / MIT Press. Italian translation: Dalla Psicologia Del Senso Comune Alla Scienza Cognitiva. Bologna, Italy: Il Mulino (1994). Excerpts reprinted in: (1) William Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell (1990), pp. 361–71. (2) Scott M. Christensen and Dale R. Turner (eds.), Folk Psychology and the Philosophy of Mind, Hillsdale, NJ: Lawrence Erlbaum Associates (1993), pp. 82–117. (3) Eduardo Rabossi (ed.), Filosofia y Ciencia Cognitiva, Buenos Aires & Barcelona: Editorial Paidos (1993). Stephen Stich (1990) The Fragmentation of Reason: Preface to a Pragmatic Theory of Cognitive Evaluation. Cambridge, MA: Bradford Books / MIT Press. Italian translation: La Frammentazione della Ragione. Bologna, Italy: Il Mulino (1996). Japanese translation: Tokyo: Keiso Shobo (2006). Excerpts reprinted in Hilary Kornblith (ed.), Naturalizing Epistemology, 2nd edn., Cambridge, MA: MIT Press, pp. 393–426. Stephen Stich (1996) Deconstructing the Mind. New York: Oxford University Press. Excerpts reprinted in Geert Keil and Herbert Schnaedelbach (eds.), Naturalismus. Philosophische Beitraege. Frankfurt am Main: Suhrkamp Verlag (2000), pp. 92–127. Shaun Nichols and Stephen Stich (2003) Mindreading. Oxford: Oxford University Press.
Anthologies Stephen Stich (ed.) (1975) Innate Ideas. Berkeley and London: University of California Press. David A. Jackson and Stephen Stich (eds.) (1979) The Recombinant DNA Debate. Englewood Cliffs, NJ: Prentice-Hall. William Ramsey, Stephen Stich and David E. Rumelhart (eds.) (1991) Philosophy and Connectionist Theory. Hillsdale, NJ: Lawrence Erlbaum Associates. Stephen Stich and Ted A. Warfield (eds.) (1994) Mental Representation. Oxford: Blackwell. Adam Morton and Stephen Stich (eds.) (1996) Benacerraf and His Critics. Oxford: Blackwell. Michael Bishop, Richard Samuels and Stephen Stich (eds.) (2000) Synthese, Special Issue on Rationality, Vol. 122, Nos. 1–2, pp. 1–244.
Peter Carruthers, Stephen Stich and Michael Siegal (eds.) (2002) The Cognitive Basis of Science. Cambridge: Cambridge University Press. Ted A. Warfield and Stephen Stich (eds.) (2003) The Blackwell Guide to Philosophy of Mind. Oxford: Blackwell. Peter Carruthers, Stephen Laurence and Stephen Stich (eds.) (2005) The Innate Mind, Vol. 1: Structure and Contents. Oxford: Oxford University Press. Peter Carruthers, Stephen Laurence and Stephen Stich (eds.) (2006) The Innate Mind, Vol. 2: Culture and Cognition. New York: Oxford University Press. Peter Carruthers, Stephen Laurence and Stephen Stich (eds.) (2007) The Innate Mind, Vol. 3: Foundations and the Future. New York: Oxford University Press. Eric Margolis, Richard Samuels and Stephen Stich (eds.) (forthcoming) The Oxford Handbook of the Philosophy of Cognitive Science. Oxford: Oxford University Press. John M. Doris, Gilbert Harman, Shaun Nichols, Jesse Prinz, Walter Sinnott-Armstrong, and Stephen Stich (eds.) (forthcoming) The Oxford Handbook of Moral Psychology. Oxford: Oxford University Press.
Articles and Reviews Peter G. Hinman, Jaegwon Kim and Stephen Stich (1968) “Logical Truth Revisited,” Journal of Philosophy 65(17): 495–500. Stephen Stich (1970) “Dissonant Notes on the Theory of Reference,” Nous 4(4): 385–97. Stephen Stich (1971) “What Every Speaker Knows,” Philosophical Review 80(4): 476–96. Stephen Stich (1972) “Grammar, Psychology and Indeterminacy,” Journal of Philosophy 69(22): 799–818. Reprinted in: (1) Ned J. Block (ed.), Readings in the Philosophy of Psychology, Vol. 2, Cambridge, MA: Harvard University Press (1981), pp. 208–22. (2) Jerrold J. Katz (ed.), The Philosophy of Linguistics, Oxford: Oxford University Press (1985), pp. 126–45. (3) Jay L. Garfield (ed.), Foundations of Cognitive Science: The Essential Readings, New York: Paragon Press (1990), pp. 314–31. (4) Carlos Otero (ed.), Noam Chomsky: Critical Assessments, Vol. II, Philosophy, London: Routledge (1994), pp. 223–41. Spanish translation: “Gramatica, Psicologia e Indeterminacion,” in Cuadernos Teorema: Debate Sobre la Teoria de la Ciencia Linquistica, Valencia, Spain (1978), pp. 1–33. Stephen Stich, John Tinnon and Lawrence Sklar (1973) “Entailment and the Verificationist Program,” Ratio (English edn.) XV(1): 84–97. German translation: “Die logische Folge und das Programm der Verifakationsanhanger,” Ratio (German edn.) 15(1) (1973): 79–92. Stephen Stich (1973) “What Every Grammar Does,” Philosophia, 3(1): 85–96. Stephen Stich (1974) “Review of The Underlying Reality of Language and Its Philosophical Import by Jerrold J. Katz,” Philosophical Review 83(2): 259–63. Stephen Stich (1975) “The Idea of Innateness,” in S. Stich (ed.), Innate Ideas, Berkeley & London: University of California Press, pp. 1–22. Stephen Stich (1975) “Competence and Indeterminacy,” in Jessica Wirth and David Cohen (eds.), The Testing of Linguistic Hypotheses, Papers from the University of Wisconsin-Milwaukee Linguistics Group Third Annual Conference, Washington, DC: Hemisphere Publishing & John Wiley, pp. 
93–109.
Stephen Stich (1975) “Logical Form and Natural Language,” Philosophical Studies 28(6): 397–418. Stephen Stich (1976) “Davidson’s Semantic Program,” Canadian Journal of Philosophy 4(2): 201–27. Stephen Stich (1978) “The Recombinant DNA Debate,” Philosophy and Public Affairs 7(3): 187–205. Reprinted in: (1) D. A. Jackson and S. P. Stich (eds.), The Recombinant DNA Debate, Englewood Cliffs, NJ: Prentice-Hall (1979), pp. 183–202. (2) James Humber and Robert Almeder (eds.), Biomedical Ethics and the Law, 2nd edn., New York: Plenum (1979), pp. 443–57. (3) Jack Dowie and Paul Lefrere (eds.), Risk and Chance: Selected Readings, Milton Keynes: The Open University Press (1980), pp. 180–98. (4) John Arthur (ed.), Morality and Moral Controversies, Englewood Cliffs, NJ: Prentice-Hall (1981), pp. 355–70. (With a new foreword sketching the history of the debate.) (5) Joel Feinberg (ed.), Reason and Responsibility, 5th & 6th edns., Wadsworth Publishing (1981), pp. 97–8. (Excerpt under the title, “Pascal’s Wager and the Doomsday Scenario Argument.”) (6) Marshall Cohen, Thomas Nagel and Thomas Scanlon (eds.), Medicine and Moral Philosophy, Princeton: Princeton University Press (1981), pp. 168–86. (7) Tom Beauchamp and LeRoy Walters (eds.), Contemporary Issues in Bioethics, 2nd edn., Wadsworth (1982), pp. 590–8. (8) Judith Areen, Patricia A. King, Steven Goldberg and Alexander Capron (eds.), Law, Science and Medicine (University Casebook Series), Mineola, NY: Foundation Press (1984). (9) Ralph W. Clark (ed.), An Introduction to Philosophical Thinking, St Paul, MN: West Publishing (1987), pp. 477–85. (10) Michael Ruse (ed.), Philosophy of Biology, New York: Macmillan Publishing (1989), pp. 229–43. (11) Eleonore Stump and Michael Murray (eds.), Philosophy of Religion: The Big Questions, Oxford: Blackwell (1998), pp. 300–2.
(Excerpt under the title: “The Recombinant DNA Debate: a Difficulty for Pascalian-Style Wagering.”) (12) Raziel Abelson and Marie-Louise Friquegnon (eds.), Ethics for Modern Life, 6th edn., Boston: St Martins/Bedford Press (2003), pp. 485–93. (Excerpt under the title “Worth the Risk.”) French translation: “Le Debat Sur Les Manipulations De L’ADN,” in Ethique Et Biologie, Cahiers S. T. S., Science – Technologie – Societe, Paris: Editions du Centre National de la Recherche Scientifique (1986), pp. 157–70. Stephen Stich (1978) “Empiricism, Innateness and Linguistic Universals.” Philosophical Studies 33(3): 273–86. Stephen Stich (1978) “Beliefs and Sub-Doxastic States.” Philosophy of Science 45(4): 499–518. Reprinted in Jose Luis Bermudez and Fiona MacPherson (eds.), Philosophy of Psychology: Contemporary Readings, London: Routledge (2006), pp. 559–76. Stephen Stich (1978) “Forbidden Knowledge,” in Robert Bareikis (ed.), Science and The Public Interest: Proceedings of the Bloomington Indiana Forum on Recombinant DNA Research, Bloomington: The Poynter Center, pp. 206–15. Reprinted in L. M. Russow and M. Curd (eds.), Principles of Reasoning, New York: St Martin’s Press (1989), pp. 310–16. Stephen Stich (1978) “Autonomous Psychology and the Belief-Desire Thesis,” The Monist, Special Number on the Philosophy and Psychology of Cognition 61(4): 573–91.
Reprinted in: (1) William Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell (1990), pp. 345–61. (2) David Rosenthal (ed.), The Nature of Mind, Oxford: Oxford University Press (1991), pp. 590–600. (3) Alvin Goldman (ed.), Readings in Philosophy and Cognitive Science, Cambridge, MA: MIT Press (1993), pp. 699–718. (4) William Lycan (ed.), Mind and Cognition: A Reader, 2nd edn., Oxford: Blackwell (1999), pp. 259–70. (5) John Heil and Paul B. Freeland (eds.), Philosophy of Mind: A Guide and Anthology, Oxford: Oxford University Press (2003), pp. 365–81. (6) Jose Luis Bermudez and Fiona MacPherson (eds.), Philosophy of Psychology: Contemporary Readings, London: Routledge (2006), pp. 242–60. Stephen Stich (1979) “Do Animals Have Beliefs?” The Australasian Journal of Philosophy 57(1): 15–28. German translation: “Haben Tiere Überzeugungen?” in Dominik Perler and Markus Wild (eds.), Der Geist der Tiere, Frankfurt: Suhrkamp (2005), pp. 95–116. Stephen Stich (1979) “Cognition and Content in Non-Human Species,” The Behavioral and Brain Sciences 1(4): 604–5. Stephen Stich (1979) “Between Chomskian Rationalism and Popperian Empiricism,” The British Journal for the Philosophy of Science 30: 329–47. Stephen Stich (1980) “Headaches,” Philosophical Books 21(2): 65–73. Stephen Stich (1980) “What Every Speaker Cognizes,” The Behavioral and Brain Sciences 3(1): 39–40. Stephen Stich (1980) “Paying the Price for Methodological Solipsism,” The Behavioral and Brain Sciences 3(1): 97–8. Reprinted in David Rosenthal (ed.), The Nature of Mind, Oxford: Oxford University Press (1991), pp. 499–500. Stephen Stich (1980) “Computation without Representation,” The Behavioral and Brain Sciences 3(1): 152. Stephen Stich (1980) “Desiring, Believing and Doing,” The Times Literary Supplement, 27 June, no. 4031, pp. 737–8. Stephen Stich and Richard Nisbett (1980) “Justification and the Psychology of Human Reasoning,” Philosophy of Science 47(2): 188–202. Reprinted in: (1) Thomas L.
Haskell (ed.), The Authority of Experts, Bloomington: Indiana University Press (1984), pp. 226–41. (2) Catherine Z. Elgin (ed.), Nelson Goodman’s New Riddle of Induction, Vol. 2 of The Philosophy of Nelson Goodman, New York: Garland Publishing (1997), pp. 274–88. Stephen Stich (1981) “Review of The Computer Revolution in Philosophy by Aaron Sloman,” The Philosophical Review 90(2): 300–7. Stephen Stich (1981) “Is Knowledge a Social Concept?” a review of Experience and the Growth of Knowledge by D. W. Hamlyn, Contemporary Psychology 26(3): 205. Stephen Stich (1981) “Group Portrait of the Mind,” The Times Literary Supplement, 3 April, no. 4070, p. 374. Stephen Stich (1981) “Can Popperians Learn to Talk?” The British Journal for the Philosophy of Science 32(2): 157–64. Stephen Stich (1981) “The Many Rights to Health and Health Care,” in Marc D. Basson (ed.), Rights and Responsibilities in Modern Medicine, New York: Alan R. Liss, pp. 15–30.
Stephen Stich (1981) “Inferential Competence: Right You are if You Think You Are,” The Behavioral and Brain Sciences 4(3): 353–4. Stephen Stich (1981) “Dennett on Intentional Systems,” Philosophical Topics 12(1): 39–62. Reprinted in: (1) J. I. Biro and Robert W. Shahan (eds.), Mind, Brain and Function: Essays in the Philosophy of Mind, Norman, OK: University of Oklahoma Press (1982), pp. 39–62. (2) William Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell (1990), pp. 167–84. (3) William Lycan (ed.), Mind and Cognition: A Reader, Oxford: Blackwell, 2nd edn. (1999), pp. 87–100. Stephen Stich (1981) “On the Relation Between Occurrents and Contentful Mental States,” Inquiry 24(3): 353–8. Stephen Stich (1981) “Philosophers Make House Calls,” Human Systems Management 2(1): 54–5. Stephen Stich (1982) “On the Ascription of Content,” in Andrew Woodfield (ed.), Thought and Object: Essays on Intentionality, Oxford: Oxford University Press, pp. 153–206. Stephen Stich (1982) “On Genetic Engineering, the Epistemology of Risk and the Value of Life,” in J. Cohen, J. Los, H. Pfeiffer and K. P. Podewski (eds.), Proceedings of the 6th International Congress of Logic, Methodology and the Philosophy of Science, Amsterdam: North Holland Publishing, pp. 433–58. Stephen Stich (1982) “Genetic Engineering: How should Science be Controlled?” in Tom Regan and Donald VanDeVeer (eds.), Individual Rights and Public Policy, Totowa, NJ: Rowman & Littlefield, pp. 86–115. Stephen Stich (1982) “The Compleat Cognitivist,” Contemporary Psychology 27(6): 419–21. Stephen Stich (1982) “Review of Beyond the Letter by Israel Scheffler,” Linguistics and Philosophy 5: 295–7. Stephen Stich (1983) “Lessons To Be Learned from the Recombinant DNA Controversy,” in Erik Tranøy and Kare Berg (eds.), Research Ethics, New York: Alan R. Liss, pp. 75–86. Stephen Stich (1983) “Review of Philosophical Perspectives in Artificial Intelligence, ed. by Martin D. Ringle,” The Philosophical Review 92(2): 280–2.
Stephen Stich (1983) “Beyond Inference in Perception,” in Peter D. Asquith and Thomas Nickles (eds.), PSA: 1982: Proceedings of the 1982 Biennial Meeting of the Philosophy of Science Association, East Lansing, MI: Philosophy of Science Association, pp. 553–60. Stephen Stich (1983) “Beastly Brainwork,” The Times Literary Supplement, April 29, no. 4178, p. 424. Stephen Stich (1983) “Testimony on Genetic Engineering,” in Human Genetic Engineering: Hearings before the Subcommittee on Investigations and Oversight of the Committee on Science and Technology, U.S. House of Representatives. Ninety-Seventh Congress, Second Session. US Government Printing Office, Washington, DC. Abridged under the title “The Genetic Adventure” and reprinted in: (1) QQ: Report from the Center for Philosophy and Public Policy, 3(2) (1983): 9–12. (2) Vox 11(3), November 1983: 1–4. (3) Claudia Mills (ed.), Values and Public Policy, New York: Harcourt Brace Jovanovich (1992), pp. 256–61. (4) Edward Erwin, Sidney Gendin and Lowell Kleiman (eds.), Ethical Issues in Scientific Research, New York: Garland Publishing (1994), pp. 321–7. (5) Carol Wekesser (ed.), Genetic Engineering: Opposing Viewpoints, San Diego: Greenhaven Press (1996), pp. 213–19. Stephen Stich (1984) “Thinking As Per Program,” The Times Literary Supplement, February 24, no. 4221, p. 189.
Stephen Stich (1984) “Self Awareness and Straw Men,” Contemporary Psychology 29(5): 398–9. Stephen Stich (1984) “Is Behaviorism Vacuous?” The Behavioral and Brain Sciences 7(4): 647–9. Stephen Stich (1984) “Armstrong on Belief,” in Radu J. Bogdan (ed.), D. M. Armstrong, Dordrecht, Holland: D. Reidel Publishing, pp. 121–38. Stephen Stich (1984) “Life Without Meaning,” Proceedings of the Russellian Society (Sydney University), Vol. 9, pp. 37–51. Stephen Stich (1984) “Relativism, Rationality and the Limits of Intentional Description,” Pacific Philosophical Quarterly 65(3): 211–35. Stephen Stich (1985) “Theory, Meta-Theory and Weltanschauung,” in K. B. Madsen and L. P. Mos (eds.), Annals of Theoretical Psychology, Vol. 3, New York: Plenum Press, pp. 87–94. Stephen Stich (1985) “Could Man be an Irrational Animal?” Synthese 64(1): 115–35. Reprinted in: (1) Hilary Kornblith (ed.), Naturalizing Epistemology, Cambridge, MA: MIT Press (1985), pp. 249–67. (2) Hilary Kornblith (ed.), Naturalizing Epistemology, 2nd edn., Cambridge, MA: MIT Press (1994), pp. 337–57. (3) Ernest Sosa (ed.), Knowledge and Justification, Vol. II, International Research Library of Philosophy, Hampshire, UK: Ashgate – Dartmouth Publishing (1994). Stephen Stich (1985) “Review of Philosophical Psychology by Joseph Margolis,” Canadian Philosophical Reviews 5: 166–7. Stephen Stich (1985) “Sorting Out the Right Properties,” Times Literary Supplement, November 29, no. 4313, pp. 1367–8. Stephen Stich (1986) “How Thoughts Get Their Content,” Contemporary Psychology 31(4): 267–8. Stephen Stich (1986) “Are Belief Predicates Systematically Ambiguous?” in Radu Bogdan (ed.), Belief: Form, Content and Function, Oxford: Oxford University Press, pp. 119–47. Stephen Stich (1986) “Leaving Belief Behind,” Annals of Theoretical Psychology 4: 351–6. Stephen Stich (1986) “The Risks and Rewards of Studying Genes,” Hastings Center Report 16(2): 39–42.
Stephen Stich (1987) “Review of John Searle, Minds, Brains and Science,” Philosophical Review 96(1): 129–33. Stephen Stich (1987) “Eloquent But Elusive,” The Times Literary Supplement, November 27, no. 4417, p. 1315. Stephen Stich (1988) “Reflective Equilibrium, Analytic Epistemology and the Problem of Cognitive Diversity,” Synthese, 74(3): 391–413. Reprinted in: (1) Michael F. Goodman and Robert A. Snyder (eds.), Contemporary Readings in Epistemology, Englewood Cliffs, NJ: Prentice Hall (1993), pp. 350–64. (2) Ernest Sosa (ed.), Knowledge and Justification, Vol. II. International Research Library of Philosophy, Hampshire: Ashgate – Dartmouth Publishing (1994). (3) Paul K. Moser and Arnold van der Nat (eds.), Human Knowledge: Classical and Contemporary Approaches, 2nd edn., Oxford: Oxford University Press (1995), pp. 367–79. (4) Michael DePaul and William Ramsey (eds.), Rethinking Intuition, Lanham, Maryland: Rowman & Littlefield (1998), pp. 95–112. (5) Jack S. Crumley II (ed.), Readings in Epistemology, Mountain View, CA: Mayfield Publishing (1999). (6) Ernest Sosa and Jaegwon Kim (eds.), Epistemology: An Anthology, Oxford: Blackwell (2000), pp. 571–83.
Stephen Stich (1988) “From Connectionism to Eliminativism,” The Behavioral and Brain Sciences 11(1): 53–4. Stephen Stich (1988) “Review of John MacNamara, A Border Dispute: The Place of Logic in Psychology,” Applied Psycholinguistics 9(4): 311–14. Stephen Stich (1988) “Connectionism, Realism and realism,” The Behavioral and Brain Sciences 11(3): 531–2. Stephen Stich (1989) “Review of Christopher Cherniak, Minimal Rationality,” Philosophy of Science 56(1): 171–3. Shawn Lockery and Stephen Stich (1989) “Prospects for Animal Models of Mental Representation,” International Journal of Comparative Psychology 2(3): 157–73. Stephen Stich (1990) “Rationality,” in Daniel Osherson and Edward E. Smith (eds.), Thinking, vol. 3 of An Invitation to Cognitive Science, Cambridge, MA: MIT Press, pp. 173–96. William Ramsey, Stephen Stich, and Joseph Garon (1990) “Connectionism, Eliminativism and the Future of Folk Psychology,” Philosophical Perspectives 4: Action Theory and Philosophy of Mind, pp. 499–533. Reprinted in: (1) David J. Cole, James H. Fetzer and Terry L. Rankin (eds.), Philosophy and Cognitive Inquiry, Dordrecht, The Netherlands: Kluwer (1990), pp. 117–44. (2) John Greenwood (ed.), The Future of Folk Psychology, Cambridge: Cambridge University Press (1991), pp. 93–119. (3) W. Ramsey, D. E. Rumelhart and S. P. Stich (eds.), Philosophy and Connectionist Theory, Hillsdale, NJ: Lawrence Erlbaum Associates (1991), pp. 199–228. (4) Scott M. Christensen and Dale R. Turner (eds.), Folk Psychology and the Philosophy of Mind, Hillsdale, NJ: Lawrence Erlbaum Associates (1993), pp. 315–39. (5) Cynthia Macdonald and Graham Macdonald (eds.), Connectionism: Debates on Psychological Explanation, Vol. 2, Oxford: Blackwell (1995), pp. 310–38. (6) John Haugeland (ed.), Mind Design II: Philosophy, Psychology, Artificial Intelligence, Cambridge, Mass.: Bradford Books/MIT Press (1997), pp. 351–76.
(7) Jose Luis Bermudez and Fiona MacPherson (eds.), Philosophy of Psychology: Contemporary Readings, London: Routledge (2006), pp. 263–87. Chinese translation in Gao Xinmin and Chu Zhaohua (eds.), The Selected Works of Western Philosophers of Mind, Wuhan: Shangwu Publishing House (2003). William Ramsey and Stephen Stich (1990) “Connectionism and Three Levels of Nativism,” Synthese 82(2): 177–205. Reprinted in: (1) W. Ramsey, D. E. Rumelhart and S. P. Stich (eds.), Philosophy and Connectionist Theory, Hillsdale, NJ: Lawrence Erlbaum Associates (1991), pp. 287–310. (2) James H. Fetzer (ed.), Epistemology and Cognition, Dordrecht, The Netherlands: Kluwer (1991), pp. 3–31. Stephen Stich (1990) “Building Belief: Some Queries about Representation, Indication and Function,” Philosophy and Phenomenological Research 50(4): 801–6. Stephen Stich (1990) “Review of Meaning and Mental Representation by Robert Cummins,” Canadian Philosophical Reviews 10(5): 177–80. Stephen Stich (1991) “The Fragmentation of Reason – Precis of Two Chapters,” Philosophy and Phenomenological Research 51(1): 179–83. Stephen Stich (1991) “Evaluating Cognitive Strategies: A Reply to Cohen, Goldman, Harman and Lycan,” Philosophy and Phenomenological Research 51(1): 207–13.
Todd Jones, Edmond Mulaire, and Stephen Stich (1991) “Staving Off Catastrophe: A Critical Notice of Jerry A. Fodor’s Psychosemantics,” Mind and Language 6(1): 58–82. Stephen Stich (1991) “Causal Holism and Common Sense Psychology: A Reply to O’Brien,” Philosophical Psychology 4(2): 179–81. Stephen Stich (1991) “Narrow Content Meets Fat Syntax,” in Barry Loewer and Georges Rey (eds.), Meaning in Mind: Fodor and His Critics, Oxford: Blackwell, pp. 239–54. Reprinted in William Lycan (ed.), Mind and Cognition (2nd edn.), Oxford: Blackwell (1999), pp. 306–17. Stephen Stich (1991) “Do True Believers Exist? A Reply to Andy Clark,” The Aristotelian Society, Supplementary Volume 65, pp. 229–44. Stephen Stich (1992) “What is a Theory of Mental Representation?” Mind 101(402): 243–61. Reprinted in: (1) Karen Neander and Ian Ravenscroft (eds.), Prospects for Intentionality, Working Papers in Philosophy, Vol. 3, Research School of Social Science, Australian National University (1993), pp. 1–24. (2) Richard Warner and Tadeusz Szubka (eds.), The Mind–Body Problem: A Guide to the Current Debate, Oxford: Blackwell (1994), pp. 171–91. (3) Ted Warfield and Stephen Stich (eds.), Mental Representation, Oxford: Blackwell (1994), pp. 347–64. German translation in: Andreas Elepfandt and Gereon Walter (eds.), Denkmaschinen? Interdisziplinaere Perspektiven zum Thema Gehirn und Geist, Konstanz: Universitaetsverlag Konstanz (1993), pp. 75–97. Chinese translations in: (1) Foreign Philosophical Problems of Natural Sciences, Beijing: Chinese Academy of Social Science (1994), pp. 328–46. (2) Gao Xinmin and Chu Zhaohua (eds.), The Selected Works of Western Philosophers of Mind, Wuhan: Shangwu Publishing House (2003). Stephen Stich and Shaun Nichols (1992) “Folk Psychology: Simulation vs. Tacit Theory,” Mind and Language 7 (1 & 2): 29–65. Reprinted in: (1) Enrique Villanueva (ed.), Science and Knowledge (Philosophical Issues, 3), Atascadero, CA: Ridgeview Publishing Company (1993), pp. 225–70.
(2) Martin Davies and Tony Stone (eds.), Folk Psychology, Oxford: Blackwell (1995), pp. 123–58. Stephen Stich (1993) “Moral Philosophy and Mental Representation,” in M. Hechter, L. Nadel and R. E. Michod (eds.), The Origin of Values, New York: Aldine de Gruyter, pp. 215–28. Stephen Stich (1993) “Consciousness Revived: John Searle and the Critique of Cognitive Science,” Times Literary Supplement, March 5, no. 4692, pp. 5–6. Stephen Stich (1993) “Naturalizing Epistemology: Quine, Simon and the Prospects for Pragmatism,” in C. Hookway and D. Peterson (eds.), Philosophy and Cognitive Science, Royal Institute of Philosophy, Supplement no. 34, Cambridge: Cambridge University Press, pp. 1–17. Stephen Stich (1993) “Review of Judgement and Justification by William Lycan,” Nous 27(3): 380–3. Stephen Stich (1993) “Concepts, Meaning, Reference and Ontology,” in Karen Neander and Ian Ravenscroft (eds.), Prospects for Intentionality, Working Papers in Philosophy, Vol. 3, Research School of Social Science, Australian National University, pp. 61–77. Stephen Stich (1993) “Puritanical Naturalism,” in Karen Neander and Ian Ravenscroft (eds.), Prospects for Intentionality, Working Papers in Philosophy, Vol. 3, Research School of Social Science, Australian National University, pp. 141–53.
Stephen Stich and Ian Ravenscroft (1994) “What is Folk Psychology?” Cognition 50(1–3): 447–68. Reprinted in Jacques Mehler and Susana Franck (eds.), Cognition on Cognition, Cambridge, MA: Bradford Books/MIT Press, pp. 449–70. Stephen Stich (1994) “Philosophy and Psychology,” in Samuel Guttenplan (ed.), A Companion to the Philosophy of Mind, Oxford: Blackwell, pp. 500–7. Stephen Stich and Stephen Laurence (1994) “Intentionality and Naturalism,” in Peter A. French, Theodore E. Uehling, Jr, and Howard Wettstein (eds.), Midwest Studies in Philosophy, Vol. 19, Naturalism, South Bend, IN: University of Notre Dame Press, pp. 159–82. Reprinted in: (1) Prospects for Intentionality, Working Papers in Philosophy, Vol. 3, ed. by Karen Neander and Ian Ravenscroft, Research School of Social Science, Australian National University (1993), pp. 81–110. (2) Stephen Stich, Deconstructing the Mind, New York: Oxford University Press (1996), pp. 168–91. Stephen Stich (1994) “The Virtues, Challenges and Implications of Connectionism,” British Journal for the Philosophy of Science 45: 1047–58. Stephen Stich and Ted Warfield (1995) “Do Connectionist Minds Have Beliefs? – A Reply to Clark and Smolensky,” in Cynthia Macdonald and Graham Macdonald (eds.), Connectionism: Debates on Psychological Explanation, Vol. 2, Oxford: Blackwell, pp. 395–411. Stephen Stich and Shaun Nichols (1995) “Second Thoughts on Simulation,” in Martin Davies and Tony Stone (eds.), Mental Simulation: Philosophical and Psychological Essays, Oxford: Blackwell, pp. 87–108. Shaun Nichols, Stephen Stich, and Alan Leslie (1995) “Choice Effects and the Ineffectiveness of Simulation,” Mind and Language 10(4): 437–45. Stephen Stich (1996) “The Dispute Over Innate Ideas,” in Marcelo Dascal, Dietfried Gerhardus, Kuno Lorenz, and Georg Meggle (eds.), Sprachphilosophie, Ein Internationales Handbuch Zeitgenossischer Forschung, Vol. 2, Berlin: Walter de Gruyter, pp. 1041–50.
Shaun Nichols, Stephen Stich, Alan Leslie, and David Klein (1996) “The Varieties of Off-Line Simulation,” in Peter Carruthers and Peter Smith (eds.), Theories of Theories of Mind, Cambridge: Cambridge University Press, pp. 39–74. Stephen Stich (1997) “Decostruire la Mente: La Critica al Materialismo,” in Cervelli che Parlano: Il Dibattito su Mente, Coscienza e Intelligenza Artificiale, Introduzione e cura di Eddy Carli, Milano: Bruno Mondadori, pp. 197–212. Stephen Stich and Shaun Nichols (1997) “Cognitive Penetrability, Rationality and Restricted Simulation,” Mind and Language 12 (3/4): 297–326. Michael Bishop and Stephen Stich (1998) “The Flight to Reference, or How Not to Make Progress in the Philosophy of Science,” Philosophy of Science 65(1): 33–49. Chinese translation in: Journal of Dialectics of Nature 19(112) (1997) (no. 6): 1–8. Stephen Stich and Shaun Nichols (1998) “Theory Theory to the Max,” Mind and Language 13(3): 421–49. Shaun Nichols and Stephen Stich (1998) “Rethinking Co-Cognition,” Mind and Language 13(4): 499–512. Richard Samuels, Stephen Stich, and Patrice D. Tremoulet (1999) “Rethinking Rationality: From Bleak Implications to Darwinian Modules,” in E. LePore and Z. Pylyshyn (eds.), What is Cognitive Science? Oxford: Blackwell, pp. 74–120. Reprinted in: K. Korta, E. Sosa and X. Arrazola (eds.), Cognition, Agency, and Rationality, Proceedings of the Fifth International Colloquium on Cognitive Science (ICCS-97), Dordrecht, The Netherlands: Kluwer (1999), pp. 21–62.
Portuguese translation: “Repensando a Racionalidade: de Implicações Pessimistas a Módulos Darwinianos,” in Intelectu, no. 9, Outubro de 2003, available online at: http://www.intelectu.com/ Dominic Murphy and Stephen Stich (1999) “Griffiths, Elimination and Psychopathology,” Metascience 8(1), March: 13–25. Stephen Stich (1999) “Is Man a Rational Animal?” in Daniel Kolak (ed.), Questioning Matters: An Introduction to Philosophical Inquiry, Mountain View, CA: Mayfield, pp. 221–36. French translation: “L’homme est-il un animal rationnel?” in D. Fisette and P. Poirier (eds.), Philosophie de l’Esprit: Une anthologie, Paris & Québec: Vrin (2001). Shaun Nichols and Stephen Stich (1999) “Pretense in Prediction: Simulation and Understanding Minds,” in Denis Fisette (ed.), Consciousness and Intentionality: Models and Modalities of Attribution, a volume in The Western Ontario Series in Philosophy of Science, Dordrecht, The Netherlands: Kluwer, pp. 291–310. Stephen Stich (1999) “Eliminativism,” in Robert A. Wilson and Frank C. Keil (eds.), The MIT Encyclopedia of Cognitive Science, Cambridge, MA: MIT Press, pp. 265–7. Stephen Stich (1999) “Cognitive Pluralism,” Routledge Encyclopedia of Philosophy, online. Stephen Stich (1999) “Epistemic Relativism,” Routledge Encyclopedia of Philosophy, online. Stephen Stich and Georges Rey (1999) “Folk Psychology,” Routledge Encyclopedia of Philosophy, online. Shaun Nichols and Stephen Stich (2000) “A Cognitive Theory of Pretense,” Cognition 74(2): 115–47. Ron Mallon and Stephen Stich (2000) “The Odd Couple: The Compatibility of Social Construction and Evolutionary Psychology,” Philosophy of Science 67(1): 133–54. Dominic Murphy and Stephen Stich (2000) “Darwin in the Madhouse: Evolutionary Psychology and the Classification of Mental Disorders,” in Peter Carruthers and Andrew Chamberlain (eds.), Evolution and the Human Mind: Modularity, Language and Meta-Cognition, Cambridge: Cambridge University Press, pp. 62–92.
Italian translation: “Darwin in manicomio: psicologia evoluzionistica e classificazione dei disturbi mentali,” in Mauro Adenzato and Cristina Meini (eds.), Psicologia Evoluzionistica, Torino: Bollati Boringhieri (2006), pp. 195–222. Stephen Stich (2001) “Plato’s Method Meets Cognitive Science,” Free Inquiry 21(2): 36–8. Stephen Stich and Jonathan Weinberg (2001) “Jackson’s Empirical Assumptions,” Philosophy and Phenomenological Research 62(3): 637–43. Jonathan Weinberg, Shaun Nichols, and Stephen Stich (2001) “Normativity and Epistemic Intuitions,” Philosophical Topics 29 (1 & 2): 429–60. Reprinted in: (1) Riccardo Viale, Daniel Andler and Lawrence Hirschfeld (eds.), Biological and Cultural Bases of Human Inference, Mahwah, NJ: Lawrence Erlbaum (2006), pp. 191–222. (2) Tamar Szabo Gendler, Susanna Siegel and Steven M. Cahn (eds.), The Elements of Philosophy: Readings from Past and Present, Oxford: Oxford University Press (2007). (3) Ernest Sosa, Jaegwon Kim, Jeremy Fantl, and Matthew McGrath (eds.), Epistemology: An Anthology (2nd edn.), Oxford: Blackwell (2008). (4) Joshua Knobe and Shaun Nichols (eds.), Experimental Philosophy, Oxford: Oxford University Press (2008). Richard Samuels, Stephen Stich, and Michael Bishop (2002) “Ending the Rationality Wars: How to Make Disputes About Human Rationality Disappear,” in Renée Elio (ed.), Common Sense, Reasoning, and Rationality, Oxford: Oxford University Press, pp. 236–68. Luc Faucher, Ron Mallon, Shaun Nichols, Daniel Nazer, Aaron Ruby, Stephen Stich, and Jonathan Weinberg (2002) “The Baby in the Labcoat: Why Child Development is An
Inadequate Model for Understanding the Development of Science,” in P. Carruthers, S. Stich, and M. Siegal (eds.), The Cognitive Basis of Science, Cambridge: Cambridge University Press, pp. 335–62. Richard Samuels and Stephen Stich (2002) “Rationality,” Encyclopedia of Cognitive Science, Vol. 3, London: Macmillan Publishers, pp. 830–7. John Doris and Stephen Stich (2002) “Ethics,” Encyclopedia of Cognitive Science, Vol. 2, London: Macmillan Publishers, pp. 29–35. Stephen Stich (2002) “Review of The Philosophy of Psychology by George Botterill and Peter Carruthers,” Philosophy of Science 69(2): 392–4. Richard Samuels and Stephen Stich (2002) “Irrationality: Philosophical Aspects,” in N. Smelser and P. Baltes (eds.), International Encyclopedia of the Social and Behavioral Sciences, Oxford: Pergamon Press. Stephen Stich and Shaun Nichols (2003) “Folk Psychology,” in Stephen Stich and Ted A. Warfield (eds.), The Blackwell Guide to Philosophy of Mind, Oxford: Blackwell, pp. 235–55. Shaun Nichols and Stephen Stich (2003) “How to Read Your Own Mind: A Cognitive Theory of Self-Consciousness,” in Q. Smith and A. Jokic (eds.), Consciousness: New Philosophical Perspectives, Oxford: Oxford University Press, pp. 157–200. Shaun Nichols, Stephen Stich, and Jonathan Weinberg (2003) “Meta-Skepticism: Meditations on Ethno-Epistemology,” in S. Luper (ed.), The Skeptics, Aldershot, UK: Ashgate Publishing, pp. 227–47. Richard Samuels and Stephen Stich (2004) “Rationality and Psychology,” in Alfred Mele and Piers Rawling (eds.), The Oxford Handbook of Rationality, Oxford Reference Library, Oxford: Oxford University Press, pp. 279–300. Edouard Machery, Ron Mallon, Shaun Nichols, and Stephen Stich (2004) “Semantics, Cross-Cultural Style,” Cognition 92(3): B1–B12. Stephen Stich (2004) “Philosophie et psychologie cognitive,” in Elisabeth Pacherie and Joëlle Proust (eds.), La Philosophie Cognitive, Paris: Éditions Ophrys, pp. 55–70.
Richard Samuels, Stephen Stich, and Luc Faucher (2004) “Reasoning and Rationality,” in I. Niiniluoto, M. Sintonen, and J. Wolenski (eds.), Handbook of Epistemology, Dordrecht: Kluwer, pp. 131–79. Chandra Sripada and Stephen Stich (2004) “Evolution, Culture and the Irrationality of the Emotions,” in D. Evans and P. Cruse (eds.), Emotion, Evolution and Rationality, Oxford: Oxford University Press, pp. 133–58. Stephen Stich (2004) “Some Questions From the Not-So-Hostile World,” Author Meets Critic Symposium on Kim Sterelny, Thought in a Hostile World: The Evolution of Human Cognition, Australasian Journal of Philosophy 82(3): 491–8. John Doris and Stephen Stich (2004) “Ethics and Psychology,” Routledge Encyclopedia of Philosophy Online. Shaun Nichols and Stephen Stich (2004) “Reading One’s Own Mind: Self-Awareness and Developmental Psychology,” in Maite Ezcurdia, Robert Stainton, and Christopher Viger (eds.), New Essays in the Philosophy of Language and Mind, Canadian Journal of Philosophy Supplementary Volume 30, Calgary: University of Calgary Press, pp. 297–339. Abridged and translated as “Leggere la propria mente,” Sistemi Intelligenti 13(1) (2001): 143–70. John Doris and Stephen Stich (2005) “As a Matter of Fact: Empirical Perspectives on Ethics,” in F. Jackson and M. Smith (eds.), The Oxford Handbook of Contemporary Philosophy, Oxford: Oxford University Press, pp. 114–52. Edouard Machery, Daniel Kelly, and Stephen Stich (2005) “Moral Realism and Cross-Cultural Normative Diversity,” Behavioral and Brain Sciences 28(6): 830.
Stephen Stich (2006) “Is Moral Psychology an Elegant Machine or a Kludge?” Journal of Cognition and Culture 6 (1 & 2): 223–31. John Doris and Stephen Stich (2006) “Moral Psychology,” in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, Summer 2006 Edition, http://plato.stanford.edu/archives/sum2006/entries/moral-psych-emp/ Chandra Sripada and Stephen Stich (2006) “A Framework for the Psychology of Norms,” in P. Carruthers, S. Laurence and S. Stich (eds.), The Innate Mind, Vol. 2: Culture and Cognition, New York: Oxford University Press, pp. 280–301. Stephen Stich (2006) “Review of Epistemology and the Psychology of Human Judgment by Michael A. Bishop and J. D. Trout,” Mind 115(458): 390–3. Daniel Kelly, Edouard Machery, Ron Mallon, Kelby Mason, and Stephen Stich (2006) “The Role of Psychology in the Study of Culture,” Behavioral and Brain Sciences 29(4): 355. Stephen Stich (2007) “Evolution, Altruism and Cognitive Architecture: A Critique of Sober and Wilson’s Argument for Psychological Altruism,” Biology and Philosophy 22(2): 267–81. Daniel Kelly, Stephen Stich, Kevin J. Haley, Serena Eng, and Daniel M. T. Fessler (2007) “Harm, Affect and the Moral/Conventional Distinction,” Mind and Language 22(2): 117–31. Daniel Kelly and Stephen Stich (2007) “Two Theories About the Cognitive Architecture Underlying Morality,” in P. Carruthers, S. Laurence and S. Stich (eds.), Innateness and the Structure of the Mind, Vol. 3: Foundations and the Future, New York: Oxford University Press, pp. 348–66. Catherine Driscoll and Stephen Stich (2008) “Vayda Blues: Explanation in Darwinian Ecological Anthropology,” in Bradley B. Walters, Bonnie J. McCay, Paige West, and Susan Lees (eds.), Against the Grain: The Vayda Tradition in Human Ecology and Ecological Anthropology, Walnut Creek, CA: AltaMira Press.
Stephen Stich (forthcoming) “Some Questions About the Evolution of Morality,” a commentary on The Evolution of Morality by Richard Joyce, Philosophy and Phenomenological Research. Kelby Mason, Chandra Sripada and Stephen Stich (2008) “The Philosophy of Psychology,” in Dermot Moran (ed.), Routledge Companion to Twentieth-Century Philosophy, London: Routledge. Frank Jackson, Kelby Mason, and Stephen Stich (forthcoming) “Folk Psychology and Tacit Theories: A Correspondence between Frank Jackson, and Stephen Stich and Kelby Mason,” to appear in David Braddon-Mitchell and Robert Nola (eds.), Conceptual Analysis and Philosophical Naturalism, Cambridge, MA: MIT Press. Stephen Stich, John Doris and Erica Roedder (forthcoming) “Egoism vs. Altruism,” to appear in The Oxford Handbook of Moral Psychology, ed. by the Moral Psychology Research Group (John M. Doris, Gilbert Harman, Shaun Nichols, Jesse Prinz, Walter Sinnott-Armstrong, and Stephen Stich), New York: Oxford University Press. Jennifer Nado, Daniel Kelly and Stephen Stich (forthcoming) “Moral Judgment,” to appear in John Symons and Paco Calvo (eds.), Routledge Companion to the Philosophy of Psychology, London: Routledge. Ron Mallon, Edouard Machery, Shaun Nichols, and Stephen Stich (forthcoming) “Against Arguments from Reference,” Philosophy and Phenomenological Research.
Index
a priori 1–2, 9, 49, 52, 56, 58, 59n, 103, 133, 134n6, 177, 193, 195 Ameliorative Psychology 125 analytic epistemology 7, 101–3, 105–6, 108, 112–13, 117–21, 132–5, 133n, 228–30, 237 analytic epistemology argument (AEA) 102–3, 105 Ariew, A. 79–80 autonomy principle 19, 25–7 Barrett, Matthew 40 basic representationalist model 32–5, 38–41, 227 Bateson, Patrick 74, 78, 80–3, 88–90 Bateson, William 42 behaviorism 30, 40–1, 157 belief box 153–4 beliefs 2, 4–6, 8–10, 14–17, 23–7, 27n, 31, 46–7, 55, 58, 63, 68–71, 75, 80, 85–7, 97n, 101, 104–5, 108–9, 111n, 116, 118, 122–3, 125–31, 134n, 139, 153–4, 156–9, 164, 165n, 191, 200–2, 204–6, 209, 211, 217–19, 226–7, 231, 235–6, 238–9, 241–4 Benzer, Seymour 91 Berzelius, J. J. 95 Block, Ned 78 Blumberg, Mark 74 Boullay 95 Brandom, Robert 53 causal-historical theory of reference 4, 47, 54, 63, 191
Chalmers, David 18n, 85 Chomsky, Noam 24, 30, 46, 74, 88, 167, 192–4 Churchland, Patricia 29 Churchland, Paul 28n19, 30, 64, 87, 97n cognitive diversity 114–16, 119, 121, 229 cognitive psychology 4, 30, 50, 52, 78, 195 computation 40 computational cognitive science 23–5 computational mechanism 22–3 computational psychology 2, 19, 23–5, 203–5, 227 computational theories 20, 203–4, 206 computationalism 22, 30, 40, 203, 227 concepts 6–8, 11–12, 27n, 31–2, 38, 66, 76–7, 82, 85, 90, 92, 96–7, 101, 108–9, 164n, 207–8, 216–17, 233–5, 246n acquisition of 19, 88, 157–8, 164, 165n, 216 intentional 11, 156–8, 164n, 215–16 conceptual analysis 65, 102, 221–5, 228–30, 238, 240–1, 247n constructivism 56–8 content 1–3, 5–6, 8, 11–12, 14–18, 20–7, 28n narrow 20, 27n, 28n representational 14, 69, 200, 204–6, 227 wide 17–18, 20, 23, 27, 27n Crick, Francis 91 Davies, Martin 21 deflationism 53–4, 57 Delbruck, Max 91 Dennett, Daniel 27n, 28n19, 30, 36, 63
Descartes, René 75, 116, 236 description theory of reference 4, 46–7, 54–5, 63–4, 67–8, 191 desires 2, 9–11, 14–17, 23–6, 32, 63, 67, 86–7, 139, 153–63, 191, 200, 204–5, 208–11, 214–17, 227, 245n desire box 153 developmental psychology 19, 25, 149, 205–6, 227 Doris, John 122 Dretske, Fred 30–1, 39, 52, 191 Dumas, Jean-Baptiste 93, 95 eliminativism 1–4, 9, 14–16, 23, 27n, 43n, 46–8, 62–4, 66–8, 71, 84–5, 87–8, 91, 93, 190–1, 193, 200–1, 240 1983 anti-content argument 16–17 1991 anti-content argument 17–19 linguistic 84 ontological 85 strong 15, 27n weak 15–17, 23, 24 elimiNativism 74–5, 83, 85–6, 88–9 epistemic diversity 8, 113, 115–17 epistemic judgments 114, 116, 122 experimental philosophy 1, 119–22, 132, 241–2 Fodor, Jerry 24, 28n18, 30–1, 36, 74, 77, 79, 82, 157, 164, 208, 222, 233, 245n folk psychology 2–4, 9, 14, 18, 23–7, 31, 46, 59, 63, 68, 86–7, 134n, 137, 155–7, 191–3, 201–3, 210 Fortes, Meyer 181 functional decomposition 24, 184 Gettier cases 7, 105, 116, 123, 233, 235, 240–1 Godfrey-Smith, Peter 79, 85, 163 Goldschmidt, Richard 91–2 Goody, Jack 173 Griffiths, Paul 39, 74–6, 88–91, 97n, 207 Harris, Marvin 169, 177 homuncular functionalism 36–8 Horwich, Paul 53–4, 199 Hume, David 84, 177 Hussien, Fawzia 173
iceberg concepts 157–8, 217 innateness 1, 6, 74–89, 96, 155, 157, 167–8, 181, 207, 216, 226, 246n concept innateness 156, 216 information (or knowledge) innateness 156, 216–17 i-properties or innateness properties 89 instrumentalism 34–6, 39, 41, 90 intentional states 14, 17, 25–6, 152, 154–5, 158, 164n, 216–17, 240 intuition 5, 8–9, 12, 31, 48–9, 64–6, 71–2, 73n, 77–8, 103–6, 108, 111, 134n, 192, 194, 197–200, 206, 221–4, 228–42, 247n, 248n epistemic intuitions 7–8, 106, 114, 117–18, 121–5, 133, 232–4, 238–9, 242, 248n folk intuitions 5, 49–50, 197–8 linguistic intuitions 49 physical intuitions 124–5, 134n intuition-driven romanticism 7, 106, 117–18, 133n Johannsen, Wilhelm 90 Kelly, Daniel 122, 246n Kim, Jaegwon 118, 124 Kitcher, Philip 78–9, 199 Klein, Ursula 92–5 Knobe, Joshua 121 knowledge 3, 7–9, 31, 56–7, 59n, 60n, 69–70, 76–7, 103–5, 108–11, 112n, 114, 116–17, 121–3, 147, 161, 194–5, 230, 232–9, 241, 247n Kripke, Saul 46, 52, 67–8, 72n, 73n, 191–2, 194, 197–8 Kuhn, Thomas 55 Lavoisier, Antoine 94 Leavitt, Gregory 174, 181–2 Leslie, Alan 36, 156, 159, 219–20, 246n linguistic analysis 101–4, 228–30 Locke, John 73n, 75, 166, 200–1 Lorenz, Konrad 78 Lycan, William 4, 30, 36, 46–7, 52, 63–4, 191
Machery, Edouard 1, 5, 121–2, 126, 198, 233, 246n, 247n Mackie, J. L. 84 Macquer, Pierre 94 Malinowski, Bronislaw 178, 182 Mameli, Matteo 76, 78, 80–2, 89–90 manifest image of the mind 2, 30 Marr, David 20–2, 27n mental content 1, 41 linguistic picture 40–1 pragmatic picture 40–1 mental states 1, 9, 16, 26, 30, 53, 69, 85–6, 112n, 139, 141, 144–5, 148, 156–8, 191, 200–2, 205–6, 210, 217, 219, 245n Milgram, Stanley 171, 173 Milgram effects 160, 162–3, 171 Millikan, Ruth 32–3, 52, 73n, 191 mindreading 1, 9–11, 137–40, 143–8, 152–63, 207–10, 213–20, 245n simulationism 137, 140, 147, 152, 208 simulationist/theory-theory hybrid 10–11, 149, 152–3, 210, 217 theory-theory 9–10, 41, 137, 147, 152–3, 158, 210 model 3, 11, 16, 20, 22, 31–42, 43n, 152–3, 158, 184, 186, 204–5, 217–18, 226–8 model-based theorizing 32–3, 36–7, 227 model-based view of representation 36, 41 model-theoretic argument 50 modules 12, 155–6, 159–60, 184, 218–19, 245n modular systems 27n moral psychology 1, 11, 13, 122, 220, 225 moral realism 122 moral universals 168–9 minimal view 168, 175, 182–3 modest view 168, 176–7, 179, 182 immodest view (see also moral nativism) 168, 175–7 Morality Acquisition Device (MAD) 176, 183 Moralization Mechanism (MM) 176, 183–4 Morgan, Thomas 90 Muller, Herman 90–1 narrow causal role 16 nativism 10–12, 152–3, 157, 159, 164n, 167, 216 anti-nativism 186, 216–17 modular nativism 155, 218–20
moral nativism 167–8, 176–7, 179, 182–4, 220–1, 226 natural kind 12, 24, 54, 63, 73n, 74, 221–5, 246n naturalism 1, 56, 113, 125, 132, 167 naturalization project 17 Nichols, Shaun 7, 10–11, 31, 105, 112n, 113, 117–18, 121–2, 126, 133n, 137–8, 149, 152–60, 162–3, 185, 207–11, 215–18, 234, 236–7, 241, 245n, 247n, 248n Nietzsche, Friedrich 185–6 Nisbett, Richard 114–15, 117, 128, 247n, 248n normativity 53, 110 norms 11–12, 168–71, 173, 175–80, 183–6, 222–35, 246n epistemic 7, 117 moral 12, 167–9, 171–2, 175–6, 177–84, 187, 220–5 philosophical method 1–2, 56, 123, 203 planner, the 153–5 Plato 76, 103, 109, 167, 187, 206, 230, 234 possible world box 154–6, 158–9 poverty-of-stimulus arguments 155–7, 159, 163, 216 pragmatic theory of cognitive evaluation 129–30 pragmatism 1, 6–7, 9, 97, 119–20, 132–3, 207, 237, 242–3 primary moral data (PMD) 176, 182 propositional attitudes 4, 6, 10, 23–6, 28n, 62–3, 66–7, 86–7, 145, 148, 200, 205, 209, 245n Putnam, Hilary 4–5, 19, 46–7, 50, 52, 57n, 63, 73n, 191–2, 194, 197 qualified autonomy principle (QAP) 18–21, 23, 25 Quartz, Steve 78–9, 81, 97n Quine, W. V. 56, 113–14, 125, 132–3, 134n, 167, 207 R (relativity) properties 16–17 realism 33, 39, 42, 47, 59n intentional realism 46–7, 58–9 scientific realism 47 reference 4–6, 15, 17–18, 20, 26, 47–58, 59n, 60n, 62, 64–72, 72n, 73n, 190–202, 227 folk semantic account 48–50, 58, 193–5 proto-science account 4–5, 48, 50, 52, 58, 65–6, 71, 195–7, 200
relativism 9, 101, 120, 186–7 reliabilism 9, 125, 128, 243 representation 3, 5–6, 21–2, 31–7, 40–2, 49, 52, 65, 69, 74, 195–6, 200, 202–5, 227 convention-based 69, 201 language-based 42 model-based view 36, 41 representational content 14–18, 21, 23, 69, 200, 204–6, 227 representational states 2–3 representational theory of mind 15, 28n Rosaldo, Renato 169 Rouelle, Guillaume-François 94 Scheidel, Walter 173, 175 Scholl, Brian 156, 159, 219 scientific image of the mind 30 Searle, John 22, 47 Sellars, Wilfrid 30, 41 semantic ascent 47–8, 55–6, 58–9, 60n, 66, 104, 190, 199 semantic description 31–2, 40 sentimentalism 179, 222–3 SES (Socio-Economic Status) 109, 116, 122, 133n, 232, 234 Shannon, C. E. 30, 35, 39 Shaw, Brent 174 similarity 16, 42, 86, 139–41, 145, 148, 202, 211–12 ideological 15, 18, 202, 204–5 reference 15 skepticism 6, 57, 97n, 109, 123–4, 216, 231, 239 social psychology 2–3, 24, 205–6, 227 attribution theory 25, 205 Sripada, Chandra 11–12, 145–7, 167, 179, 184–6, 207, 220, 224, 246n Stahl, Ernst 93
Stalnaker, Robert 41–2 stasis requirement, the 118, 124, 134n states of affairs 69–70, 73n, 201 Stotz, Karola 90–1 strategic reliabilism 129, 134n, 237, 242–4 sub-doxastic states 24 teleological theory of reference 65, 191 Thornhill, Nancy 175, 181 Trout, J. D. 118–19, 122, 125, 129, 134n, 237, 243–4 truth-referentialism 53–4 Turnbull, Colin 170 twin earth 5, 19, 25, 64, 73n, 192, 197–9 Universal Morality (UM) 176, 182–3 Updater, the 153, 155–6 value 78, 101–2, 122, 125–7, 134n, 169, 175, 177, 187, 234–5, 243–4, 247n epistemic 111 instrumental 126, 128, 132–3, 242 intrinsic 9, 117–21, 125–30, 132n, 134n, 243 Venel, Gabriel 93 Watson, James 91 Weinberg, Jonathan 1, 7, 105, 112n, 113, 117–19, 122, 133n, 234, 236–7, 241, 245n, 247n, 248n Westermarck, Edward 181 Wimsatt, William 79–80 Wittgenstein, Ludwig 34, 53 WNS (Weinberg, Nichols, Stich) 7–8, 105–8, 113, 116–22, 133 Woodward, James 85 word–world relation 4, 6, 48, 50–4, 64, 68, 70–1, 73n, 195–8, 200–1