The Laws of Thought
Hilary Kornblith
Philosophy and Phenomenological Research, Vol. 52, No. 4 (Dec., 1992), pp. 895-911.
Stable URL: http://links.jstor.org/sici?sici=0031-8205%28199212%2952%3A4%3C895%3ATLOT%3E2.0.CO%3B2-B
Published by the International Phenomenological Society.
Philosophy and Phenomenological Research
Vol. LII, No. 4, December 1992

The Laws of Thought[1]

HILARY KORNBLITH
University of Vermont
In the last two decades, a great deal of interesting empirical work has been done on the nature of human inference. Work by Tversky and Kahneman,[2] Nisbett and Ross,[3] and many others[4] suggests a number of important conclusions about patterns of inference which are, at a minimum, extremely widespread. It is widely agreed that this work has implications for any attempt to assess the powers of human reason, but there is little agreement on what these implications are. Some psychologists have claimed that their experiments have "bleak implications for human rationality."[5] Others have claimed that the very same experiments suggest that the principles governing our reasoning are normatively correct as they stand.[6] This work is obviously in need of careful analysis.

[1] I have profited from reading earlier versions of this paper to audiences at Middlebury College, Rice University and the conference on Human Reasoning at the University of Cincinnati. I have also received helpful comments on distant ancestors of this paper from Bill Lycan, John Heil, Alan Nelson, Michael Devitt, Al Mele, J. Christopher Maloney and Mark Heller. A more recent draft of this paper has profited from the criticism of David Christensen and William E. Mann. Sympathetic and constructive comments from two anonymous referees were extremely useful as well. Early work on this topic was supported by a generous grant from the National Endowment for the Humanities.
[2] Many of the classic papers by Tversky and Kahneman, as well as a good deal of supporting literature by others, is collected in Daniel Kahneman, Paul Slovic and Amos Tversky, eds., Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982.
[3] Human Inference: Strategies and Shortcomings of Social Judgment, Prentice-Hall, 1980.
[4] See, e.g., John Holland, Keith Holyoak, Richard Nisbett and Paul Thagard, Induction: Processes of Inference, Learning, and Discovery, MIT Press, 1986; Philip Johnson-Laird, Mental Models, Harvard University Press, 1983; Peter Wason and Philip Johnson-Laird, Psychology of Reasoning: Structure and Content, Harvard University Press, 1972.
[5] Richard Nisbett and Eugene Borgida, "Attribution and the Psychology of Prediction," Journal of Personality and Social Psychology 32 (1975), quoted in L. J. Cohen, "Can Human Irrationality Be Experimentally Demonstrated?" The Behavioral and Brain Sciences 4 (1981): 317.
[6] See, e.g., L. J. Cohen, op. cit.; Daniel Dennett, Brainstorms, Bradford Books/MIT Press, 1978, and The Intentional Stance, Bradford Books/MIT Press, 1987; Mary Henle, "Foreword," in R. Revlin and R. E. Mayer, eds., Human Reasoning, Winston, 1978, xiii-xviii; John Pollock, Contemporary Theories of Knowledge, Rowman and Littlefield, 1986, chapter 5; Elliot Sober, "Psychologism," Journal for the Theory of Social Behavior
There are two reasons to think that now is a particularly good time to attempt such analysis. First, although empirical work in this area is by no means complete, a great deal of interesting work has now been done and there is a common thread which clearly runs through all of it: if taken at face value, a very wide range of inferences which are all but universal are normatively defective. It is not clear whether this work should be taken at face value, but it is clear that this is the natural prima facie interpretation of the data. A big picture has thus already begun to emerge from the data.

Second, the existing data present both significant interpretive problems and interesting philosophical challenges. I believe that there are important misunderstandings of these data which are currently widespread and very much in need of rectification. There are important lessons to be learned from this body of information, but a number of issues which have been blurred in the literature need to be carefully separated first. I undertake this task here.

This literature straddles the very hazy borderline between epistemology and empirical psychology. A few preliminary remarks about the philosophical relevance of this literature are thus not out of place. The issues which are addressed in this literature are clustered around three questions:

(1) How ought we to arrive at our beliefs?
(2) How do we arrive at our beliefs?
(3) Are the processes by which we do arrive at our beliefs the ones by which we ought to arrive at our beliefs?
One traditional view in epistemology holds that the answer to question (1) must be reached entirely independently of an answer to question (2). I have argued elsewhere[7] that definitive of the naturalistic approach to epistemology is the view that questions (1) and (2) cannot be answered independently of one another. One of the interesting features of the literature on inference, to my mind, is the way in which it illustrates the interanimation of these two questions. Our conception of how we ought to reason does not float free of our understanding of how we actually do reason. In recent years, our conception of proper reasoning has come under pressure from a number of different sources: the failure of a priorist and internalist accounts of justification in epistemology;[8] a better understanding, via the theory of
8 (1978): 165-91, and "Panglossian Functionalism and the Philosophy of Mind," Synthese 64 (1985): 165-93.

[7] "What is Naturalistic Epistemology?" in Hilary Kornblith, ed., Naturalizing Epistemology, Bradford Books/MIT Press, 1985.
[8] See especially Alvin Goldman, "The Internalist Conception of Justification," in Peter French, Theodore Uehling and Howard Wettstein, eds., Midwest Studies in Philosophy V (1980): 27-51; Hilary Kornblith, "How Internal Can You Get?" Synthese 74 (1988): 313-27, and "Introspection and Misdirection," Australasian Journal of Philosophy 67 (1989): 410-22.
computational complexity, of the structure of the epistemic problem facing all of us;[9] and a better understanding, via the empirical literature on inference, of how we have solved that problem, or rather, how that problem has been solved for us.[10] My examination of this literature and its implications will thus, indirectly, serve as a defense of a certain naturalistic approach to epistemology. On this approach, the normative dimension of epistemological inquiry is not eliminated, but is instead informed by a wide range of empirical considerations.[11]

The Data

It will be best to begin with a brief description of the data. There is a very widespread tendency to draw inferences about a population from extremely small samples, even in the absence of any special reason to believe the sample to be representative. This tendency is found in untrained subjects, in subjects who have taken a course in statistics, and in members of the Mathematical Psychology Group of the American Psychological Association.[12] Tversky and Kahneman comment,

The law of large numbers guarantees that very large samples will indeed be highly representative of the population from which they are drawn... People's intuitions about random sampling appear to satisfy the law of small numbers, which asserts that the law of large numbers applies to small numbers as well.[13]
The law of small numbers is just one illustration of what Tversky and Kahneman call the representativeness heuristic: subjects tend to assume that
[9] Christopher Cherniak, Minimal Rationality, Bradford Books/MIT Press, 1986, especially chapter 4.
[10] This is an important theme in Cherniak's op. cit. I am very much indebted to Cherniak here and throughout this paper.
[11] There is one area of the literature bearing on these questions which, I should say at the outset, I will not be discussing: arguments based on the principle of charity. Any attempt to argue for the correctness of our inference by way of the principle of charity requires a very strong version of that principle. I have nothing to add to the literature on this subject which, to my mind, has clearly shown the untenability of any such version of the principle. See, e.g., Richard Grandy, "Reference, Meaning and Belief," Journal of Philosophy 70 (1973): 439-52; Christopher Cherniak, op. cit.; Michael Devitt and Kim Sterelny, Language and Reality: An Introduction to the Philosophy of Language, Bradford Books/MIT Press, 1987; David Papineau, Reality and Representation, Blackwell, 1987; Paul Thagard and Richard Nisbett, "Rationality and Charity," Philosophy of Science 50 (1983): 250-67.
[12] For a summary of this literature, see Nisbett and Ross, op. cit., 77-89. The original papers by Tversky and Kahneman, "Judgment Under Uncertainty: Heuristics and Biases," "Belief in the Law of Small Numbers," "Subjective Probability: A Judgment of Representativeness," and "On the Psychology of Prediction," are reprinted in Kahneman, Slovic and Tversky, op. cit.
[13] "Belief in the Law of Small Numbers," 25.
samples of whatever size will be representative of the population from which they are drawn. Thus,

In considering tosses of a coin for heads or tails, for example, people regard the sequence H-T-H-T-T-H to be more likely than the sequence H-H-H-T-T-T, which does not appear random, and also more likely than the sequence H-H-H-H-T-H, which does not represent the fairness of the coin.[14]
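The arithmetic behind the coin example is worth making explicit: under a fair coin, every particular ordered sequence of six tosses has exactly the same probability, (1/2)^6 = 1/64, however "random" or "unrepresentative" it happens to look. A minimal illustrative sketch (the function is my own, not from the psychological literature):

```python
# Under a fair coin, each exact ordered sequence of tosses is equally
# likely, regardless of how representative of "randomness" it appears.
def sequence_probability(seq, p_heads=0.5):
    """Probability of observing this exact ordered toss sequence."""
    prob = 1.0
    for toss in seq:
        prob *= p_heads if toss == "H" else 1 - p_heads
    return prob

# The three sequences from the quotation above:
for seq in ["HTHTTH", "HHHTTT", "HHHHTH"]:
    print(seq, sequence_probability(seq))  # each is 0.015625, i.e. 1/64
```

Subjects who rank H-T-H-T-T-H above the others are judging the sequences by their resemblance to a prototype of randomness, not by their probability, which is identical in all three cases.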
By the same token, more than half of all subjects tested disregarded sample size in answering the following question.

A certain town is served by two hospitals. In the larger hospital about 45 babies are born each day, and in the smaller hospital about 15 babies are born each day. As you know, about 50 percent of all babies are boys. However, the exact percentage varies from day to day. Sometimes it may be higher than 50 percent, sometimes lower. For a period of 1 year, each hospital recorded the days on which more than 60 percent of the babies born were boys. Which hospital do you think recorded more such days?

The larger hospital
The smaller hospital
About the same[15]
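The statistically correct answer is the smaller hospital: proportions fluctuate more in small samples. The exact binomial probabilities behind the question can be computed directly; in the sketch below (the function name is mine, the 60-percent threshold comes from the question), a 15-birth day exceeds 60 percent boys roughly twice as often as a 45-birth day:

```python
from math import comb

def prob_more_than_60pct_boys(n, p=0.5):
    """P(X > 0.6*n) for X ~ Binomial(n, p): the chance that strictly more
    than 60 percent of the n babies born on a given day are boys."""
    threshold = (3 * n) // 5  # largest boy-count NOT exceeding 60% of n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(threshold + 1, n + 1))

small = prob_more_than_60pct_boys(15)  # about 0.151
large = prob_more_than_60pct_boys(45)  # about 0.068
print(small, large)
```

Subjects who answer "about the same" are, in effect, treating the sample proportion as equally stable at every sample size, which is just the law of small numbers at work.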
Although these examples are somewhat artificial, there can be no doubt that inferences from sample to population are ubiquitous; they are an ineliminable part of everyday life. It is thus no wonder that Tversky and Kahneman remark that,

The true believer in the law of small numbers commits his multitude of sins against the logic of statistical inference in good faith. The representation hypothesis describes a cognitive or perceptual bias... [The true believer's] intuitive expectations are governed by a consistent misrepresentation of the world...[16]
Not unrelated to the representativeness heuristic is what Tversky and Kahneman call the availability heuristic.

For example, one may assess the divorce rate in a given community by recalling divorces among one's acquaintances; one may evaluate the probability that a politician will lose an election by considering various ways in which he may lose support; and one may estimate the probability that a violent person will "see" beasts of prey in a Rorschach card by assessing the strength of association between violence and beasts of prey. In all of these cases, the estimation of the frequency of a class or the probability of an event is mediated by an assessment of availability. A person is said to employ the availability heuristic whenever he estimates frequency or probability by the ease with which instances or associations could be brought to mind.[17]
[14] "Judgment Under Uncertainty: Heuristics and Biases," 7.
[15] Ibid., 6.
[16] "Belief in the Law of Small Numbers," 31.
[17] "Availability: A Heuristic for Judging Frequency and Probability," in Kahneman, Slovic and Tversky, op. cit., 164.
The evidence that the availability heuristic is routinely employed is quite substantial, as is the evidence that reliance on this heuristic will lead to widespread error.[18] Here too, Tversky and Kahneman are alive to the ubiquity of this heuristic's effects.

Perhaps the most obvious demonstration of availability in real life is the impact of fortuitous availability of incidents or scenarios. Many readers have experienced the temporary rise in the subjective probability of an accident after seeing a car overturned by the side of the road. Similarly, many must have noticed an increase in the subjective probability that an accident or malfunction will start a thermonuclear war after seeing a movie in which such an occurrence was vividly portrayed. Continued preoccupation with an outcome may increase its availability, and hence its perceived likelihood. People are preoccupied with highly desirable outcomes, such as winning the sweepstakes, or with highly undesirable outcomes, such as an airplane crash. Consequently, availability provides a mechanism by which occurrences of extreme utility (or disutility) may appear more likely than they actually are...[19]
The representativeness and availability heuristics are only two of the many patterns of inference which are now well documented in the literature, but these two examples should be sufficient to give the flavor of the literature. If Tversky and Kahneman are right, and if this literature is to be taken at face value, then our inferences are governed by principles which systematically mislead us and present us with a "consistent misrepresentation of the world."[20] The question is whether the literature should be taken at face value.

Competence and Performance

Some authors have suggested that the proper description of our reasoning requires a competence/performance distinction, analogous to the distinction made by linguists in describing our command of grammar. Thus, Elliot Sober suggests,

Instances of irrationality are consistent with psychologism [the view that we arrive at our beliefs in just the way we ought to] in the same way that grammatical slips are consistent with viewing native speakers as having internalized the grammar of their language. In both cases, one posits a mechanism which endows people with perfect rationality or with perfect grammaticality, and then one posits in addition various devices which provide interferences with the smooth functioning of the basically 'correct' mechanism. Instances of ungrammatical utterance or irrational belief and behavior are to be explained as the impingements of interferences like lapses of memory, headaches, substantive prejudice, and so on. [1978, 177-78]
[18] Ibid.; Michael Ross and Fiore Sicoly, "Egocentric Biases in Availability and Attribution"; Shelley Taylor, "The Availability Bias in Social Perception and Interaction"; and Kahneman and Tversky, "The Simulation Heuristic"; all reprinted in Kahneman, Slovic and Tversky, op. cit. See also Nisbett and Ross, op. cit., 18-23.
[19] "Availability: A Heuristic for Judging Frequency and Probability," 178.
[20] "Belief in the Law of Small Numbers," 31.
It is always possible, of course, that the underlying principles which govern our reasoning are not the ones in accord with which we ought to reason. Sober suggests, however, that

One sign that the positing of perfect rationality or grammaticalness is preferable is that the 'errors'--the lapses in rationality or grammaticalness--are occasional and unsystematic. [178]
The suggestion that a competence/performance distinction must be applied in describing human reasoning is also made by L. J. Cohen and John Pollock.[21] In this section, I examine whether a competence/performance distinction is required for an accurate and illuminating description of human reasoning. I argue that it is required for such a description. I do not believe, however, that the appropriateness of drawing such a distinction demonstrates the truth of psychologism.

It will be best to begin by way of an example. I recently replaced the faucet on my bathroom sink. After connecting the faucet and fully opening the valves, I turned both the hot and cold handles. Only a trickle of water came out. I turned them off and tried again, with much the same result. As a plumber I am a crude inductivist, and so I tried this several more times, again with the same result. I then examined the inner workings of the faucet, and found that a bit of one of the parts had broken off and lodged itself in such a way as to block the flow of water. When it was removed, the faucet worked perfectly.

Consider the workings of the faucet prior to removal of the blockage. It does not work properly, and its failure is neither occasional nor unsystematic. Nevertheless, a natural way of describing the faucet would consist of a description of a mechanism which works perfectly--the faucet after the blockage has been removed--together with a description of an interfering factor. Such a description, in effect, gives us a recipe for repairing the faucet. By describing a part of the mechanism as an interfering factor, we indicate that if this part is eliminated, the faucet will work properly. Indeed, such a description is far more useful, if we are interested in repairing the faucet, than one which simply describes the blockage as another of the faucet's many parts.
It should be obvious that different descriptions of the faucet, each of them perfectly accurate, will be conducive to different purposes. No doubt the same is true of our belief generating equipment. If our interest in epistemology is cognitive repair, that is, improving the manner in which we arrive at our beliefs, a description of our cognitive equipment which divides it into a perfectly working mechanism and a variety of interfering factors will serve our purposes admirably. This is precisely the description which advocates of the competence/performance distinction promise.

[21] See works cited above.
Such a description, however, is not always possible. Let us return to the bathroom faucet. We may describe it as a perfectly working faucet together with interfering factors, but we may not accurately describe it as a perfectly working telephone together with interfering factors. This has nothing to do with the purpose for which it was designed; the faucet may make a perfectly good hammer if certain parts are sawed off, even though it wasn't designed for that purpose. By the same token, that we want to use something for a certain purpose does not guarantee that we may accurately describe it as a perfect device for achieving that purpose, together with interfering factors. It is surely an open empirical question as to whether our cognitive equipment may accurately be described as a perfect reasoning mechanism together with interfering factors; there is no a priori reason to think that it can. We will return to this issue later.

Even if such a description of our cognitive equipment can be given, it is of no help to the defender of psychologism. Indeed, quite the reverse. Consider the following three questions about my bathroom faucet, prior to its repair, analogous to the three questions about belief acquisition raised at the beginning of this paper.

(1) How ought the faucet to work?
(2) How does the faucet work?
(3) Does the faucet work in the way it ought to?
While it is a significant fact about the faucet that an answer to question (2) may consist of a description of a perfectly working faucet together with a description of interfering factors, the fact remains that the answer to question (3) is "no." As long as there are any interfering factors to be described, the faucet does not work in the way it ought to. Thus, in admitting that, at best, a description of our cognitive equipment will have to include factors interfering with the smooth working of a perfect reasoning device, the defender of psychologism is forced to admit that psychologism is false. We simply do not reason in the way we ought to.

One further point should be made here about trying to describe our cognitive equipment in terms of a perfect reasoning mechanism together with interfering factors. This kind of description may give the impression that these two parts of our cognitive equipment differ in their manipulability. In particular, it may suggest that the perfect reasoning mechanism is unalterable, or alterable only with great difficulty, while the interfering factors are easily changed or done away with. It should be clear, however, that the question of manipulability is entirely separate from the division of the description of the reasoning mechanism into two parts. There are clearly devices which consist of a perfectly working part together with interfering factors, where the interfering factors simply cannot be done away with. There are
also devices where the part which works perfectly may easily be modified. For those interested in cognitive repair, the real issue is modifiability, and not whether we may divide our description of the belief generating mechanism into a part which reasons perfectly and factors which interfere with it. No evidence has ever been offered for the claim that we have a perfect reasoning mechanism whose operation is fixed, while factors responsible for interfering with it are easily eliminated.

What then may we conclude about the appropriateness of drawing a competence/performance distinction? On any reasonable account of how human beings reason, we will need to distinguish between the operation of the reasoning mechanism itself and various factors which interfere with it. Headaches, to take an example of Sober's, should be described as factors interfering with the underlying reasoning mechanism and not as part of the reasoning mechanism itself. Everyone will thus need to draw a distinction between the reasoning mechanism and factors which interfere with it. If this is all that is meant by the suggestion that we should draw a distinction here between reasoning competence and reasoning performance, then the distinction should be entirely uncontroversial.

Most people who draw this distinction, however, mean to build into it far more than this, as the analogy with linguistics should make plain. The suggestion that the rules which govern the operation of the reasoning mechanism are the rules according to which we ought to reason goes far beyond the uncontroversial suggestion that we need to draw a line around the reasoning mechanism and distinguish it from factors which interfere with it. For all that has been said thus far, once we discover the principles by which we reason, we might more naturally think of them as an underlying incompetence; our reasoning might be governed by principles which we would not wish to endorse.
If the claim that a competence/performance distinction applies to our reasoning mechanism is meant to rule out this possibility, as it clearly is in the cases of Sober, Cohen and Pollock, it will require substantial empirical argument to show that it is appropriate to draw such a distinction. Finally, even if this distinction is appropriate, we must remember that the issue of the modifiability of our cognitive mechanisms is entirely separate from the appropriateness of a competence/performance distinction. The appropriateness of such a distinction does not in any way entail the truth of psychologism.

How Do We Detect Our Errors?

Let us grant then that we need to draw a distinction between the principles which govern our reasoning and the many factors which may interfere with normal inference. What does the evidence suggest about the principles which govern the reasoning mechanism?
It has sometimes been suggested that our ability to detect our own errors shows that the principles governing our reasoning are normatively correct. Thus, Elliot Sober comments,

One stumbling block for hypotheses asserting that our inferential devices have fallacies wired into them is accounting for how we learn to recognize those fallacies. How is it that we can recognize the errors we are vulnerable to? One natural pattern of explanation is that we think ourselves back to a set of more fundamental principles and then see, in the light of them, that the pattern in question was a mistake. If the fallacious patterns that we discard were in fact fundamental, it is hard to see how our other principles could provide this sort of perspective. Not that this difficulty is in principle insurmountable; but the recognition of irrationality is a problem that irrationality hypotheses must come to grips with.[22]
John Pollock is less cautious.

There has been a lot of recent work in psychology concerning human irrationality. Psychologists have shown that in certain kinds of epistemic situations people have an almost overpowering tendency to reason incorrectly... It might be tempting to conclude from this that, contrary to what I am claiming, people do not know how to reason.[23] The short way with this charge is to note that if we did not know how to reason correctly in these cases, we would be unable to discover that people reason incorrectly.[24]
Sober and Pollock thus suggest, paradoxically, that the discovery that people often seem to reason badly is evidence, or proof, that the principles governing our reasoning are correct as they stand. This conclusion should be resisted.

Consider an analogy.[25] Many personal computers have a set of built-in diagnostic procedures. When these machines are turned on, the machine performs a self-test. If the machine is working properly, the diagnostic procedure will indicate that the machine is working properly. If the machine is not working properly, however, the diagnostic procedure will indicate where the defect lies.[26] Sober's conclusion would suggest that if a machine indicates that it is not working properly, this is evidence that it is working properly. Pollock's conclusion would suggest that if a machine indicates that it is not working properly, this is proof that it is working properly. These conclusions cannot be right.

There is nothing the least bit puzzling about the suggestion that human beings may reason in accord with principles some of which may be less than normatively appropriate and that, at the same time, at least some of these errors may be discoverable. Like defective personal computers, we may have principles which govern our reasoning, some of which are incorrect, and some of which are correct. At times, we may discover the mistakes which we are prone to by examining our reasoning. At other times, we may believe that we have found errors in our reasoning, and yet be mistaken about this precisely because the supposed errors were discovered by means of mistaken principles of inference. We may also examine our reasoning and find no errors at all. At times this will be because we have correctly determined that we are reasoning well. At other times this will be because we have applied inaccurate principles of reasoning to the task of examining our principles of reasoning.[27] The fact that we may sometimes discover our errors does not in any way suggest, let alone prove, that all of the principles which govern our reasoning are correct.

My discussion may seem to ignore Sober's suggestion that we need to distinguish between fundamental principles of reasoning, which are likely to be correct, and less fundamental principles, which need not be correct. I do not believe, however, that the data motivate this distinction. Moreover, insofar as my analogy with computer self-testing incorporates Sober's distinction of levels, I have already granted too much. The fact that we sometimes discover our own errors does not in any way suggest that correct principles of reasoning are more fundamental, psychologically, than incorrect principles of reasoning. After all, we sometimes mistakenly believe ourselves to have erred; this does not show that mistaken principles of inference are more fundamental than correct ones. It may well be that we have numerous principles built into us which are, from a psychological point of view, equally fundamental, and that these principles are not logically consistent with one another.

[22] "Panglossian Functionalism and the Philosophy of Mind," 174.
[23] Pollock equates knowing how to reason with reasoning correctly.
[24] Op. cit., 132, n. 7.
[25] This analogy was suggested to me by William E. Mann.
[26] This assumes, of course, that the diagnostic procedure is working properly, which is not always the case. It is typically the case, however. All that is needed for purposes of the analogy is that the diagnostic procedure may sometimes function when the machine is otherwise malfunctioning. There would be no point in building such procedures into machines were this not possible.
When conflicts among the principles are discovered, we may sometimes favor the correct principles and at other times favor the incorrect principles. Which principles are favored will be largely a matter of what our collateral information is, as well as where our attention is focused. From a psychological point of view, however, the fact that we sometimes correct our errors is no more revealing than that we sometimes retract perfectly good reasoning.

We have thus far found no reason not to take the psychological evidence at face value. If psychologists have indeed discovered common errors in reasoning, their ability to discover these errors does not thereby undermine their claim. We may with perfect consistency say both that human beings are governed by principles of reasoning some of which are mistaken, and also that at least some of these errors are ones which we may discover.
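The computer analogy is easy to make concrete. In the toy sketch below (the components and the arithmetic task are invented for illustration), one component is wired incorrectly while the self-test routine applies correct rules; the diagnostic can therefore report a genuine fault, and its doing so is evidence of malfunction, not of proper working:

```python
def adder(x, y):
    # a correctly wired component
    return x + y

def doubler(x):
    # a defective component: off by one
    return 2 * x + 1

def self_test():
    """Check each component against cases whose answers are known.
    The test itself uses correct rules, even though part of the
    machine it examines does not."""
    faults = []
    if adder(2, 3) != 5:
        faults.append("adder")
    if doubler(4) != 8:
        faults.append("doubler")
    return faults

print(self_test())  # ['doubler']: the defect is found, yet the machine
                    # as a whole still does not work as it ought to
```

The point transfers directly: a partly correct set of inferential principles may expose errors in the rest, so discovering our mistakes presupposes only that some of our principles are sound, not that all of them are.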
[27] I have discussed a number of cases of precisely this sort in "Introspection and Misdirection," op. cit.
How Bad Are Our Inferences?

The claim that we reason badly and that we may discover just how badly we reason is not internally inconsistent. Is it true? Those philosophers and psychologists who claim that it is true have typically held that the data speak for themselves. Thus, Stephen Stich comments,
...Nisbett and Ross argue that the primitive representativeness heuristic plays a central role in psychoanalytic inference and in contemporary lay inference about the causes of disease, crime, success, etc. The normative inappropriateness of the heuristic in these settings is, I should think, beyond dispute.[28]
There is much to be said for this kind of appeal. It is, after all, hard to see what could be more obvious than that these inferences are somehow defective. Since a direct non-question-begging argument for this conclusion would require premises more obvious than it is itself, it seems unlikely that such an argument will be forthcoming. We may nevertheless seek to evaluate Stich's assessment by examining the account of proper inference which is presupposed here. If a reasonable account of proper inference is seen to underlie Stich's assessment, the claim that these inferences are defective will receive additional support. If no such account is forthcoming, our enthusiasm for the claim of defectiveness must diminish. Consider what Nisbett and Ross themselves say about the inferences Stich discusses.
...
people's inferential failures are cut from the same cloth as their inferential successes are. Specifically, we contend that people's inferential strategies are well adapted to deal with a wide range of problems, but that these same strategies become a liability when they are applied beyond that range...29
A single inferential strategy may thus be highly successful in one domain, and yet highly unsuccessful in others. The same point is put in Darwinian perspective by Stich.
...
inferential strategies which generally get the right answer may nonetheless be irrational or normatively inappropriate when applied outside the problem domain for which they were shaped by natural selection.30
This point seems to presuppose an account of how we ought to arrive at our beliefs: beliefs should be produced by processes which are reliable in all domains to which they are applied. To say that belief producing processes must be reliable in all such domains is not to say that they must be perfectly reliable. The requirement of perfect reliability would clearly rule out as unreasonable any inductive inference at all. Stich does not mean to be making the trivial point that the inferences Nisbett and Ross discuss are inductive and therefore less than perfectly reliable. Rather, he is making the more interesting claim that these inferences fail to meet a far more forgiving standard. Although these inferences are reliable in some domains, there are other domains in which they are not even minimally reliable. We need not ask for precise standards of reliability in order to make sense of this claim.

If this account of how we ought to reason is to be properly assessed, however, some explanation will be needed of what is to count as a domain. The suggestions made by both Nisbett and Ross and Stich have a good deal of intuitive appeal; there do indeed seem to be different problem domains in which one and the same inferential strategy may be applied with different degrees of success. But some account is needed as to what constitutes such a domain. Stich's allusion to natural selection may suggest that the task of individuating domains may be left to evolutionary biology. At least some of our inferential strategies may be explained, we may suppose, by appeal to a certain range of situations in which these strategies arose; this range of situations would then constitute a single domain. As Stich himself notes, the fact that an inferential strategy arose in a certain domain does not in any way guarantee that it is reliable in that domain. Some inferential strategies will have survival value that is in no way dependent on the truth of the beliefs they produce. In any case, deference to evolutionary biology may help us to define one domain-the domain in which that strategy evolved-for each inferential strategy. This cannot, however, be enough.

28 "Could Man Be an Irrational Animal?" in Hilary Kornblith, ed., op. cit., 259.
29 Op. cit., xii.
30 Op. cit., 260.
What we need is some account of problem domains which will allow us to say of a given inferential strategy that it is reliable in some domains and unreliable in others. Evolutionary biology by itself cannot perform this job because it selects a single domain for each inferential strategy and then lumps all others together as the complement of that domain. The intuition behind Stich's assessment, however, is that we need to be able to define many domains for each inferential strategy and then assess these strategies in each of the many domains. What I wish to argue is that any reasonable account of how to select domains which is consistent with the spirit of the comments of Nisbett and Ross and Stich will make reliability in all domains no more reasonable a standard against which to measure inductive inference than perfect reliability. Consider the case of perception. Perceptual processes are, as everyone agrees, highly reliable processes of belief acquisition. It is clear, however, that there are certain domains in which standard perceptual processes will tend to produce false, rather than true, beliefs. If we consider the standard
visual illusions, for example, we see that, like the inferential strategies already discussed, perception is extremely well suited to operation in certain domains, and ill suited to operation in others. What would it take to improve on standard perceptual processes so as to make them reliable in all domains? It seems we would need some kind of mechanism which is sensitive to the differences between those domains in which perception is reliable and those in which it is not. By the same token, what would be required in the case of the various inferential processes discussed would be a set of additional mechanisms sensitive to the differences between those environments in which the inferential mechanisms are generally reliable and those in which they are not. In each case, these additional mechanisms would play the role of overseer. They would allow the mechanisms of belief acquisition which we currently have to work unimpeded in those environments in which they are successful, while they would interfere with their workings in the environments to which they are not well suited.

One thing which inclines us to speak of different problem domains in the perceptual case is simply that there are certain presuppositions about the environment built into our perceptual systems and, although these presuppositions are typically true, they are not always true.31 In the standard case, when the presuppositions of our perceptual systems are true, we are able accurately and quickly to identify objects around us. When these presuppositions are false, we are liable to err. The perceptual systems are perfectly reliable when all of the presuppositions are true. In domains where these presuppositions are false, perception will be unreliable. The natural way to define problem domains is thus in terms of the truth or falsity of the presuppositions of the system of belief acquisition in question.
As a result, any system of belief acquisition which builds in presuppositions about the environment which are not universally true-and this surely includes all of the heuristics governing induction, if not all principles governing belief acquisition in general-will be unreliable in some domains.

Someone might argue32 that there is ample motivation for talk of different problem domains in perception and in reasoning, even apart from the account just given of the presuppositions of our perceptual systems. After all, we talk of vision being more reliable in good light than in bad; audition is said to be more reliable when there is little background noise rather than a lot; and so on. Perhaps these very reasonable judgments of comparative reliability can form the basis of an account of perceptual and inferential domains which is different from the one just given. If such an account of perceptual domains can be provided, it may not be true that unreliability in some domains is an inevitable feature of any belief acquisition device. The standard of reliability in all domains might thus provide us with a feasible ideal.

It is not at all clear that such an alternative account of perceptual and inferential domains can be provided. Although it is certainly true that many common sense judgments about comparative reliability may be naturally formulated in terms of domains, some of these common sense judgments are merely the pre-scientific approximations to the more accurately formulated account of the presuppositions of our perceptual systems. In these cases, of course, our common sense judgments do not lead to an alternative taxonomy. It would be unreasonable to think, however, that all our common sense judgments of comparative reliability lead to an account of domains which merely approximate the account provided by the cognitive sciences. Some of these judgments are too idiosyncratic; e.g., Jack is better in judging the age of men than he is in judging the age of women. Others involve categories which are unlikely to play a role in the cognitive sciences; e.g., Mary is more accurate in her judgments about Burgundy than her judgments about Bordeaux. While such comparisons may be perfectly legitimate, I do not believe that they may be called upon to ground a system of taxonomy which will make sense of the ideal of reliability in all domains. These common sense judgments, after all, may provide us with domains having little or no psychological unity; they need answer to nothing more than the interests of the agents making the judgments of comparative reliability. Assessments of reliability relative to such domains need not be false, of course, but at the same time there is no reason to think that, whatever an agent's interests in evaluation may be, they need automatically provide us with an account of domains which make for a feasible ideal.

31 See, e.g., David Marr, Vision, Freeman, 1982; Irvin Rock, The Logic of Perception, Bradford Books/MIT Press, 1983; Shimon Ullman, The Interpretation of Visual Motion, MIT Press, 1979.
32 Indeed, a referee for this journal made this argument. I borrow liberally from the referee's report in stating this objection.
By the same token, the fact that domains of this sort are defined solely by the evaluator's subjective interests seems to rob this account of the very objectivity required of an account of cognitive ideals. While I cannot claim to have proven that any possible account of cognitive domains will automatically make the ideal of reliability in all domains wholly unfeasible, the above considerations surely do shift the burden of proof onto those who would claim to have such a feasible ideal. The psychologically realistic way of working out the idea of cognitive domains does not allow for the possibility of reliability in all domains. It is not clear that there is an alternative available to work out this idea.

I am not suggesting that we should not think of our systems of belief acquisition as reliable in some domains and not in others. Indeed, I think that Nisbett and Ross are quite correct in arguing that this is an illuminating way to look at inference. Once we realize, however, what the psychological motivation is for defining the different problem domains, we see that the requirement of reliability in all domains is completely unrealistic. The fact that our inductive inferences are unreliable in some domains does not distinguish them from the principles governing the acquisition of our perceptual beliefs, nor, indeed, does it distinguish them from any of the psychologically deep mechanisms of belief acquisition or retention. This by itself does not show that our inductive inferences are normatively acceptable as they stand, for the structural feature of reliability in some domains and unreliability in others is compatible with both acceptable and unacceptable inference. What I have attempted to show in this section is that the defects in our inductive inferences, if any, are far more difficult to demonstrate than first meets the eye. It should be clear by now that any attempt to defend the view that our basic inductive strategies are fundamentally defective must do far more than simply rehearse the now familiar data. If there are significant defects to be found here, these defects will require careful elaboration.

The Limitations of Computability

If the reliability in all domains account of how we ought to reason presents us with an unrealistic ideal, why not simply appeal to the canon of statistical inference? Indeed, this is precisely the standard against which many writers in this field measure human inference. Thus, Kahneman and Tversky comment, "In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction."33 Why shouldn't we simply compare the ways in which people reason against some standard account of proper statistical inference? As Gilbert Harman has argued,34 such a standard is unattainable. Proper use of statistical inference would require appeal to a principle of conditionalization: probabilities of propositions would need to be updated in light of new evidence.
Harman notes,

If one is prepared for various possible conditionalizations, then for every proposition P one wants to update, one must already have assigned probabilities to various conjunctions of P together with one or more of the possible evidence propositions and/or their denials. Unhappily, this leads to a combinatorial explosion, since the number of such conjunctions is an exponential function of the number of possibly relevant evidence propositions. In other words, to be prepared for coming to accept or reject any of ten evidence propositions, one would have to record probabilities of over a thousand such conjunctions for each proposition one is interested in updating. To be prepared for twenty evidence propositions, one must record a million probabilities. For thirty evidence propositions, a billion probabilities are needed, and so forth.35

33 "On the Psychology of Prediction," reprinted in Kahneman, Slovic and Tversky, op. cit., 48.
34 Change in View: Principles of Reasoning, Bradford Books/MIT Press, 1986, 25-27. See also Christopher Cherniak, op. cit.
35 Ibid., 25-26.
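Harman's figures are just the arithmetic of truth-assignments: to stand ready to conditionalize on any combination of n evidence propositions (each of which may be accepted or denied), one must store a probability for each of the 2^n conjunctions. The following Python sketch (mine, not the article's) makes the growth concrete:

```python
def conjunctions_needed(n_evidence: int) -> int:
    """Probabilities that must be stored, per proposition to be updated,
    to be prepared to conditionalize on any combination of n_evidence
    evidence propositions (each either accepted or rejected): one
    conjunction per truth-assignment, i.e. 2 ** n_evidence."""
    return 2 ** n_evidence

# Harman's three cases: over a thousand, a million, a billion.
for n in (10, 20, 30):
    print(f"{n} evidence propositions -> {conjunctions_needed(n):,} stored probabilities")
```

Running the loop prints 1,024, then 1,048,576, then 1,073,741,824 — matching Harman's "over a thousand," "a million," and "a billion," and showing why the growth swamps any computational device, not just human reasoners.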
Harman's point is not that it would be inconvenient for us to calculate probabilities. Rather, the kinds of calculations required are, from a practical point of view, unfeasible. In discussing inductive inference, Quine has commented that "the Humean predicament is the human predicament."36 But the problem here is not merely a problem for human beings. The problem of combinatorial explosion is one which faces any computational device whatsoever. It is really a feature of the problem itself, rather than a feature peculiar to our, or anything else's, means of solving the problem. Probabilistic rules of inference are computationally intractable. It would thus be absurd to complain that human inference fails to measure up to such a standard of proper reasoning. An acceptable account of how we ought to reason must surely be one which is not as deeply impossible to implement as the problem of combinatorial explosion shows statistical inference to be. A reasonable ideal must, at a minimum, be computationally feasible.

Obvious as this requirement is, it has important and surprising implications for any attempt to evaluate human inference, as Harman's point about statistical inference illustrates. The requirement of computational feasibility presents a substantive constraint on any theory of how we ought to reason. We should welcome such constraints, for they help not only to narrow the class of candidate idealizations, but to inform our conception of the ideal. It is only against the background of a sound idealization that we may reasonably evaluate our native inferential tendencies. Too much of the current literature on human inference measures our cognitive equipment against unrealizable ideals.

The Natural Solution

Let me suggest a way of looking at the literature on human inference which is, I believe, more realistic.37 I cannot defend this view here,38 but I hope the foregoing remarks are sufficient to make this kind of approach worthy of further investigation.
Human beings are extremely well suited to obtaining information about the environments in which they tend to be found. Reliably obtaining information about the environment requires processes of belief acquisition which are tailored to the environment, for there are no computationally tractable processes of information gathering that are environment independent. As a result, our inferences build in substantive assumptions about the environment; and while these assumptions are typically true, they are sometimes false. It is precisely when the assumptions are false that our inferential tendencies lead us astray. This is the price we pay, however, for efficient acquisition of information in normal environments. Indeed, if we are to obtain information about the environment at all, we must allow for processes which will fail us in non-standard environments.

36 "Epistemology Naturalized," reprinted in Hilary Kornblith, ed., op. cit., 17.
37 The view suggested here is closely related to those suggested by Herbert Simon, Models of Man, Wiley, 1957, and The Sciences of the Artificial, second edition, MIT Press, 1981; James March, "Bounded Rationality, Ambiguity, and the Engineering of Choice," Bell Journal of Economics 9 (1978), 587-608; and Christopher Cherniak, op. cit.
38 I elaborate upon and defend this view at length in my Inductive Inference and Its Natural Ground, Bradford Books/MIT Press, 1993.

These points need to be kept in mind when we seek to evaluate our native inferential processes. We should not hold them up to Cartesian standards of infallibility, nor should we expect them to operate reliably in all possible environments by means of computationally intractable calculations. If we keep these essential limitations in mind, we will look far more charitably on the data psychologists have obtained. What they suggest is that our inferences constitute the natural solution to an extraordinarily difficult problem of information processing. This is not to say that anything which nature builds into us is thereby epistemically respectable. Rather, I simply wish to urge a certain caution and circumspection in the evaluation of our native inferential tendencies. We should be slow to attribute error or irrationality to human beings on the basis of prima facie shortcomings in our inferences. These apparent shortcomings often hide a deeper wisdom.

If I am correct in my interpretation of the literature on human inference, one kind of motivation for engaging in epistemological theorizing may well be undercut. We already reason quite well without the aid of epistemology. It is not the falsity of foundationalism which threatens to rob epistemology of its significance, as some have claimed. Nor is it that the naturalization of epistemology prevents us from evaluating, rather than merely describing, our processes of belief acquisition; it does no such thing.
Epistemology may turn out to have less significance than one might antecedently have thought for the simple reason that we may already acquire our beliefs in a remarkably efficient and reliable manner unselfconsciously and without the aid of theorizing. This may be bad news for the practical significance of epistemology, but it would surely be good news all things considered.
39 I have elaborated on this theme in "The Unattainability of Coherence," in J. Bender, ed., The Current State of the Coherence Theory, Kluwer, 1989, 207-14.