Cartographies of the Mind
Studies in Brain and Mind Volume 4
Series Editors: John W. Bickle, University of Cincinnati, Cincinnati, Ohio; Kenneth J. Sufka, University of Mississippi, Oxford, Mississippi
Cartographies of the Mind: Philosophy and Psychology in Intersection

Edited by
Massimo Marraffa Università Roma Tre, Rome, Italy
Mario De Caro Università Roma Tre, Rome, Italy
and
Francesco Ferretti Università Roma Tre, Rome, Italy
A C.I.P. Catalogue record for this book is available from the Library of Congress.
ISBN-10: 1-4020-5443-2 (HB)
ISBN-13: 978-1-4020-5443-3 (HB)
ISBN-10: 1-4020-5444-0 (e-book)
ISBN-13: 978-1-4020-5444-0 (e-book)
Published by Springer, P.O. Box 17, 3300 AA Dordrecht, The Netherlands. www.springer.com
Printed on acid-free paper
All Rights Reserved
© 2007 Springer
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.
Somebody who wants the truth becomes a scientist; somebody who wants to give free play to his subjectivity may become a writer; but what should somebody do who wants something in between?
Robert Musil
TABLE OF CONTENTS

Contributors xi

Preface xv

I. THE INTERPLAY OF LEVELS

1 Setting the stage: Persons, minds and brains (Massimo Marraffa) 3
2 Computational explanation and mechanistic explanation of mind (Gualtiero Piccinini) 23
3 Computationalism under attack (Roberto Cordeschi and Marcello Frixione) 37

II. DIMENSIONS OF MIND

4 Vision science and the problem of perception (Alfredo Paternoster) 53
5 Synaesthesia, functionalism and phenomenology (Fiona Macpherson) 65
6 Integrating the philosophy and psychology of memory: Two case studies (John Sutton) 81
7 Emotion and cognition: A new map of the terrain (Craig De Lancey) 93
8 Categorization and concepts: A methodological framework (Cristina Meini and Alfredo Paternoster) 105
9 Errors in deductive reasoning (Pierdaniele Giaretta and Paolo Cherubini) 117
10 Language and comprehension processes (Elisabetta Gola) 131

III. DIMENSIONS OF AGENCY

A. Self-Knowledge

11 The unconscious (Giovanni Jervis) 147
12 Self-deception and hypothesis testing (Alfred R. Mele) 159
13 Autonomous agency and social psychology (Eddy Nahmias) 169

B. Consciousness

14 The cognitive role of phenomenal consciousness (Tiziana Zalla) 189
15 The unity of consciousness: A cartography (Tim Bayne) 201
16 Extended cognition and the unity of mind: Why we are not “spread into the world” (Michele Di Francesco) 211

C. Agency and the Self

17 Extreme self-denial (Ralph Kennedy and George Graham) 229
18 Empirical psychology, transcendental phenomenology, and the self (Stephen L. White) 243
19 How to deal with the free will issue: The roles of conceptual analysis and empirical science (Mario De Caro) 255

D. Social Agency

20 The beliefs of mute animals (Simone Gozzano) 271
21 Naive psychology and simulations (Cristina Meini) 283
22 The social mind (Francesco Ferretti) 295
23 Social behaviors and brain interventions: New strategies for reductionists (Aaron Kostko and John Bickle) 309

References 319

Index of names 363

Index of subjects 369
CONTRIBUTORS
Tim Bayne is Lecturer in Philosophy at St. Catherine’s College, Oxford. His research interests include philosophical psychology and philosophical psychopathology. At present he is completing a book on the unity of consciousness.

John Bickle is Professor and Head of the Department of Philosophy and Professor in the Neuroscience Graduate Program at the University of Cincinnati. He is the author of Psychoneural Reduction: The New Wave (MIT Press 1998) and Philosophy and Neuroscience: A Ruthlessly Reductive Account (Kluwer 2003), co-author (with Ronald Giere and Robert Mauldin) of Understanding Scientific Reasoning, 5th Edition (Thomson 2005), and editor of The Oxford Handbook of Philosophy and Neuroscience (Oxford UP, forthcoming). His research interests include the philosophy of neuroscience, scientific reductionism, and the cellular and molecular mechanisms of cognition and consciousness, on which he has published over forty papers and book chapters.

Paolo Cherubini is Professor of Psychology in the Department of Psychology at the Università di Milano-Bicocca. His main research interests are in the cognitive experimental psychology of reasoning, thought, and decision making.

Roberto Cordeschi is Professor of Philosophy at the Università di Roma “La Sapienza”, where he teaches philosophy of science. He is the author of several publications on the history of cybernetics and on the epistemological issues of cognitive science and artificial intelligence, including The Discovery of the Artificial (Kluwer 2002).

Mario De Caro is Associate Professor in the Department of Philosophy at the Università Roma Tre; since 2000, he has also been teaching philosophy at Tufts University. Besides two books in Italian and several papers and book chapters in the areas of the philosophy of mind, philosophy of action and ethics, he is the editor of Interpretations and Causes: New Perspectives on Donald Davidson’s Philosophy (Kluwer 1999), and the co-editor, with David Macarthur, of Naturalism in Question (Harvard UP 2004) and Normativity and Nature (Columbia UP, forthcoming).
Craig De Lancey is Assistant Professor of Philosophy at the State University of New York at Oswego. His publications include Passionate Engines: What Emotions Reveal about Mind and Artificial Intelligence (Oxford UP 2001).

Michele Di Francesco is Professor of Philosophy at the Università Vita-Salute S. Raffaele in Milan, where he teaches philosophy of mind and philosophy of the cognitive sciences. His research interests include the ontology of mind and the philosophical consequences of cognitive science. In this connection he is working on issues concerning consciousness and the unity of the mind, mental causation, and emergentism. He is president of the Italian Society for Analytic Philosophy.

Francesco Ferretti is Lecturer in the Department of Philosophy at the Università Roma Tre, where he teaches philosophy and cognitive science. He has published a book (in Italian) on mental imagery, and at present he is completing a book on language, mind and human nature.

Marcello Frixione is Associate Professor of Philosophy at the Università di Salerno, where he teaches philosophy of language and logic. His research interests lie in the field of cognitive science, and include philosophical and epistemological issues, computation and cognition, and knowledge representation in artificial intelligence and robotics.

Pierdaniele Giaretta is Professor of Philosophy at the Università di Verona, where he teaches philosophy of science and logic. His primary research interests for most of the last decade have been Russell’s logic and philosophy of logic, formal ontology, and logic and reasoning. He has also worked on the role of ontology in knowledge representation and on the logical analysis of clinical diagnosis.

Elisabetta Gola is Lecturer in the Department of Philosophy at the Università di Cagliari, where she teaches philosophy of mind. Her research interests include artificial intelligence techniques applied to natural language comprehension and philosophical issues related to non-literal uses of meaning, on which she has published two books (in Italian) and numerous papers and book chapters.

Simone Gozzano is Professor of Philosophy at the Università di L’Aquila, where he teaches philosophy of mind. His interests include intentionality (he has written two books in Italian on the subject) and animal cognition (he has edited two books on this topic). At present he is completing a book on mental causation.

George Graham is the A.C. Reid Professor of Philosophy and Adjunct Faculty in the Graduate Program in Neuroscience at Wake Forest University in Winston-Salem. His research interests include philosophy of mind, philosophical psychopathology, and the conceptual foundations of cognitive science. Among his recent publications
are the following: Oxford Textbook in Philosophy and Psychiatry, with K.W.M. Fulford and T. Thornton (Oxford UP 2006); and “Self-Ascription: Thought Insertion” in J. Radden (ed.), The Philosophy of Psychiatry: A Companion (Oxford UP 2004).

Giovanni Jervis is Professor of Dynamic Psychology at the Università di Roma “La Sapienza”. A student of the ethnologist Ernesto De Martino, he has been doing research since the 1950s on social psychiatry (and psychology) and the foundations of psychoanalytic theories.

Ralph Kennedy is Associate Professor and Chair in the Department of Philosophy at Wake Forest University in Winston-Salem. He is currently working (with Michaelis Michael) on a paper on what it is to think of something as an object. His past papers include “How not to derive ‘is’ from ‘could be’: Professor Rowe on the ontological argument” (Philosophical Studies 1989) and, with Charles Chihara, “The Dutch book argument: its logical flaws, its subjective sources” (Philosophical Studies 1979).

Aaron Kostko is a Ph.D. student in the Department of Philosophy at the University of Cincinnati. His primary areas of specialization are philosophy of mind and philosophy of science, with particular emphasis on the emerging social neurosciences. His dissertation research attempts to provide an empirically informed account of the formation of self-conceptions and self-narratives, integrating findings from neuroscience and social psychology.

Fiona Macpherson is Lecturer in Philosophy at the University of Glasgow, where she established and directs the Centre for the Study of Perceptual Experience. She is spending the academic year 2005-2006 on secondment at the Centre for Consciousness in the Research School of Social Sciences at the Australian National University. She has published papers in the philosophy of mind in journals such as Noûs, Philosophy and Phenomenological Research and Pacific Philosophical Quarterly.

Massimo Marraffa is Lecturer in the Department of Philosophy at the Università Roma Tre, where he teaches philosophy and psychology. His research focuses primarily on issues in the philosophy of psychology and the foundations of cognitive science, on which he has published three books (in Italian) and several papers and book chapters.

Cristina Meini is Lecturer in the Department of Humanities at the University of Piemonte Orientale in Vercelli, where she teaches psychology. Her research focuses on the philosophy of cognitive processes. In particular, her interests include the ontogenesis and phylogeny of folk psychology, on which she has published two books (in Italian) and several papers.
Alfred R. Mele is the William H. and Lucyle T. Werkmeister Professor of Philosophy at Florida State University. He is the author of Irrationality (1987), Springs of Action (1992), Autonomous Agents (1995), Motivation and Agency (2003), and Free Will and Luck (2006), editor of Philosophy of Action (1997), and co-editor of Mental Causation (1993) and The Oxford Handbook of Rationality (2004), all published by Oxford University Press. He is also the author of Self-Deception Unmasked (Princeton UP 2001).

Eddy Nahmias is Assistant Professor in the Department of Philosophy and the Brains & Behavior program at Georgia State University. His research is in the philosophy of mind and moral psychology, focusing on questions about human agency: what it is, how it is possible, and how it accords with scientific accounts of human nature. Nahmias is currently writing a book manuscript, Free Will and the Sciences of the Mind.

Alfredo Paternoster is Associate Professor of Philosophy of Language at the Università di Sassari. His research interests include philosophy of language, philosophy of mind and the foundations of cognitive science, on which he has published two books (in Italian) and over 25 papers and book chapters.

Gualtiero Piccinini is Assistant Professor of Philosophy at the University of Missouri in St. Louis. He works primarily in philosophy of mind, with an eye to psychology, neuroscience, and computer science. His articles have been published in Philosophy of Science, Australasian Journal of Philosophy, Philosophical Studies, Synthese, Canadian Journal of Philosophy, Studies in the History and Philosophy of Science, Journal of Consciousness Studies, and Minds and Machines.

John Sutton is Associate Professor and Head of the Department of Philosophy at Macquarie University in Sydney. He works in the philosophy of cognitive science and the history of science, and is the author of Philosophy and Memory Traces: Descartes to Connectionism (Cambridge UP 1998) and co-editor of Descartes’ Natural Philosophy (Routledge 2000). His recent work is on skill memory, autobiographical memory, language and memory, dreaming, distributed cognition, and the extended mind hypothesis.

Stephen L. White is Associate Professor in the Department of Philosophy at Tufts University. He works in the philosophy of mind, epistemology, theory of action and ethics, and is the author of The Unity of the Self (MIT Press 1991) and The Necessity of Phenomenology (Oxford University Press, forthcoming).

Tiziana Zalla is a cognitive scientist at the French Centre National de la Recherche Scientifique (CNRS), presently working at the Institut Jean-Nicod in Paris. She has previously worked at the National Institutes of Health in Bethesda and at the Institut des Sciences Cognitives in Lyon. She has published many papers on consciousness, intentionality and knowledge of action in patients with schizophrenia and autism.
PREFACE
This book aims at exploring the potential for interaction between philosophy of mind (the area of philosophy that deals with our commonsense conception of mental matters, commonly called “folk psychology”) and the science of psychology. When we consider the relationship between these two domains of inquiry, we find a spectrum of positions.

At one extreme, there is the idea that the investigation of the mental is the exclusive prerogative of one of the two disciplines. This perspective, sometimes termed “isolationism”, can take two different forms. According to scientific isolationism, the problems of philosophy of mind are either illusory or the prerogative of scientific psychology. On this view, the proper business of the philosophy of mind is, at most, the accurate re-description of the problems traditionally regarded as its area of expertise, so that they can be handed over to empirical research. By contrast, philosophical isolationism claims that philosophy of mind can proceed quite independently of any scientific enterprise: either because the very idea of a “science” of mind is seen as some sort of Rylean category mistake; or, less radically, because philosophical inquiry is conceived as having a “purely conceptual” or “transcendental” character, and hence as constitutively autonomous from empirical research.

We believe that both forms of isolationism are to be rejected. Scientific isolationism is constantly at risk of losing the mental as its own object of study, replacing it with objects that belong to different levels of analysis. Philosophical isolationism, for its part, easily runs the risk of going around in circles within a conceptual framework that is assumed to be necessary and universal, but that manifestly rests on the dubious analytic/synthetic distinction.

Fortunately, there is a second, much more promising point of view, which can be called “interactionism”, according to which philosophy of mind and scientific psychology should interact in the attempt to offer an integrated picture of the mental. On this view, contrary to philosophical isolationism, philosophy of mind is constrained by the findings of empirical research; but, contrary to scientific isolationism, it makes an irreplaceable contribution to the study of the mental by imposing on scientific psychology (through a methodology different from that of the empirical investigation of the world) some crucial top-down constraints that derive from our folk psychological conceptual scheme.
In this perspective, the term “philosophy of psychology” is an appropriate label for the study of the interaction between an empirically informed philosophy of mind and a philosophically informed scientific psychology. This interaction consists in working back and forth between the ordinary image of ourselves as self-conscious, intentional, rational agents and the scientific conception of ourselves as biochemically implemented computational machines, revising these two images wherever necessary so as to pursue the regulative ideal of a coherent self-conception.

The book comprises three parts. In the first part, “The interplay of levels”, philosophy of psychology explores some foundational issues in scientific psychology. Here the focus is on the very possibility of a scientific psychology, with respect both to the legitimacy of its own level of analysis—the information-processing level—and to the relationship that this level entertains with, on the one hand, the lower level of the neurosciences and, on the other hand, the higher level of the philosophical reconstruction of our folk psychological conceptual scheme.

The second part of the book, “Dimensions of mind”, gets inside psychology: here the interactive approach is applied with the aim of clarifying issues and debates concerning some classical mental phenomena (vision, synaesthesia, memory, emotions, concepts, reasoning and language).

Finally, in the third part, “Dimensions of agency”, this approach comes to grips with some thorny issues which are traditionally considered impervious to projects of naturalization, and hence are often paraded as evidence in favor of philosophical isolationism (self-knowledge, consciousness, the self, free will and social agency). Unsurprisingly, the authors of the essays collected in this section of the book differ in their views about how harmonious the interplay of philosophical analysis and empirical investigation is likely to be with regard to these issues; everybody, however, agrees about its fruitfulness.

The Editors
Rome, November 2006
PART I THE INTERPLAY OF LEVELS
CHAPTER 1

SETTING THE STAGE: PERSONS, MINDS AND BRAINS

Massimo Marraffa
Over the last thirty years, the philosophy of science has become increasingly “local”. Its focus has shifted from the general features of the scientific enterprise to the concepts, theories, and practices of particular disciplines. Philosophy of neuroscience, philosophy of psychology, and philosophy of cognitive science are three results of this growing specialization.1 This chapter is a very short introduction to the philosophy of cognitive psychology, especially in its computational incarnation. Cognitive psychology investigates complex organisms at the information-processing level of analysis, and this can be described as a peculiar level in the sense that it is suspended between two worlds. On the one hand, there is the ordinary image of ourselves as persons, namely as self-conscious, intentional, rational agents. On the other hand, we have the subpersonal sphere of cerebral events, as investigated by neuroscience. Therefore, one of the main tasks for the philosopher of psychology is to unravel this peculiarity, trying to shed some light upon the relations between these different ways of describing ourselves. The following pages are dedicated to some classical attempts to accomplish this task. In the course of doing so, we shall draw a very quick sketch of the rise and development of cognitive psychology and cognitive science, setting the scene for the other chapters of this book.
1. FROM FOLK PSYCHOLOGY TO COGNITIVE SCIENCE

1.1 The form and the status of folk psychology
Folk psychology as a theory. To navigate through the social world, normal adults exercise a spontaneous capacity to “mentalize” or “mindread”, that is, to describe, explain, and predict their own and other people’s behavior on the basis of mental state attributions.2 According to the so-called “theory theory”, mindreading rests on a theory, or rather a proto-theory, often called “folk psychology”. This is a theory3 in the sense of being an integrated and coherent body of knowledge which organizes the multiform sphere of the mental essentially through two categories: qualia and intentional states. The former are the experiential or introspectible properties of mental states. Their essence seems to consist in their being captured from a subjective or first-person point of view—there is something that it is like to perceive a shade of red or to regret that Brutus killed Caesar. As a whole, these mental entities define the domain of phenomenal consciousness.4 In contrast, intentional states are states (such as believing, desiring, regretting, etc.) which have “direction toward an object” or “reference to a content”.5 If I believe that Brutus killed Caesar, my belief is directed toward an object or refers to a content, that is, what the sentence “Brutus killed Caesar” expresses. Intentional states are often termed “propositional attitudes” since—as the example shows—in ascribing them to a subject, we use sentences of the form “S believes (or desires, etc.) that p”, where the proposition p expresses the content of the subject S’s mental state. In any intentional state, the objects on which the state is directed are presented in a certain way; that is, the state has a representational character. When I believe that London is north of Paris, I represent a state of affairs in the form of a particular spatial relation between two objects. This point is often made by saying that intentional states are semantically evaluable, that is, they can be true or false—my belief that London is north of Paris is true if there is a fact in the world that makes it true.6
Compatibilism vs. Eliminativism. Social psychologists have investigated mindreading since at least the 1940s. In Heider and Simmel’s classic studies, subjects were presented with geometric shapes that were animated as if moving around in relation to each other. When asked to report what they saw, the subjects almost invariably treated these figures as intentional agents with motives and purposes, suggesting the existence of a universal and largely automatic capacity for mentalistic attribution.7 Pursuing this line of research would lead to Fritz Heider’s The Psychology of Interpersonal Relations (1958), a seminal book that is the main historical referent of the inquiry into folk psychology. In particular, it played a central role in the origination and definition of attribution theory, a field of social psychology that investigates the mechanisms underlying ordinary explanations of our own and other people’s behavior. Attribution theory is an offspring of Heider’s visionary work, but it embodies a quite different way of approaching folk psychology. Heider takes folk psychology at its real value as knowledge, arguing that “scientific psychology has a good deal to learn from common-sense psychology”.8 In contrast, most research on causal attribution is true to behaviorism’s methodological lesson and focuses on folk psychology’s naivetés.9

The contrast between these two attitudes toward the explanatory adequacy of folk psychology has shaped the philosophical debate on the fate of the ordinary image of ourselves in light of the tumultuous development of cognitive science. On this matter the basic issue is: will the theoretical entities invoked in folk psychology be part of the ontology of a serious scientific psychology? The answers range from Jerry Fodor’s “definitely yes”, based on the idea that propositional attitudes are the bedrock of a scientifically adequate psychology; to Stephen Stich’s “possibly not”, motivated by doubts about the folk concept of belief raised, inter alia, precisely by attribution theory;10 to Paul Churchland’s “absolutely not”, based on the idea that deliverance from folk concepts is the condition of psychology’s being reducible to neuroscience, and hence of its having a scientific nature. These two perspectives on the status of folk psychology—the former “compatibilist”, the latter “eliminativist”—are the coordinates that help us navigate through the complex conceptual landscape of the cognitive revolution. As we shall see, the rise of cognitive psychology is the result of the rejection of behaviorist eliminativism (subsection 1.2) in favor of a compatibilist project which represents a sort of “experimental mentalism”11 (subsection 2.1). Nevertheless, the eliminativist ghost has continued to haunt cognitive psychology, taking on ever new forms (subsection 2.2).

1.2 The rise and fall of behaviorism
Psychology as phenomenology. Both the classical empiricist and the classical rationalist pictures of introspective self-knowledge (or, in up-to-date terms, “first-person mindreading”) have granted it a special epistemic authority. According to Descartes, for example, the subject is transparent to itself, and the reflective awareness (conscientia) the mind has of its own contents provides knowledge enjoying a special kind of certainty, which contrasts with our knowledge of the physical world: the judgments about our current mental states and processes are infallible or, at least, incorrigible. In light of this traditional optimism about self-knowledge, it is not at all surprising that in the late nineteenth and early twentieth centuries scientific psychology was predominantly a psychology of introspective consciousness.12 Pursuing the project of making introspection a rigorous method of inquiry, which would upgrade psychology to the status of the other natural sciences, early experimental psychologists meticulously probed the contents of consciousness in an effort to offer a full description of the mental landscape as it appears to the subject. In short, this psychology was “a kind of phenomenological investigation of subjective self-awareness”.13
Eliminative behaviorism. By virtue of their mentalistic idiom, these introspectionist psychologists would have had no trouble talking to “poets, critics, historians, economists, and indeed with their own grandmothers. The nonspecialist reader in 1910 would be in equally familiar territory in William James’s Principles of Psychology and in the novels of James’s brother Henry”.14 John Watson’s brand of behaviorism put an end to this good relationship between scientific psychology and folk psychology, urging psychologists to abandon the introspectionist attempts to make consciousness a subject of experimental investigation. A psychology aspiring to scientific respectability had to rely instead on publicly observable data, that is,
patterns of responses (overt behavior) to stimuli (physical events in the environment). The outcome was an extremely austere conception of psychological explanation: the psychologist, equipped with nothing but Pavlov’s conditioning and Thorndike’s law of effect (the precursor of Skinner’s operant conditioning), had to chart associative connections between classes of environmental inputs (or histories of exposure to environmental inputs) and classes of behavioral outputs. What occurred in the “head”, between input and output, was a topic for physiology (the ultimate behavioral science). The organism was regarded as a “black box”.

Insofar as behaviorism removes inner states and processes from psychology’s explanation and ontology, it can be considered a variant of the doctrine of eliminativism.15 In its strongest form, eliminativism predicts that part or all of our folk psychological conceptual apparatus will vanish into thin air, just as happened in the past when scientific progress led to the abandonment of the folk theory of witchcraft and of the protoscientific theories of phlogiston and caloric fluid. This prediction rests on an argument which moves from the premise that folk psychology is a massively defective theory to the conclusion that—just like witches, phlogiston and caloric fluid—folk psychological entities do not exist. (Sometimes this negative ontological conclusion is replaced by the weaker conclusion that folk psychological entities will not be part of the ontology of a mature science.) The behaviorist version of eliminativism predicts that the scientific theory which replaces the seriously mistaken folk psychological theory will be couched in the vocabulary of physical behavior. Eliminative behaviorism is a recurrent theme in the writings of Watson and Skinner, although in some passages they waver between an eliminative interpretation of behaviorism—an ontological and explanatory thesis: mental entities do not exist, and hence the explanation of animal behavior will be non-mentalistic—and two other interpretations: (i) the methodological claim that mental entities exist but are irrelevant to the scientific study of animal behavior; and (ii) the semantic claim—known as “analytic” or “logical” behaviorism—that statements containing psychological terms are translatable into statements containing only terms referring to physical behavior. This latter is a reductive program: mental entities are not eliminated, but rather identified with dispositions to behave in certain ways under certain circumstances.16

One point is well worth emphasizing. As Larry Hauser rightly says, “although behaviorism as an avowed movement may have few remaining advocates”, some of its “metaphysical and methodological challenges” are still very much alive.17 First and foremost, the fundamental objection that Skinner had to mentalistic explanation in psychology, namely the homunculus fallacy, remains a vital constraint on any serious mentalistic psychology. That is, a plausible theory of cognition must avoid the infinite regress triggered by the attempt to explain a cognitive capacity by tacitly positing an internal agent with that very capacity.18
Cognitive maps and syntactic structures. From the 1930s and 1940s onward, a growing perception of the limits of the S(timulus)-R(esponse) explanation made behaviorism evolve toward what would become, in the 1960s, cognitive psychology.
A landmark in this evolution is the classic series of rat experiments in Edward C. Tolman’s Berkeley laboratory. These experiments demonstrated that the maze-navigation behavior of rats could not be explained in terms of S-R mechanisms, leading Tolman to suggest that the animals were building up complex representational states, or “cognitive maps”, which helped them locate reinforcers.19 These results pointed in the same direction as Kenneth Craik’s suggestion that the mind does not work directly on reality, but rather on “small-scale models” of it.20 Some ingenious attempts were made to refine the S-R schema so as to account for Tolman’s experimental results without his troublesome mentalistic concessions.21 However, such a schema turned out to be totally powerless when the focus shifted from maze-navigation behavior in rats to verbal behavior in human beings. Thus, it is hardly surprising that one of the main factors in the transition from behaviorism to cognitivism was the impetuous development, from the late 1950s onward, of a mentalistic theory of language, namely Noam Chomsky’s generative linguistics.22

In the course of his trenchant criticism of empiricist theories of language learning, Chomsky put forward an argument that would become one of the tools of the cognitivist trade: the poverty of the stimulus argument.23 Let us examine the input and the output of the process of first-language acquisition. A large amount of empirical evidence attests to a gap between the learning target achieved by the child (its mature linguistic competence) and the “primary linguistic data” (the child’s observations of utterances produced by adult members of its speech community). In other words, the output contains more information than was present in the input. This extra information can be nothing but a contribution made by the human learner, that is, innate knowledge of certain facts about universal constraints on possible human languages (the so-called “Universal Grammar”).

1.3 Inside the black box: The vicissitudes of information
Biological information processing. The scope of the Chomskian argument goes far beyond the case of language acquisition. Indeed, it is not an overstatement to claim that “Modern Cognitivism starts with the use of poverty of the stimulus arguments”.24 If it turns out that there is more information in the response than there is in the stimulus that prompts the response, we must assume the intervention of some kind of inner processing of the stimulus. This work that the organism does is an unobservable cause that the cognitivist infers from behavior. And this is epistemologically correct, since postulating unobservables such as electrons and genes is standard practice in science. Therefore, cognitive psychology can be defined as the science that investigates the processing of information in the head, that is, “all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used”.25 Instead of the behaviorist “empty organism”, cognitivists reintroduce the mind, construed as an “information processor” intervening between the impingements on sensory organs and the behavioral response.26
The input information is encoded in the mind, thus becoming inner objects—mental representations—that can undergo various types of processing. In particular, these objects can be transformed, which means that our representation of reality is not the product of a passive assimilation of the physical environment, but an active construction that can involve both reduction and integration. Biological information processing is capacity-limited and hence necessarily selective. We can attend to a relatively small number of stimuli, and a still smaller number of them can be recalled. Hence it is possible that part of the input information gets lost, in which case a reduction takes place. Alternatively, sensory input may be integrated and enriched, and it is in such cases that some well-known poverty of the stimulus arguments concerning perception and memory make their appearance. Perceptual constancies are a case in point. In the case of size constancy, for instance, the visual system takes account of the perceived distance of objects and scales perceived size up accordingly. Therefore, in this case as in that of language acquisition, there is more information in the perceptual response than there is in the proximal stimulus, and this extra information can be nothing but a contribution made by the perceiving organism.

Perceptual integration had attracted psychologists’ attention well before the rise of cognitivism. Most notably, Hermann von Helmholtz considered perceptual processes to be unconscious inferences, which take specifications of proximal stimulations as premises and yield hypotheses about their distal causes as conclusions. This constructive conception of perception has been named the “Establishment View”,27 and, indeed, most of the work on vision that cognitive scientists have done since the 1970s rests on this approach. During this period, however, constructivism did not go unchallenged. The advocates of J.J. Gibson’s ecological optics have contended that “the visual system, far from reconstructing or inferring, merely extracts, picks out, the information present in the stimulation, ‘attuning itself’ to the relevant information structures”.28 And we shall see below (subsection 3.2) that Gibson is the main source of inspiration for a recent theory of cognition known as “active externalism”.
Computational functionalism. According to a largely dominant interpretation, the processes of transformation, storage, retrieval and use of information are computations, namely rule-governed sequences of operations on data structures (mental representations), which mediate the organism’s behavioral responses to perceptual stimuli. The notion of computation presupposed here goes back to Alan Turing’s work. His “Turing machines” are abstract computers, both because their characterization does not take into account constraints that are essential in designing a real computer (e.g., memory space and computing time) and, above all, because they are defined without any reference to their physical makeup (i.e., the type of hardware that realizes them). In fact, Turing machine states are fully definable in terms of (i) the machine’s input, (ii) the output of the machine given its state and that input, and (iii) the next state of the machine given the current state. That is, the states are
functionally defined, since all that matters to what they are is what the machine does, rather than its physical realization. Now, if cognitive processes are computations, they too must be functionally individuated, that is, individuated by the causal role (or function) they play in the cognitive system of which they are a part, independently of how such a role is physically (or, better, neurologically) realized. This thesis about the essence of cognition is known as “computational functionalism”. Insofar as cognitive psychology subscribes to computational functionalism, it contributes to cognitive science, namely the project of the interdisciplinary study of natural and artificial intelligence that began its maturation in the late 1950s and reached a stable intellectual and institutional set-up in the early 1980s.29

One point is worth emphasizing. Cognitive science is the study of cognition as information processing by a natural or artificial computer, but research in cognitive science is typically about a specific type of computer: for instance, computational psychology investigates the biological computer, whereas artificial intelligence (AI) explores the artificial one. Therefore, cognitive science is not a discipline, but rather a “doctrine” that has oriented and continues to orient inquiries in a number of disciplines30—some descriptive and empirical (e.g., cognitive psychology, linguistics and, more recently, neuroscience), some speculative and foundational (e.g., philosophy), and some both speculative and applied (e.g., AI).31
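The functional character of this definition can be made concrete with a minimal sketch in Python (our own illustration, not drawn from Turing or from this chapter; the machine, its state names and its bit-flipping task are all assumed for the example). The machine is exhausted by its transition table: each state is individuated solely by what is read, what is written, which head move follows, and which state comes next, with nothing said about the hardware that realizes it.

```python
# A minimal Turing machine sketch (an assumed toy example, not from the text).
# The transition table fully defines the machine: each state is individuated
# only by what the machine reads, writes, and does next -- never by any
# physical property of whatever realizes it.
from collections import defaultdict

def run_turing_machine(transitions, tape, start_state, halt_state):
    """transitions: (state, symbol) -> (symbol_to_write, head_move, next_state),
    with head_move being -1 (left), +1 (right) or 0 (stay)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    head, state = 0, start_state
    while state != halt_state:
        write, move, state = transitions[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Hypothetical two-state machine: invert each bit, halt on the first blank.
flipper = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}

print(run_turing_machine(flipper, "0110", "scan", "halt"))  # prints "1001"
```

Any physical system whatsoever that respects this table, whether silicon, clockwork or neural tissue, counts as the same machine; that is the sense in which its states are functionally rather than physically individuated.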
David Marr’s tripartite model of explanation. Computational functionalism underlies Marr’s deeply influential analysis of how different levels of explanation can be integrated in order to understand a cognitive phenomenon.32 This analysis can be regarded as “the first full-blown version of computationalism”.33 After attempting to elucidate how the brain performs cognitive tasks by starting with the response patterns of individual neurons (e.g., Hubel and Wiesel’s “on-centered” and “off-centered” cells), Marr realized that discovering such patterns yields only a description of what is happening in the brain, not an explanation of how it discharges its tasks. Consequently, he concluded that a computational account of a cognitive phenomenon needs to integrate the level of analysis of the “wetware” with two other levels of analysis. At the most abstract level of explanation is the “computational theory”, where we specify what a system is doing and why. In Marr’s theory of vision, for example, the function of the visual system is to construct, on the basis of inputs to the photoreceptors, a 3-D object-centered shape representation. At this level, psychological functions are characterized only in terms of their input data, the final output, and the goal of the computation, in ways that are neutral about the mechanism. Between the computational theory level and the level of “implementation” (as Marr terms the level of analysis of the wetware) is the “algorithmic” level. This level—which is the one specific to psychology—concerns the cognitive mechanism (the algorithm) that performs the function described at the level of the computational theory. For example, Marr outlines at the algorithmic level the intermediate representations between the retinal image and the final output (the primal sketch and the 2½-D sketch), and suggests some of the subsystems that compute them.34
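The division of labor among Marr’s levels can be illustrated with a deliberately humble example (ours, not Marr’s, whose own case was vision; the sorting task below is assumed purely for illustration). A single computational-level specification, the input-output relation “return an ordered rearrangement of the input”, is realized by two different algorithmic-level procedures, while the implementation level is abstracted away entirely.

```python
# Marr's three levels in miniature (an assumed toy example, not Marr's own).
# Computational level: WHAT is computed -- a function specified purely by its
# input-output relation, neutral about the mechanism.
def satisfies_spec(xs, ys):
    """Spec: the output is an ordered rearrangement of the input."""
    return sorted(xs) == list(ys)

# Algorithmic level: HOW it is computed -- two distinct procedures that
# realize the very same computational-level function.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# Implementation level: not represented here at all -- which is the point.
data = [3, 1, 4, 1, 5]
assert satisfies_spec(data, insertion_sort(data))
assert satisfies_spec(data, merge_sort(data))
```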
2. FOLK PSYCHOLOGY AND COMPUTATIONAL PSYCHOLOGY

2.1 Folk psychological computationalism
Mind as a syntax-driven machine. What kind of relation is there between the computational states and processes postulated at the algorithmic level and the folk psychological mental states and processes? According to Fodor, the relation is one of legitimation or grounding for our folk psychological explanatory practice: “One can say in a phrase what it is that computational psychology has been proving so successful at: viz. the vindication of generalizations about propositional attitudes, specifically, of the more or less commonsense sorts of generalizations about propositional attitudes”. Therefore, “[w]hat a computational theory does is to make clear the mechanism of intentional causation; to show how it is (nomologically) possible that purely computational—indeed, purely physical—systems should act out of their beliefs and desires”.35

There are two dimensions to the problem of making clear “the mechanism of intentional causation”, of showing how it is possible that purely physical systems should act out of their propositional attitudes. The first problem concerns the nature of intentional mental states. They are both semantically evaluable and causally efficacious, two properties that apparently never occur together elsewhere. This putative uniqueness has fed many doubts about the prospects of a physicalist explication of intentional states. For many philosophers they still remain, in Quine’s famous phrase, “creatures of darkness”.36 Actually, there is something else that is both semantically evaluable and causally efficacious: symbols. They can be about things (e.g., the word “cat” refers to cats); and they are physically instantiated, or tokened, which makes them causally efficacious (the word “cat” consists of, e.g., ink on paper). Hence there is an analogy between thoughts and symbols, and “the history of philosophical and psychological theorizing about the mind consists largely of attempts to exploit it by deriving the causal/semantic properties of the former from the causal/semantic properties of the latter”.37 Fodor’s Representational Theory of Mind (RTM) is the most recent heir to this tradition, claiming that intentional states are relations between an agent and mental representations regarded as symbols of a Language of Thought (LoT). This is a formal language akin to the first-order predicate calculus.

The second problem concerns the mechanics of thinking over time. The folk psychological laws that govern intentional mental processes subsume causal interactions among intentional states which preserve their semantic coherence. For example, reasoning (the mental process par excellence) is a causal sequence of intentional states that tends to preserve their semantic (rational, epistemic) properties. But what, if not an inner interpreter, might be sensitive to such properties? Here RTM is at risk of the above-mentioned homunculus fallacy. Accordingly, a mechanical explanation of rationality—that is, the proof that a purely physical mechanism can implement causal interactions among intentional states preserving
their semantic coherence—needs a strategy to prevent the regress of inner interpreters. This strategy, Fodor suggests, consists in combining RTM with the Computational Theory of Mind (CTM), namely the hypothesis that intentional mental processes are causal sequences of symbol transformations driven by rules that are sensitive to the syntactic form of the symbols and not to their content.

At the foundations of CTM are the methods of proof theory and Turing machines.38 The proof-theoretic approach in logic has shown us how to link semantics to syntax. For any formalizable system of symbols, it is possible to specify a set of formal derivation rules which, albeit sensitive only to the syntactic form of symbols, allow us to make all and only the semantically valid inferences. In this way, certain semantic relations between symbols are “mimicked” by purely syntactic ones. The relevance of this result cannot be overstated. According to Fodor and Pylyshyn, “classical” cognitive science can be described as “an extended attempt to apply the methods of proof theory to the modeling of thought (and, similarly, of whatever other mental processes are plausibly viewed as involving inferences; preeminently learning and perception)”.39 Accordingly, the hope is that “syntactic analogues can be constructed for non-demonstrative inferences (or informal, commonsense reasoning) in something like the way that proof theory has provided syntactic analogues for validity”.40

Formalization suggests a strategy for bridging the gap between semantics and causal efficacy that blocks the mechanization of the semantic coherence of thought: given the connection that formalization makes between semantics and syntax, if a link were also set up between syntax and causal efficacy, then it would be possible to connect semantics with causation via syntax. Here is where Turing’s theory of computability comes into play. Any formalizable process can be characterized in terms of effectively computable functions (i.e., functions for which an algorithm can be given). As stated by the “Church-Turing thesis”, all effectively computable functions can be carried out by a Turing machine (assuming that both the tape and the available time are unbounded). Since any Turing machine can be implemented by a physical mechanism (e.g., a digital computer), it follows that, for any finite formal system, it is possible to devise a machine which is able to automate the inferences of that system. Because certain of the semantic relations among the symbols in a formal system can be “mimicked” by their syntactic relations, and because such a system can be implemented by a computer, it follows that it is possible to construct a machine driven by syntax whose state transitions satisfy semantic criteria of coherence. Because digital computers are purely physical systems, this shows us that it is possible for a purely physical system to make inferences which respect the semantics of the symbols without invoking a question-begging homunculus.41

According to the Representational and Computational Theory of Mind (RCTM), the mind is a particular kind of computer, and the causal interactions among intentional states are implemented by computations on the syntactic
properties of LoT symbols, which are physically tokened in the brain like data structures in a computer. LoT is a formal system, and hence its rules preserve the semantic properties of the symbols. Minds are, in Dennett’s oft-cited phrase, “syntactic engines that can mimic the competence of semantic engines”.42 In RCTM the propositional attitude relations of RTM are identified with the computational relations of CTM. Each propositional attitude is identified with a characteristic computational/functional role played by the LoT sentence that is the content of that kind of attitude. For example, a LoT sentence p might be the content of a belief since “it is characteristically the output of perceptual systems and input to an inferential system that interacts decision-theoretically with desires to produce further sentences or action commands”.43 Or, equivalently, to believe that p is for p to be available to one set of computations, whereas to desire, to regret, or to hope that p is for p to be available to other sets of computations.
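The idea of a syntactic engine that mimics a semantic engine can be conveyed by a toy sketch (our illustration, built on an assumed mini-language of tuples; it is not Fodor’s formalism and is vastly simpler than a LoT). The engine applies a single derivation rule, modus ponens, by bare pattern matching on the shape of its sentences; it never consults what the symbols mean, yet every sentence it derives is a semantically valid consequence of the initial “belief box”.

```python
# A toy syntactic engine (an assumed illustration, not a formalism from the
# text). Sentences are atoms (strings) or conditionals ("if", p, q). The one
# rule, modus ponens, fires on syntactic shape alone; the engine knows
# nothing of meanings, yet its outputs respect the semantics.

def modus_ponens(s1, s2):
    """From ("if", p, q) and p, derive q -- by matching shapes, not meanings."""
    if isinstance(s1, tuple) and len(s1) == 3 and s1[0] == "if" and s1[1] == s2:
        return s1[2]
    return None

def close_under_rule(belief_box):
    """Keep applying the rule until no new sentence can be derived."""
    derived = set(belief_box)
    changed = True
    while changed:
        changed = False
        for a in list(derived):
            for b in list(derived):
                c = modus_ponens(a, b)
                if c is not None and c not in derived:
                    derived.add(c)
                    changed = True
    return derived

beliefs = {("if", "rain", "wet-streets"), ("if", "wet-streets", "slippery"), "rain"}
print(close_under_rule(beliefs))
# Derives "wet-streets" and then "slippery": state transitions driven purely
# by syntax whose results nevertheless satisfy semantic criteria of coherence.
```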
Cognitive psychology as anti-phenomenology. Fodorean mentalism is not introspectionist mentalism in a new guise. As we have seen, the mind that Fodor takes as the subject of cognitive psychology is not introspective consciousness, but a kind of formalization of the psychology of propositional attitudes. Propositional attitude states can occur both in explicit, conscious judgments and in mental states that the agent could not possibly introspect, even in principle. This presupposes that consciousness and intentionality can be studied in the absence of one another, an approach to mentality that would not have been possible in the pre-Freudian conceptual universe, where consciousness and intentionality were intrinsically linked. However, as Fodor reminds us, “Freud changed all that. He made it seem plausible that explaining behavior might require the postulation of intentional but unconscious states. Over the last century, and most especially in Chomskian linguistics and in cognitive psychology, Freud’s idea appears to have been amply vindicated”.44

Actually, on this matter one can be more radical than Fodor, claiming that cognitive psychology has not simply vindicated Freud, but has gone far beyond him. In fact, the Freudian concept of the unconscious is parasitic on a concept of consciousness idealistically taken as “a primary quality of the mind”,45 whereas cognitive psychology has given rise to “a reinforcing overturning of traditional psychodynamic questions”, and starts by asking how consciousness, rather than the unconscious, is possible.46 In this way, cognitive psychology amends Freud in view of Darwin. That is, it follows Darwin’s anti-idealistic methodological lesson and proceeds bottom-up, attempting to explain how the complex psychological functions underlying first-person awareness evolve from more basic ones.47 This attempt appeals not to our introspective self-knowledge, but to all those disciplines—first and foremost developmental psychology—that investigate the gradual construction of self-awareness. In other words, cognitive psychologists see conscious subjective experience as “an advanced or derived mental phenomenon, not the foundation of all intentionality, all mentality”;48 or, in more Continental terms, cognitive psychology is an anti-phenomenology, that is, a critique of the subject, of its alleged “givenness”.49
In the next section we shall see how cognitive psychology has directed its critical potential not only against our phenomenological intuitions about consciousness and self-consciousness, but also against its own intentional grounds, thus opening the door to new behavioristic and eliminativistic objections.

2.2 Behavioristic and eliminativistic challenges to RCTM
Anti-introspectionism, externalism, and the syntactic theory of mind. The compatibilist view of the interface between propositional attitude psychology and scientific psychology takes the former as a good working hypothesis about the overall computational organization of the human mind. Noteworthy work in cognitive science has assumed that the folk account of the architecture of the mind is largely correct, though far from complete. However, there are also findings and theories which seem to suggest that our cognitive system is organized along lines quite different from those theorized by folk psychology.

Here is a classic example. In 1977, after reviewing the experimental social psychology literature on dissonance and self-attribution, Richard E. Nisbett and Timothy D. Wilson concluded that our reports about the causes of our behavior are not reconstructions of real mental states and processes, due to a direct introspective awareness, but rather a “confabulatory” activity originating in the employment of “a priori causal theories”.50 In this perspective, introspection becomes a form of self-deception.51 These ideas have been hugely influential. In developmental psychology and cognitive psychiatry, the hypothesis that behind the illusion of direct introspective access there is an inferential activity based on socially shared explanatory theories has been developed within the framework of the theory-theory approach to the inquiry into the cognitive mechanisms underlying mindreading. Here “theory” refers to a tacit knowledge structure, a body of mentally represented information driving the cognitive machinery underlying mentalization.52 For most advocates of this approach, this theory underlies both self-attribution and hetero-attribution of mental states. Therefore, “even though we seem to perceive our own mental states directly, this direct perception is an illusion. In fact, our knowledge of ourselves, like our knowledge of others, is the result of a theory”.53

Neuropsychology is another research area that abounds with phenomena undermining the reliability of introspective consciousness. Consider, for example, the “split-brain” syndrome.54 Split-brain patients are patients whose corpus callosum has been severed. As a result, the hemispheres of their brains can no longer communicate with one another, giving rise to a complex array of deficits. Suppose, for example, that the command “Walk” is flashed to the right hemisphere of a split-brain subject: “the patient typically stands up from the chair and begins to take leave from the testing van. When asked where she is going, she (the left side of the brain) says, ‘I’m going into the house to get a Coke’”.55 A possible explanation of this pattern of behavior is that the right hemisphere responds to the command by making inferences that the subject cannot introspect or report, whereas the left hemisphere
“interprets” the right hemisphere’s response and tells an implausible story unconnected with the command. We find a very similar hypothesis about the cognitive mechanisms underlying confabulation in Wilson (1985). He hypothesizes two relatively independent cognitive systems: an unconscious system underlying nonverbal behavior, and a largely conscious system, whose function is to attempt to verbalize, explain and communicate what is occurring in the unconscious system. The latter takes information from the former as input and makes inferences based on repertories of rationalizations afforded by theories about the self and the situation. Reflecting on Wilson’s hypothesis, Stich has highlighted its critical potential against the folk concept of belief. A fundamental tenet of folk psychology is that our cognitive system is so organized that the very same state which underlies the sincere assertion of “p” also may lead to a variety of nonverbal behaviors. But from Wilson’s dual system hypothesis follows that this principle is radically wrong, and “in those cases when the verbal subsystem leads us to say ‘p’ and our nonverbal subsystem leads us to behave as though we believed some incompatible proposition, there will simply be no saying which we believe”.56 Therefore, Stich concludes, Wilson’s model shows that the tenability of the folk conception of mental architecture, the legitimacy of taking it as the ground on which to build a scientific theory of the mind, “is very much an open empirical question”.57 Stich (1983) combines these doubts about the sorts of states and mechanisms that folk psychology invokes with another line of eliminativist argumentation, focused on folk psychology’s reliance on semantic content. Earlier we saw that Fodor’s argument for a scientific intentional psychology rests on a “correlation thesis”, according to which differences in content are mirrored by differences in syntax.58 It is thanks to this correlation that the semantic properties of mental states are causally implicated in the production of behavior. The thesis, however, seems to be false: the well-known Putnam’s and Burge’s arguments for semantic externalism seem to demonstrate that the ordinary semantic properties (“wide content” properties) of mental states do not supervene on their formal properties.59 Hence Fodor cannot “have it both ways”: he cannot endorse both an individualistic methodology (i.e. cognitive psychology should be restricted to quantifying over the formal properties of mental states) and the scientific intentional realism (i.e. the intentional properties of mental states, properties that are not formal, are and will be part of the ontology of the cognitive psychology). Assuming that scientific psychology must be individualistic, a way out from this impasse is to deny that intentional properties have any legitimate role in scientific psychological explanation. Stich’s “syntactic theory of mind” takes this eliminativist option, and argues that cognitive psychology should recast its theories and explanations in a way that does not appeal to the wide content properties of mental states, but only to their individualistic, formal properties.60 TP
PT
TP
TP
TP
PT
PT
PT
TP
PT
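A toy sketch may help fix the idea (entirely hypothetical; the state names, tokens, and "contents" below are invented for illustration and are not Stich's own formalism): the dynamics of a purely syntactic system are defined over uninterpreted tokens, while content enters only through external interpretation maps that the dynamics never consult.

    # A toy syntactic system (hypothetical illustration). The transition
    # rules mention only the formal identity of states and tokens, never
    # their content.
    transitions = {
        ("S1", "a"): "S2",
        ("S2", "b"): "S3",
    }

    def step(state, token):
        # Only syntactic shape matters to the dynamics.
        return transitions.get((state, token), state)

    # Content is assigned from outside, by interpretation maps the dynamics
    # never consult: two interpreters, two wide contents, one formal process.
    interpretation_1 = {"S3": "believes that water is wet"}
    interpretation_2 = {"S3": "believes that twin-water is wet"}

    final = step(step("S1", "a"), "b")
    print(interpretation_1[final])  # one observer's content ascription
    print(interpretation_2[final])  # another's; the syntax is unchanged

The very same transitions support either content ascription, which is one way of picturing the claim that wide content fails to supervene on, and plays no role in, the purely formal story.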
Externalism strikes again. RCTM is unquestionably the most powerful systematization of computational functionalism. It holds a pivotal position in contemporary philosophy of psychology because it was the first major synthesis of
functionalist philosophy of mind with the cognitive revolution in psychology and with the first generation of AI. Over the last two decades, however, this theory has come under attack, mostly owing to the expansion of cognitive science in two directions: "vertically into the brain and horizontally into the environment".61 The force propelling these downward and outward developments is the pressure put on the individualist, modular, computational and representational conception of the mind by the neurosciences, neoconnectionist cognitive modeling, dynamical approaches to cognition, artificial life, real-world robotics, and other research programs sometimes grouped under the heading of "non- or post-classical" cognitive science.

The current debate on the conceptual foundations of cognitive science displays a range of positions characterized by more or less radical attitudes towards the implications of this post-classical body of work. At one end of the spectrum is the claim that RCTM is "by far the best theory of cognition that we've got",62 and that the post-classical research programs are much ado about nothing. At the other end of the spectrum is a view of the post-classical body of research as an exercise in extraordinary science, the prelude to the establishment of a new paradigm.63 In between these two poles lies a "revisionist" perspective, which accepts some critical requirements of the post-classical research programs—first and foremost the deep dissatisfaction with the antibiologism and individualism of RCTM—and uses them as guidelines for reconstructing the conceptual bases of cognitive science.

Andy Clark is a leading advocate of revisionism. He believes that RCTM can be reconstructed by making due allowance for "the environmentally embedded, corporeally embodied, and neurally 'embrained' character of natural cognition",64 but without collapsing into the anti-representationalism characteristic of the most radical readings of post-classical cognitive science. Accordingly, Clark pursues the metamorphosis of RCTM into just one component in a three-tiered explanatory strategy:
(i) a dynamicist account of the gross behavior of the agent-environment system;65

(ii) a mechanistic account,66 describing how the components of the agent-environment system interact to produce the collective properties described in (i);

(iii) a representational and computational account of the components identified in (ii).67
Clark calls this tripartite explanatory strategy "minimal representationalism", and situates it within a wider theoretical framework: "active externalism".68 Unlike the above-mentioned semantic externalism, on which the mental contents of a subject depend on aspects of the environment that are clearly external to the subject's cognitive processes, active externalism asserts that the environment can play an active role in constituting and driving cognitive processes. In the wake of Gibson, this environment is viewed as a complex of "affordances", which leads to the
formation of internal states that describe partial aspects of the world and prescribe possible actions.69 These are "action-oriented" representations which, unlike LoT symbols, are personal (in that they are related to the agent's needs and skills), local (in that they relate to the circumstances currently surrounding the agent) and computationally cheap (compared with Marr's rich inner models of the visual scene).

Clark's active externalism confirms a point made earlier, namely the continuing relevance of some behavioristic metaphysical and methodological challenges. Indeed, insofar as "emphasis on the outward or behavioral aspects of thought or intelligence—and attendant de-emphasis of inward experiential or inner procedural aspects—is the hallmark of behaviorism",70 active externalism is behavioristic.
Eliminative connectionism. Clark's revision of RCTM follows the anti-individualistic guidelines that characterize the body of research on situated and embodied cognition. We now turn to another revision, one which reflects the movement downwards, into the brain, arising from connectionist cognitive modeling and computational neuroscience.

During the 1970s the functionalist approach inclined some scientifically minded philosophers to view computational psychology as radically autonomous from neuroscience. For example, in "Special Sciences" Fodor draws a principled argument for a very strong autonomy of psychology from a combination of functionalism, the multiple realizability thesis, and the token-identity theory.71 By the late 1970s, however, "some philosophers were objecting to the divorce of cognitive science from neuroscience, Paul M. and Patricia S. Churchland foremost amongst them. They tended to continue to endorse a version of the identity theory and to reject the language of thought hypothesis".72

The Churchlands' version of the type-identity theory stems from the attempt to use the resources of neoconnectionist cognitive modeling to develop a more biologically respectable form of computational functionalism. That is, they view artificial neural networks as neurally inspired computational systems, and hence endorse the functionalist idea that the explanation of a cognitive process disregards the fact that its medium is made of nervous tissue: "Neuronal details are no more essential to connectionist conceptions of cognition than vacuum-tube or transistor details are essential to the classical conception of cognition embodied in orthodox AI, Fodorean psychology, and [folk psychology] itself".73 What the Churchlands hold against classical computational functionalism (aka RCTM) is that it fails to distinguish the level of cerebral matter from the level of cerebral architecture. A functionalism that aspires to biological plausibility needs to treat our knowledge of the functional structure of the brain as a source of constraints on computational modeling. From this point of view, the strengths of artificial neural networks (their capacities for learning and self-organization, their flexibility, their robustness in the presence of perturbations, their capacity to deal with such low-level tasks as the processing of sensory inputs and motor outputs) depend on just those structural
features of computation (massive parallelism, as opposed to von Neumann-style sequential processing) which are inspired by how the brain works.74

According to the Churchlands, this gives rise to a deep difference between classical and connectionist computational functionalism. Taking as its paradigm of mentation those types of thinking that lend themselves to codification in formal models such as deductive logic, RCTM endorses a "linguistic-rationalist tradition" in the study of human cognition, which follows folk psychology and intentionalist philosophy of mind in taking agents to represent the world through sentence-like structures and to perform computations that mimic logical inferences.75 In contrast, connectionist computational functionalism is inspired by the functional organization of the brain, which "represents the world by means of very high-dimensional activation vectors, i.e. by a pattern of activation levels across a very large population of neurons" and "performs computations on those representations by effecting various complex vector-to-vector transformations from one neural population to another".76

The availability of a brain-like computational modeling that breaks with the "propositional kinematics" and "logical dynamics" of folk psychology leads the Churchlands to reverse Fodor's approach to the issue of the autonomy of psychology. Fodor claims that the irreducibility of psychological states and processes to neurobiological ones implies a radical autonomy of psychology from neuroscience. The Churchlands accept this claim, but only to draw a totally different implication from it. They think that we should give up a computational psychology that is irreducible because inextricably intertwined with folk psychology, and devote ourselves to developing a reducible successor. This is the process that Robert McCauley terms "co-evolution-S", namely "co-evolution producing the eliminations of theories characteristic of scientific revolutions".77 According to the Churchlands, co-evolution-S is the phase that computational psychology has been going through since the early 1980s, with the advent of connectionism. In fact, they claim that the intertheoretic difference between, on the one hand, connectionist representations as activation vectors and computations as vector-to-vector transformations and, on the other hand, classical sentence-like representations and logical computations, is sufficiently great to prompt an ontologically radical theory change, which will bring about the total elimination of folk psychology. After the eliminative stage, the new neurally inspired psychology and neuroscience will co-evolve until they are unified by an approximate microreduction, in which lower-level theories preserve an equipotent image of upper-level theories without comprehensive mapping.78

On the Churchlands' view, therefore, the approximate microreduction of psychology to neuroscience is the pay-off of substituting subsymbolic distributed representations for LoT-style representations. But how plausible is this eliminative-reductive model of the co-evolution of psychology and neuroscience? An objection has been voiced by some advocates of a pluralistic view of the explanatory relations between psychology and neuroscience. "Explanatory pluralism" is a position in the philosophy of science holding that "theories at different levels of description, like psychology and neuroscience, can co-evolve, and
mutually influence each other, without the higher-level theory being replaced by, or reduced to, the lower-level one".79 From this point of view, the most serious shortcoming of the Churchlands' model is its unidirectionality: since it gives priority to the neuroscientific level, when the theories of psychology and neuroscience fail to map onto one another neatly, the blame falls exclusively on psychology.80 However, the pluralists contend, at least some cases of co-evolution display bidirectionality; that is, psychology and neuroscience mutually influence each other without reduction of the higher-level theory to the lower-level one. To account for this bidirectionality we must adopt a more pragmatic conception of co-evolutionary dynamics: a co-evolution in the perspective of explanatory pluralism.81

Explanatory pluralism seems to fit very well with computational neuroscience.82 This is in fact a "bridge" discipline between psychology and neuroscience which, on the one hand, puts bottom-up constraints on computational modeling and, on the other, extends some principles of computational modeling to neuroscientific research, thus promoting the integration of neuroscientific theoretical constructs into computational psychology.83
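Before turning to the conclusion, the Churchlands' contrast between sentence-like and vector-based representation, quoted earlier in this section, can be made concrete with a minimal sketch (purely illustrative; the population sizes and weights are arbitrary, and NumPy is assumed):

    import numpy as np

    rng = np.random.default_rng(0)

    # A "representation": a pattern of activation levels across a
    # population of eight model neurons -- no sentence-like structure.
    activation = rng.random(8)

    # A "computation": a vector-to-vector transformation from one neural
    # population to another, effected by connection weights and a
    # squashing nonlinearity rather than by rules of logical inference.
    weights = rng.normal(size=(5, 8))
    downstream = 1.0 / (1.0 + np.exp(-(weights @ activation)))  # sigmoid

    print(downstream)  # the downstream population's activation vector

Nothing in such a transformation answers to the "propositional kinematics" of folk psychology, which is precisely what motivates the Churchlands' eliminativist reading of these models.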
3. CONCLUSION

The tension between compatibilism and eliminativism has been the dialectical motor of the development of scientific psychology in the twentieth century. On the one hand, the rise of cognitive psychology resulted from the repudiation of the eliminativist claims of behaviorism in favor of a compatibilist project that has produced forms of mentalism radically different from the introspectionist mentalism characteristic of the beginnings of scientific psychology. On the other hand, the new mentalistic psychology has lived in a precarious balance, constantly at risk of collapse under the pressure of ever new behavioristic and eliminativistic challenges.

This dialectic is inescapable. Self-criticism is constitutive of a science that rests on so fragile a theoretical base as our folk psychological intuitions about the mental. We have seen that even Fodor, the champion of compatibilism, radically restricts the scope of his defense of folk psychology. His scientific intentional realism is the hypothesis that whatever kinds of states are postulated by a mature scientific psychology, they must, like the propositional attitudes, be semantically evaluable, logically structured, and causally efficacious. It is no trouble for Fodor to admit that many specific posits of the folk-psychological conceptual scheme ("perhaps even 'belief' and 'desire'"84) might turn out to be theoretically inadequate. On the other hand, we cannot stray too far from folk psychological intuitions on pain of losing the very concept of mind. Accordingly, information-processing psychology is required to accomplish the very arduous task of negotiating a "reflective equilibrium" not only with the bottom-up constraints from neuroscience (as required by the above-mentioned explanatory pluralism), but also
with the top-down constraints from philosophical theorizing on our folk psychological conceptual scheme.85
NOTES

1. Philosophy of neuroscience: Bechtel et al. (2001); Bickle and Mandik (2002); Bickle (2003). Philosophy of psychology: Hatfield (1995); Block and Segal (1998); Botterill and Carruthers (1999); Bermúdez (2005); Mason, Sripada and Stich (forthcoming); Wilson (2005b). Philosophy of cognitive science: Clark (2001); Grush (2002); Davies (2005).
2. See this volume, chapters 20-22.
3. The idea that folk psychology is a theory can be differently construed depending on whether we adopt a personal or a subpersonal perspective (see Stich and Ravenscroft 1994). At the personal level, folk psychology is a theory of mind implicit in our everyday talk about mental states (see Lewis 1972). At the subpersonal level, folk psychology can be defined as a "theory" in the sense that it is a tacit knowledge structure, a body of internally represented information which guides the cognitive mechanisms underlying mindreading. In this perspective, the theory implicit in our everyday talk about the mind is likely to be "an articulation of that fragment of [the subpersonal folk psychological theory] which is available to conscious reflection" (Ravenscroft 2004, Concluding Remarks).
4. Block (1995) draws a distinction between "phenomenal consciousness" and "access consciousness". A mental state is access conscious if its content is available for use in various information-processing processes, like inference, verbalization and action planning. See this volume, pp. 190 ff.
5. Brentano ([1874] 1973, pp. 88-89).
6. It follows that in attributing a true (or false) belief to an agent we build a metarepresentation that represents his/her true (or false) representation. See this volume, chapter 22 passim.
7. Heider and Simmel (1944).
8. Heider (1958, p. 5).
9. See Jervis (1993, p. 53, n. 12). See also this volume, pp. 13-14, 155-157, 160-164, 172-179.
10. This is Stich (1983). Stich (1996, chapter 1) has "deconstructed" his former eliminativism.
11. Fodor, Bever and Garrett (1974, p. xi).
12. Quite appropriately, Hatfield criticizes "the conventional story of psychology's novel founding ca. 1879" (2002, p. 213), and argues that the new experimental psychology was the outcome of the gradual transformation of "a previous, natural philosophical psychology" (p. 209).
13. Jervis, this volume, p. 147.
14. Stich (1983, p. 1).
15. Stich (1996, 1999).
16. On eliminative behaviorism, see Byrne (1994) and Rey (1997, chapter 4). Hatfield (2002, pp. 215-217) convincingly argues, against the received view, that logical behaviorism did not exert a substantive influence on neobehaviorism.
17. Hauser (2005).
18. This point is emphasized by Dennett (1978, pp. 58 ff.), Sterelny (1990, p. 33), and Wilson (1999, p. xix).
19. Tolman (1948).
20. Craik (1943). Craik's theory already appeals to "computation", albeit only in an informal sense. See this volume, pp. 44-45.
21. See, e.g., Hull (1943).
22. See this volume, pp. 132-135.
23. See this volume, p. 307, n. 10.
24. Fodor (1990, p. 197).
25. Neisser (1967, p. 1).
26. "Empty organism" is the term used by E.G. Boring to characterize Skinner's position (quoted in Newell and Simon 1972, p. 875).
27. Fodor and Pylyshyn (1981).
28. Paternoster, this volume, p. 55.
29. See Bechtel, Abrahamsen and Graham (1998); Nadel and Piattelli Palmarini (2002).
30. See Block (1983, p. 521) and Marconi (2001, p. 18). Harnish (2002) opposes this "narrow" conception of cognitive science to a "broad" one.
31. See Bogdan (1993).
32. Marr (1982).
33. Cordeschi and Frixione, this volume, p. 39.
34. On Marr's theory of vision, see this volume, pp. 55-56.
35. Fodor (1985, p. 422, emphasis in original). Bermúdez (2000) glosses this passage by making a distinction between two different types of psychological explanation. The explanations of folk psychology are "horizontal" (they explain a particular event or state in terms of antecedent states and events). They are "strategic and predictive", allowing us "to navigate the social world" (Bermúdez 2005, p. 33). By contrast, the explanations of computational psychology are "vertical": they aim to provide "legitimation" or "grounding" for our folk-psychological horizontal explanatory practice (p. 36). Bermúdez makes clear that the latter are the explanations "extensively studied by philosophers of science, who tend to use the vocabulary of reduction (which, in my terms, is simply one type of vertical explanation)" (p. 336).
36. Quine ([1956] 1966).
37. Fodor (1994a, p. 295).
38. See Aydede (2004) and Horst (1996, 1999, 2005), to which the present subsection is indebted.
39. Fodor and Pylyshyn ([1988] 1995, pp. 112-113).
40. Ibid., p. 113. See Horst (2005, subsection 1.1).
41. See Horst (1999, p. 170; 2005, subsection 2.1); Aydede (2004, subsection 5.2).
42. Dennett (1998, p. 335).
43. Aydede (2004, section 1).
44. Fodor (1991, p. 12). See also this volume, chapter 16, section 3.
45. Jervis, this volume, p. 152.
46. Jervis (1993, p. 301).
47. Cf. Jervis: "By taking a methodical 'bottom up' approach, [scientific psychology] examines how our most basic psychological mechanisms (akin to the learning processes in relatively simple organisms) can be gradually revealed and provide us with the information we need to understand and identify ourselves as thinking, conscious beings" (this volume, p. 152). See also Meini and Paternoster's "bottom-up" approach to concepts, this volume, chapter 8.
48. Dennett (1993, p. 193). Objections to this bottom-up approach to consciousness have been raised by those philosophers who think that the only legitimate sense of consciousness is phenomenal consciousness and anachronistically restore the classic primacy of first-person phenomenology (see, e.g., Searle 1992). Providentially, however, two much more attractive options are available: (i) it is possible to argue that the only legitimate sense of consciousness is access consciousness (see, e.g., Dennett 1991—see also this volume, chapter 17, section 6); (ii) it is possible to argue that phenomenal consciousness must be explicated in causal, functional, or representational (i.e. "access-related") terms (see, e.g., this volume, chapter 14).
49. Paul Ricoeur characterizes Freudian psychoanalysis as "une anti-phénoménologie, qui exige, non la réduction à la conscience, mais la réduction de la conscience" ("an anti-phenomenology, which demands not the reduction to consciousness, but the reduction of consciousness") (1969, p. 137). However, as we have just seen, Freud's inquiry into the unconscious starts from a consciousness taken as given. As Jervis notes, this makes psychoanalysis "a dialectical variant of phenomenology" (1993, p. 320, n. 15). In contrast, cognitive psychology can quite rightly be regarded as an anti-phenomenology.
50. Nisbett and Wilson (1977, p. 233).
51. Jervis (this volume, pp. 149-150) sees in the emphasis on self-deception the "strength" of the Freudian concept of the unconscious. Mele (this volume, chapter 12) defends a deflationary view of self-deception based on a recent theory of lay hypothesis testing.
52. See above, n. 3.
53. Gopnik and Meltzoff (1994, p. 168).
54. See this volume, pp. 207-209.
55. Gazzaniga et al. (1998, p. 544).
56. Stich (1983, p. 231). But see Rey (1988) for a "compatibilist" reply to this argument. Recently, Stich himself has radically downsized his anti-introspectionism in view of some work on first-person mindreading: "the kinds of mistakes that are made in [the experiments reported by Nisbett and Wilson] are typically not mistakes in detecting one's own mental states. Rather, the studies show that subjects make mistakes in reasoning about their own mental states" (Nichols and Stich 2003, p. 161).
57. Stich (1983, p. 230).
58. Ibid., p. 188.
59. See Putnam (1975) and Burge (1979).
60. See Stich (1983, chapter 8). Another option is Fodor's argument that scientific psychology should employ a notion of "narrow content", that is, a kind of content that supervenes on formal properties (Fodor 1987). Recently, however, Fodor (1994b) has changed his mind and abandoned narrow content (see Cain 2002, chapter 6).
61. Bechtel, Abrahamsen and Graham (1998, p. 77).
62. Fodor (2000, p. 1).
63. See the oft-cited van Gelder and Port (1995, pp. 2-4).
64. van Gelder (1999, p. 244).
65. See this volume, pp. 40 ff.
66. See this volume, pp. 27 ff.
67. See Clark (1997, p. 126).
68. See this volume, p. 87 and pp. 212 ff.
69. On Gibsonian affordances, see this volume, pp. 241 ff.
70. Hauser (2005, section 1.a.v).
71. Fodor (1975, chapter 1). See also Fodor (1997).
72. Bechtel, Abrahamsen and Graham (1998, p. 65). See also Bickle and Mandik (2002).
73. Churchland and Churchland (1996, p. 226).
74. See Marconi (2001, pp. 29-30).
75. See, e.g., P.M. Churchland (1981c).
76. P.M. Churchland (1998, p. 41). However, this view of the relationship between connectionism and propositional attitudes is controversial. E.g., Smolensky (1995) thinks that it is both justifiable and necessary to ascribe beliefs to certain connectionist systems. Horgan and Tienson (1996) argue that LoT-style representation is both necessary in general and realizable within connectionist architectures.
77. McCauley (1996, p. 26).
78. "Co-evolution-M" in ibid., p. 25.
79. de Jong (2001, p. 731).
80. McCauley (1996, p. 25).
81. "Co-evolution-P" in ibid., p. 27.
82. Cf. Churchland and Sejnowski: "The co-evolutionary advice regarding methodological efficiency is 'let many flowers bloom'" (quoted in McCauley 1996, p. 33). On computational neuroscience, see the classic Churchland and Sejnowski (1992) and the recent Eliasmith (2005).
83. Cf. Clark and Eliasmith: "It is precisely the complex relations between implementation and function that have spawned a recent surge of interest in computational neuroscience. With the explicit goal of taking biological constraints as seriously as computational ones, computational neuroscience has begun to explore a vast range of realistic neural models. [...] Such models should prove useful in providing constraints of their own. [...] So, not only does biology inform the construction of computational models but, ideally, those same models can help suggest important experiments for neuroscientists to perform" (2002, p. 887).
84. Loewer and Rey (1991, p. xiv).
85. In this volume, chapters 16-19 focus on the top-down constraints, whereas chapters 2 and 23 emphasize the bottom-up ones.
CHAPTER 2
COMPUTATIONAL EXPLANATION AND MECHANISTIC EXPLANATION OF MIND

Gualtiero Piccinini
[A] psychological theory that attributes to an organism a state or process that the organism has no physiological mechanisms capable of realizing is ipso facto incorrect. Jerry Fodor
1. COMPUTATIONAL EXPLANATION AND MENTAL CAPACITIES

When we explain the specific capacities of computing mechanisms, we appeal to the computations they perform. For example, calculators—unlike, say, air conditioners—have the peculiar capacity of performing multiplications: if we press appropriate buttons on a (well functioning) calculator in the appropriate order, the calculator yields an output that we interpret to be the product of the numbers represented by the input data. Our most immediate explanation for this capacity is that, under the relevant conditions, calculators perform an appropriate computation—a multiplication—on the input data. This is a paradigmatic example of computational explanation.

Animals, and especially human beings, respond to their environments in extraordinarily subtle, specialized, and adaptive ways. In explaining those capacities, we often appeal to mentalistic constructs such as perceptions, memories, intentions, etc. We also recognize that the mechanisms that underlie mental capacities are neural mechanisms—no brains, no minds.1 But it is difficult to connect mentalistic constructs to their neural realizers—to see how perceptions, memories, intentions, and the like could be realized by neural states and processes. In various forms, this problem has haunted the sciences of mind and brain since their origin.

In the mid-twentieth century, Warren McCulloch and others devised an ingenious solution: mental capacities are explained by computations realized in the brain.2 This is the computational theory of mind (CTM), which explains mental capacities more or less in the way we explain the capacities of computing mechanisms. There are many versions of CTM: "classical" versions, which tend to ignore the brain, and "connectionist" versions, which are ostensibly inspired by neural mechanisms. According to all of them, the brain is a computing mechanism,
and its capacities—including its mental capacities—are explained by its computations.

CTM has encountered resistance. Some neuroscientists are skeptical that the brain may be adequately characterized as a computing mechanism in the relevant sense.3 Some psychologists think computational explanation of mental capacities is inadequate.4 And some philosophers find it implausible that certain mental capacities—especially consciousness—may be explained by computation.5 Whether CTM can explain every aspect of every mental capacity remains controversial. But without a doubt, CTM is a compelling theory. Digital computers are more similar to minds than anything else known to us. Computers can process information, perform calculations and inferences, and exhibit a dazzling variety of capacities, including that of guiding sophisticated robots. Computers and minds are sufficiently analogous that CTM appeals to most of those who are searching for a mechanistic explanation of mind. As a consequence, CTM has become the mainstream explanatory framework in psychology, neuroscience, and naturalistically inclined philosophy of mind.6

In some quarters, computational explanation is now so entrenched that it seems commonsensical. More than half a century after CTM's introduction, it's all too easy—and all too common—to take for granted that mental capacities are explained by neural computations. A recent book, which purports to defend CTM, begins by asking "how the computational events that take place within the spatial boundaries of your brain can be accounted for by computer science".7 But if we presuppose that neural processes are computational before investigating, we turn CTM into dogma. If, instead, our theory is to be genuinely empirical and explanatory, it needs to be empirically testable. To bring empirical evidence to bear on CTM, we need an appropriate notion of computational explanation.

In order to ground an empirical theory of mind, as CTM was designed to be, a satisfactory notion of computational explanation should satisfy at least two requirements. First, it should employ a robust notion of computation, such that there is a fact of the matter as to which computations are performed by which systems. This might be called the robustness requirement. Second, it should not be empirically vacuous, as it would be if CTM could be established a priori. This might be called the non-vacuity requirement. This chapter explicates the notion of computational explanation so as to satisfy these requirements and briefly discusses whether it plausibly applies to the mechanistic explanation of mind.
2. COMPUTATIONAL EXPLANATION AND REPRESENTATIONS

According to a popular view, a computational explanation is one that postulates the existence of internal representations within a mechanism and the appropriate manipulation of representations by the mechanism. According to this view, which I call the semantic view of computational explanation, computations are individuated at least in part by their semantic properties. A weaker variant is that computations are processes defined over representations. The semantic view is stated more or less
explicitly in the writings of many supporters of CTM.8 The semantic view is appealing because it fits well both with our practice of treating the internal states of computing mechanisms as representations and with the representational character of those mentalistic constructs, such as perceptions and intentions, that are traditionally employed in explaining mental capacities. But the semantic view of computational explanation faces insuperable difficulties. Here I have room only for a few quick remarks; I have discussed the semantic view in detail elsewhere.9

For present purposes, we need to distinguish between what may be called essential representations and accidental ones. Essential representations are individuated, at least in part, by their content. In this sense, if two items represent different things, then they are different kinds of representation. For instance, at least in ordinary parlance, mentalistic constructs are typically individuated, at least in part, by their content: the concept of smoke is individuated by the fact that it represents smoke, while the concept of fire is individuated by the fact that it represents fire.10 Representations in this sense of the term have their content essentially: you can't change their content without changing what they are. By contrast, accidental representations are individuated independently of their content; they represent one thing or another (or nothing at all) depending on whether they are interpreted and how. Strings of letters of the English alphabet are representations of this kind: they are individuated by the letters that form them, regardless of what they mean or even whether they mean anything at all. For instance, the word "bello" means (roughly) war to speakers of Latin, beautiful to speakers of Italian, and nothing in particular to speakers of most other languages.11

The main problem with the semantic construal of computational explanation is that it requires essential representations, but all it has available is accidental ones. If we try to individuate computations by appealing to the semantic properties of accidental representations, we obtain an inadequate notion of computation. For the same accidental representation may represent different things to different interpreters. By the same token, a process that is individuated by reference to the semantic properties of accidental representations may be taken by different interpreters to compute different things—to constitute different computations—without changing anything in the process itself. Just as speakers of different languages can interpret the same string of letters in different ways, under the semantic view (plus the notion of accidental representation) different observers could look at the same activity of the same mechanism and interpret it as two different computations. But a process that changes identity simply by changing its observer is not the kind of process that can support scientific generalizations and explanations. Such a notion of computational explanation fails to satisfy the robustness requirement.

To obtain a genuinely explanatory notion of computation, the semantic view requires the first notion of representation—essential representation. In fact, those who have explicitly endorsed the semantic view have done so on the basis of the notion of essential representation.12 But there is no reason to believe that computational states, inputs, and outputs have their semantic properties essentially (i.e., that they are essential representations13). On the contrary, a careful look at how computational explanation
is deployed by computer scientists reveals that computations are individuated by the strings of symbols on which they are defined and by the operations performed on those symbols, regardless of which, if any, interpretation is applied to the strings. Psychologists and neuroscientists rarely distinguish between explanation that appeals to computation and explanation that appeals to representation; this has convinced many—especially philosophers of mind—that computational explanation of mind is essentially representational. But this is a mistake. Computational explanation appeals to inner computations, and computations are individuated independently of their semantic properties. Whether computational states represent anything, and what they represent, is an entirely separate matter.

The point is not that mental states are not representations, or that if they are representations, they must be accidental ones. The point is also not that representations play no explanatory role within a theory of mind and brain; they may well play such a role (see below). The point is simply that if mental or neural states are computational, they are not so in virtue of their semantic properties. For this reason among others, the semantic view of computational explanation is inadequate.

3. COMPUTATIONAL EXPLANATION AND COMPUTATIONAL MODELING

Another popular view is that a computational explanation is one that employs a computational model to describe the capacities of a system. I will refer to this as the modeling view of computational explanation. According to the modeling view, roughly speaking, anything that is described by a computation is also a computing mechanism that performs that computation. Although this view is not as popular as the semantic view, it is present in the works of many supporters of CTM.14 The modeling view is tempting because it appears to gain support from the widespread use of computational models in the sciences of mind and brain. Nevertheless, it is even less satisfactory than the semantic view.

The main difficulty with the modeling view is that it turns so many things into computing mechanisms that it fails the non-vacuity requirement. Paradigmatic computational explanations are used to explain peculiar capacities of peculiar mechanisms. We normally use them to explain what calculators and computers do, but not to explain the capacities of most other mechanisms around us. When we explain the capacities of air conditioners, lungs, and other physical systems, we employ many concepts, but we normally do not appeal to computations. This gives rise to the widespread intuition that among physical systems, only a few special ones are computing mechanisms in an interesting sense. This intuition is an important motivation behind CTM. The idea is that the mental capacities that organisms exhibit—as opposed to their capacities to breathe, digest, or circulate blood—may be explained by appeal to neural computations.

And yet, it is perfectly possible to build computational models of many physical processes, including respiration, digestion, and even galaxy formation. According to the modeling view, this is sufficient to turn lungs, stomachs, and
galaxies into computing mechanisms, in the same sense in which calculators are computing mechanisms and brains may or may not be. As a consequence, many things—perhaps all things—are turned into computing mechanisms. Some authors have accepted this consequence; they maintain that everything is a computing mechanism.15 As we have seen, this is counterintuitive, because it conflicts with our restricted use of computational explanation. Still, intuitions are often overridden by strong arguments. Why not accept that everything is a computing mechanism?

The problem is that if everything is a computing mechanism (in the relevant sense), computational descriptions lose their explanatory character to the point that CTM is trivialized. It is difficult to determine with any precision which classes of systems can be described computationally, and to what degree of accuracy.16 Nevertheless, it is obvious that everything, including many neural systems, can be described computationally with some accuracy. This fact (under the modeling view of computational explanation) establishes the truth of CTM a priori, without requiring empirical investigation. But CTM was intended to be an empirical theory, grounded on an empirical hypothesis about the kinds of mechanism that explain mental capacities. If the versatility of computational description is sufficient to establish that neural mechanisms perform computations, then this cannot be an empirical hypothesis about the explanation of mental capacities—it is merely the trivial application to brains of a general thesis that applies to everything. In other words, the modeling view of computational explanation renders CTM empirically vacuous.

A further complication facing the modeling view is that the same physical system may be given many computational descriptions that differ in nontrivial respects, for instance because they employ different computational formalisms, different assumptions about the system, or different amounts of computational resources. This makes the answer to the question of which computation is performed by a system indeterminate. In other words, not only is everything computational but, according to the modeling view, everything performs as many computations as it has computational descriptions. As a consequence, this notion of computational explanation fails the robustness requirement. The modeling view—even more than the semantic view—is inadequate for assessing CTM.

A variant of the modeling view is that a computational explanation is one that appeals to the generation of outputs on the grounds of inputs and internal states. But without some constraints on what counts as inputs and outputs of the appropriate kind, this variant faces the same problem. Every capacity and behavior of every system can be interpreted as the generation of an output from an input and internal states. Our question may be reformulated in the following way: which input-output processes, among the many exhibited by physical systems, deserve to be called computational in the relevant sense?
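The triviality worry is easy to dramatize with a few lines of code (a deliberately mundane sketch; the physical scenario and all parameter values are invented for illustration): a cooling cup of coffee is readily modeled computationally, yet on the modeling view this alone would suffice to make the coffee a computing mechanism.

    # A computational model of a plainly non-computational process:
    # Newtonian cooling, integrated with Euler steps. Values are arbitrary.
    temp, ambient, k, dt = 90.0, 20.0, 0.1, 1.0  # deg C, per-minute rate

    for _ in range(10):
        temp += -k * (temp - ambient) * dt  # dT/dt = -k (T - T_ambient)

    print(round(temp, 1))  # the model tracks the cup's temperature; on the
                           # modeling view, the cup thereby "computes" it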
4. MECHANISTIC EXPLANATION

To make progress on this question, we should pay closer attention to the explanatory strategies employed in physiology and engineering. For brains and computing
mechanisms are, respectively, biological and artificial mechanisms; it is plausible that they can be understood by the same strategies that have proven successful for other biological and artificial mechanisms. There is consensus that the capacities of biological and artificial mechanisms are to be explained mechanistically.17 Although different accounts of mechanistic explanation vary in their details, the following sketch is quite uncontroversial. A mechanistic explanation involves the partition of a mechanism into components, the assignment of functions or capacities to those components, and the identification of organizational relations between the components. For any capacity of a mechanism, a mechanistic explanation invokes appropriate functions of appropriate components of the mechanism, which, when appropriately organized under normal conditions, generate the capacity to be explained. The components' capacities to fulfill their functions may be explained by the same strategy, namely, in terms of the components' components, functions, and organization.

For example, the capacity of a car to run is mechanistically explained by the following: the car contains an engine, wheels, etc.; under normal conditions, the engine generates motive power, the power is transmitted to the wheels by appropriate components, and the wheels are connected to the rest of the car so as to carry it for the ride. Given that the capacities of mechanisms are explained mechanistically, it remains to be seen how mechanistic explanation relates to computational explanation.
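The tripartite schema of components, functions, and organization can be rendered as a skeletal sketch (purely illustrative of the car example above, with hypothetical class names, not a serious model):

    # Components with assigned functions...
    class Engine:
        def generate_power(self):
            return "motive power"

    class Drivetrain:
        def transmit(self, power):
            return power + ", delivered to the wheels"

    class Wheels:
        def carry(self, delivered):
            return "the car runs (" + delivered + ")"

    # ...and their organization: which component feeds which, under normal
    # conditions. The capacity of the whole arises from this arrangement.
    class Car:
        def __init__(self):
            self.engine, self.drivetrain, self.wheels = Engine(), Drivetrain(), Wheels()

        def run(self):
            return self.wheels.carry(self.drivetrain.transmit(self.engine.generate_power()))

    print(Car().run())

Each component's function could in turn be decomposed in the same style, mirroring the recursive character of mechanistic explanation.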
5. COMPUTATIONAL EXPLANATION, FUNCTIONAL EXPLANATION, AND MECHANISTIC EXPLANATION

Computational explanation is not always distinguished explicitly from mechanistic explanation. If we abstract away from some aspects of the components and their specific functions, a mechanistic explanation may be seen as explaining the capacities of a mechanism by appealing only to its internal states and processes plus its inputs, without concern for how the internal states and processes are physically implemented. This "abstract" version of mechanistic explanation is often called functional analysis or functional explanation.18 As we have seen, a variant of the modeling view construes computational explanation as explanation in terms of inputs and internal states—that is, as functional explanation. Under this construal, it becomes possible to identify these two explanatory strategies. In fact, many authors do not explicitly distinguish between computational explanation, functional explanation, and in some cases even mechanistic explanation.19

Identifying computational explanation with functional or mechanistic explanation may seem advantageous, because it appears to reconcile computational explanation with the well-established explanatory strategy that is in place in biology and engineering. To be sure, neuroscientists, psychologists, and computer scientists explain the capacities of brains and computers by appealing to internal states and processes. Nevertheless, a simple identification of computational explanation and functional explanation is based on an impoverished understanding of both
explanatory strategies. We have already seen above that computational explanation must be more than the appeal to inputs and internal states and processes, on pain of losing its specificity to a special class of mechanisms and trivializing CTM. For an explanation to be genuinely computational, some constraints need to be put on the nature of the inputs, outputs, and internal states and processes.

A related point applies to functional explanation. The strength of functional explanation derives from the possibility of appealing to different kinds of internal states and processes. These processes are as different as digestion, refrigeration, and illumination. Of course, we could abstract away from the differences between all these processes and lump them together under some generic notion of computational process. But then all functional explanations would look very much alike, explaining every capacity in terms of inner computations. Most of the explanatory force of functional explanations, which depends on the differences between processes like digestion, refrigeration, and illumination, would be lost. In other words, if functional explanation is the same as computational explanation, then every artifact and biological organ is a computing mechanism. The brain is a computing mechanism in the same sense in which a stomach, a freezer, and a light bulb are computing mechanisms. This is not only a counterintuitive result. As before, this result trivializes CTM, which was designed to invoke a special activity (computation) to explain some special capacities (mental ones) as opposed to others. To avoid this consequence, we should conclude that computation may well be a process that deserves to be explained functionally (or even better, mechanistically), but it should not be identified with every process of every mechanism. Computation is only one process among others.

6. COMPUTATIONAL EXPLANATION AS A KIND OF MECHANISTIC EXPLANATION

Once again, we can make progress by paying closer attention to the explanatory strategies employed by the relevant community of scientists—specifically, computer scientists and engineers. In understanding and explaining the capacities of calculators or computers, computer scientists do not limit themselves to functional explanation: they employ full-blown mechanistic explanation. They analyze computing mechanisms into processors, memory units, etc., and they explain the computations performed by the mechanisms in terms of the functions performed by appropriately organized components. But computer scientists employ mechanistic explanations that are specific to their field: the components, functions, and organizations employed in computer science are of a distinct kind. If this is correct, then the appropriate answer to our initial question about the nature of computational explanation is that it is a distinct kind of mechanistic explanation. Which kind?

There are many kinds of computing mechanism, each of which comes with its specific components, functions, and functional organization. If we want to characterize computational explanations in a general way, we ought to identify
features that are common to the mechanistic explanation of all computing mechanisms. The modern, mathematical notion of computation, which goes back to work by Alan Turing and others,20 can be formulated in terms of the mathematical notion of strings of symbols. For example, letters of the English alphabet are symbols, and concatenations of letters (i.e., words) are strings of symbols. More generally, symbols are states or particulars that belong to finitely many distinguishable types, and strings are concatenations of symbols. In concrete computing mechanisms, strings of symbols are realized by concrete concatenations of digits. Digits and strings of digits are states or particulars that are individuated by the fact that they are unambiguously distinguishable by the mechanisms that manipulate them. As I have argued elsewhere, to a first approximation, concrete computing mechanisms are mechanisms that manipulate strings of digits in accordance with rules that are general—namely, they apply to all strings from the relevant alphabet—and that depend on the input strings (and perhaps internal states) for their application.21 A computational explanation, then, is a mechanistic explanation in which the inputs, outputs, and perhaps internal states of the system are strings of digits, and the processing of the strings can be accurately captured by appropriate rules.

This analysis of computing mechanisms applies both to so-called digital computing mechanisms and to those classes of connectionist networks whose input-output functions can be analyzed within the language of computability theory.22 In other words, any connectionist network whose inputs and outputs can be characterized as strings of digits, and whose input-output function can be characterized by a fixed rule defined over the inputs (perhaps after a period of training), counts as a computing mechanism in the present sense. The analysis does not apply to so-called analog computers. The reason for this omission is that analog computers are a distinct class of mechanisms and deserve a separate analysis. I will not discuss analog computers here because they are not directly relevant to CTM.23

With the present analysis of computational explanation in hand, we are ready to discuss whether and how computational explanation is relevant to the explanation of mental capacities in terms of computations realized in the brain. Since we are interested in explaining mental capacities, we will focus primarily not on genetic or molecular neuroscience but on the levels that are most relevant to explaining mental capacities, that is, cellular and systems neuroscience.
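A toy example may help fix the definition (my own illustration, not the chapter's): binary increment is a rule that is general in the required sense, since it is defined for every string over the alphabet {"0", "1"}, and its application depends only on the input string, never on what, if anything, the string is taken to represent.

    # A toy computing mechanism in the present sense: inputs and outputs
    # are strings of digits over the alphabet {"0", "1"}, and the rule is
    # general -- it applies to every string over that alphabet.
    def increment(s):
        digits = list(s)
        i = len(digits) - 1
        while i >= 0 and digits[i] == "1":
            digits[i] = "0"  # carry propagates from right to left
            i -= 1
        if i >= 0:
            digits[i] = "1"
            return "".join(digits)
        return "1" + "".join(digits)  # overflow: the string grows by one digit

    print(increment("1011"))  # "1100", whatever the strings are taken to mean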
7. MECHANISTIC EXPLANATION IN NEUROSCIENCE

Analogously to mechanistic explanation in other fields, mechanistic explanation in neuroscience is about how different components of the brain are organized together so as to exhibit the activities of the whole. But mechanistic explanation in neuroscience is also different from mechanistic explanation in most other domains, due to the peculiar functions performed by the brain.

The functions of the brain (and more generally of the nervous system) may be approximately described as the feedback control of the organism and its parts.24 In
other words, the brain is in charge of bringing about a wide range of activities performed by the organism on the grounds of both its internal states and information received by the brain from both the rest of the organism and the environment. The activities in question include, of course, not only those involving the interaction between the whole organism and the environment, as in walking, feeding, or sleeping, but also a wider range of activities ranging from breathing to digesting to releasing hormones into the bloodstream.

Different functions come with different mechanistic explanations, and feedback control is no exception. Given that the brain has this peculiar capacity, its mechanistic explanation requires appeal to internal states that correlate with the rest of the body, the environment, and one another in appropriate ways. In order for the brain to control the organism, its internal states must correlate with the activities of the organism in appropriate ways and must be connected with effectors (muscles and glands) in appropriate ways. In order for the control to be based on feedback, the brain's internal states must correlate with bodily and environmental variables in appropriate ways; in other words, there must be internal "representations" (this is the standard sense of the term in neuroscience). In order for the control to take other internal states into account, different parts of the system must affect one another in appropriate ways. Most importantly for present purposes, in order for the mutual influence between different parts of the system to be general and flexible, brains contain a medium-independent vehicle for transmitting signals. By medium-independent vehicle, I mean a variable that can vary independently of the specific physical properties of any of the variables that must correlate with it (such as light waves, sound waves, muscle contractions, etc.).

The medium-independent vehicles employed by brains are, of course, all-or-none events, known as action potentials or spikes, which are generated by neurons. Spikes are organized in sequences called trains, whose properties vary from neuron to neuron and from condition to condition. Spike trains from one neuron are often insufficient to produce a functionally relevant effect. At least in large nervous systems, such as human nervous systems, spike trains from populations of several dozens of neurons are thought to be the minimal processing units.25 Mechanistic explanation in neuroscience, at or above the levels that interest us here, consists in specifying how appropriately organized trains of spikes from different assemblies of neurons constitute the capacities of neural mechanisms, and how appropriately organized capacities of neural mechanisms constitute the brain's capacities—including its mental capacities.

Undoubtedly, the term "computational explanation" may be used simply to refer to mechanistic explanations of brains' capacities. Perhaps this is even a good explication of the way some psychologists and neuroscientists employ the term today. Given this usage, computational explanation in neuroscience need not be analogous to computational explanation in computer science, and it need not suggest that neural mechanisms perform computations in the sense in which computers and calculators do. Nevertheless, the notion of computation was originally imported from computability theory into neuroscience, and from neuroscience into psychology and
AI, precisely in order to state the strong hypothesis that brains perform computations in the sense that computers and calculators do. Furthermore, it is still common to see references in neuroscience books to literature and results from computability theory as if they were relevant to neural computation.26 In light of this, it is appropriate to complete this discussion by asking whether neural mechanisms perform computations in the sense employed in computer science.

Given that we now have a characterization of both computational explanation in computer science and mechanistic explanation in neuroscience, we are in a position to reformulate the question about computational explanation of mind in a more explicit and precise way. Are spike trains strings of digits, and is the neural generation and processing of spike trains computation?27
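The question can be previewed with a toy contrast (illustrative values only; nothing here is real data): a string of digits has discrete, unambiguously individuated constituents, whereas a recorded spike train is a list of continuous event times whose functional significance, on standard models, emerges only through aggregate quantities such as a mean firing rate.

    # A string of digits: discrete tokens with unambiguous identity, each
    # occupying a well-defined position in the string.
    string_of_digits = "0110"

    # A toy spike train: spike times in milliseconds (invented values).
    spike_times_ms = [3.2, 7.9, 8.4, 15.1, 19.7, 23.3]

    # On standard models, no single spike functions as an atomic digit;
    # what matters is an aggregate such as the firing rate over a window.
    window_ms = 25.0
    rate_hz = len(spike_times_ms) / (window_ms / 1000.0)
    print(rate_hz)  # 240.0 spikes/s; there is no principled way to segment
                    # the train itself into digit-like constituents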
8. SYMBOLS, STRINGS, AND SPIKES

We have seen that within current neuroscience, the variables that are employed to explain mental capacities are spike trains generated by neuronal populations. The question of computational explanation in neuroscience should thus be reformulated in terms of the properties of spike trains. Indeed, if we reformulate the original CTM using modern terminology, CTM was initially proposed as the hypothesis that spike trains are strings of digits.28 This hypothesis can now be tested by looking at what neuroscientists have empirically discovered by studying spike trains.

A close look at current neuroscience reveals that within biologically plausible models, spike trains are far from being described as strings. I believe it is possible to identify principled reasons why this is so, and I have attempted to do so elsewhere.29 Here, I only have space to briefly summarize the difficulties that I see in treating spike trains as strings. Since strings are made of atomic digits, it must be possible to decompose spike trains into atomic digits, namely events that have unambiguous functional significance within the mechanism during a functionally relevant time interval. But the current mathematical theories of spike generation leave no room for doing this.30 The best candidates for atomic digits, namely the presence and absence of a spike, have no determinate functional significance on their own; they only acquire functional significance within a spike train by contributing to an average firing rate. Moreover, spikes and their absence, unlike atomic digits, are not events that occur within well-defined time intervals of functionally significant duration.

Even if the presence or absence of individual spikes (or any other component of spike trains) could be usefully treated as an atomic digit, however, there would remain difficulties in concatenating them into strings. In order to do so, it must be possible to determine unambiguously, at the very least, which digits belong to a string and which do not. But again, within our current mathematical theories of spike trains, there is no non-arbitrary way to assign individual spikes (or groups thereof) to one string rather than another.

As a matter of current practice, mechanistic explanation in neuroscience does not appeal to the manipulation of strings of digits by the nervous system. As I briefly
indicated, I believe there are principled reasons why this is so. Therefore, since computational explanation (in the sense employed in computer science) requires the appeal to strings of digits, explanation in neuroscience is not computational. It is important to stress that none of this eliminates the important peculiarities that mechanistic explanations in neuroscience possess relative to other explanations. For example, they are aimed at explaining the specific function of controlling organisms on the basis of feedback, and as a consequence, they appeal to mediumindependent vehicles and they rely heavily on correlations between internal states of the organism and environmental states (“neural representations”). 9. MECHANISTIC EXPLANATION OF MIND I have argued that once the notion of computational explanation that is relevant to CTM is in place, we have reasons to conclude that current neuroscientific explanations—including, of course, neuroscientific explanations of mental capacities—are not computational. As far as we can now tell, based on our best science of neural mechanisms, mental capacities are explained by the processing of spike trains by neuronal populations, and the processing of spike trains is interestingly different from the processes studied by computer scientists and engineers. In a loose sense, many people—including many neuroscientists—say and will continue to say that mental capacities are explained by neural computations. This may be interpreted to mean simply that mental capacities are explained by the peculiar processes present in the brain. But strictly speaking, current neuroscience does not mesh with CTM—in either classical or connectionist incarnations. Some philosophers may be tempted to reply that CTM is not threatened by the features of neuroscientific explanation, because CTM is “autonomous” from neuroscience.31 According to this line, CTM is a theory at a higher, more “abstract” level than the neural level(s); it is a psychological theory, not a neuroscientific one; it is unconstrained by the properties of the neural realizers. Anyone who takes this autonomy line is liable to be criticized for expounding a reactionary theory— reactionary because immune to empirical revision.32 I also believe that the autonomy reply presupposes an inadequate view of the relationship between psychology and neuroscience, but I do not have the space to argue this point here. Finally, the autonomy reply goes against the spirit in which CTM was proposed, for CTM was originally proposed as a theory of how mental capacities are explained by neural mechanisms.33 This appeal to authority, of course, is unlikely to persuade those who are sympathetic to the autonomy reply. For present purposes, the following counter-argument will have to suffice. No matter how abstract or autonomous from neuroscience we construe CTM to be, CTM postulates computations that are realized in some physical substratum or another. In ordinary biological organisms, the relevant physical substratum is the brain. In general, not everything can realize everything else. In other words, for any property A and putative realizers B1, …, Bn, any Bi realizes A if and only if having Bi constitutes having A.34 For instance, something is a (well functioning) heater if TP
and only if it possesses a property that constitutes the generation of heat—i.e., roughly speaking, a property that increases the average molecular kinetic energy of its surroundings. By the same token, for something—such as a neural process—to realize a computation, there must be a level of mechanistic description at which neural activity constitutes the processing of strings of digits in accordance with appropriate rules. This is the point behind the quote that opens this chapter: “[A] psychological theory that attributes to an organism a state or process that the organism has no physiological mechanisms capable of realizing is ipso facto incorrect”.35 As I have hinted, the empirical evidence is that neural mechanisms are incapable of realizing computations.36 Therefore, we should reject CTM.

At this point, some readers may feel at a loss. Those of us who have been trained in the cognitive science tradition are so used to thinking of explanations of mental (or at least cognitive) capacities as computational that we might worry that without CTM, we lack any mechanistic explanation of mental capacities. This worry is misplaced. My argument is not based on some a priori objection to CTM, leaving us without any alternative to it. My argument is based on the existence of a sophisticated science of neural mechanisms, which did not exist when CTM was originally proposed but has developed since then. A goal of this science is the mechanistic explanation of mental capacities. If we agree that the realizers of mental states and processes are neural, neuroscience is where we ought to look for mechanisms in terms of which to formulate our theories of mind. This point is neutral among reductionism, anti-reductionism, and eliminativism about mental properties. It is a straightforward consequence of the fact that if there are any mechanistic explananda of mental capacities, they are realized in the brain. And while neural mechanisms may not be computational in the sense of computer science, enough is known about them to support a rich mechanistic theory of mind.37
NOTES

1. Some philosophers have argued that the realizers of mental states and processes include not only the brain but also things outside it (see, e.g., Wilson 2004). I will ignore this nuance, because this simplifies the exposition without affecting the points at issue in this paper in any fundamental way.
2. McCulloch and Pitts (1943); Wiener (1948); von Neumann (1958). See Piccinini (2004a) for more on the origin of this view.
3. See, e.g., Gerard (1951); Rubel (1985); Perkel (1990); Edelman (1992); Globus (1992).
4. See, e.g., Gibson (1979); Varela, Thompson and Rosch (1991); Thelen and Smith (1994); Port and van Gelder (1995); Johnson and Erneling (1997); Ó Nualláin, Mc Kevitt and Mac Aogáin (1997); Erneling and Johnson (2005).
5. See, e.g., Taube (1961); Block (1978); Putnam (1988); Maudlin (1989); Mellor (1989); Bringsjord (1995); Dreyfus (1998); Harnad (1996); Searle (1992); Penrose (1994); van Gelder (1995); Wright (1995); Horst (1996); Lucas (1996); Copeland (2000); Fetzer (2001).
6. See, e.g., Newell and Simon (1976); Fodor (1975); Pylyshyn (1984); Churchland and Sejnowski (1992).
7. Baum (2004, p. 1, emphasis added).
8. See, e.g., Cummins (1983); Churchland and Sejnowski (1992); Fodor (1998); Shagrir (2001).
9. Piccinini (2004b).
10. I am using “concept” in a pre-theoretical sense. Of course, there may be ways of individuating concepts independently of their content, ways that may be accessible to those who possess a scientific theory of concepts but not to ordinary speakers. Furthermore, according to Diego Marconi, it may not be strictly correct to say that my concept SMOKE represents smoke, because my concept is not tokened in the presence of all and only instances of smoke. This makes no difference to the present point: what matters here is that concepts, at least pre-theoretically, are individuated by what they represent—whatever that may be.
11. The distinction between essential and accidental representation should not be confused with the distinctions between original, intrinsic, and derived intentionality. Derived intentionality is intentionality conferred on something by something that already has it; original intentionality is intentionality that is not derived (Haugeland 1997). Something may have original intentionality without being an essential representation, because it may not have its content essentially. Intrinsic intentionality is the intentionality of entities that are intentional regardless of their relations with anything else (Searle 1983). Something may be an essential representation without having intrinsic intentionality, because its intentionality may be due to the relations it bears to other things.
12. Burge (1986); Segal (1991).
13. Egan (1995).
14. See, e.g., Putnam (1967); Churchland and Sejnowski (1992). See Piccinini (2004c) for discussion.
15. See, e.g., Putnam (1967); Churchland and Sejnowski (1992); Wolfram (2002).
16. For more on this, see Piccinini (forthcoming).
17. Bechtel and Richardson (1983); Machamer, Darden and Craver (2000); Craver (2001); Glennan (2002).
18. Fodor (1968); Cummins (2000).
19. See, e.g., Fodor (1968); Dennett (1978); Marr (1982); Churchland and Sejnowski (1992); Eliasmith (2003).
20. Turing ([1936] 1965).
21. To be a bit more precise, for each computing mechanism, there is a finite alphabet out of which strings of digits can be formed and a fixed rule that specifies, for any input string on that alphabet (and for any internal state, if relevant), whether there is an output string defined for that input (and internal state), and which output string that is. If the rule defines no output for some inputs (internal states), the mechanism should produce no output for those inputs (internal states). For more details, see Piccinini (2006).
22. See, e.g., McCulloch and Pitts (1943); Minsky and Papert (1969); Hopfield (1982); Rumelhart and McClelland (1986); Siegelmann (1999).
23. Pace suggestions to the contrary that are found in the literature (see, e.g., Churchland and Sejnowski 1992). The main disanalogy is that while the vehicles of analog computations are continuous variables, the main vehicles of neural processes are spike trains, which are sequences of all-or-none events. For an analysis of analog computers and their relation to digital computers, see Piccinini (2004d).
24. The exact level of sophistication of this feedback control is irrelevant here. See Grush (2003) for discussion of some options.
25. Shadlen and Newsome (1998).
26. See, e.g., Churchland and Sejnowski (1992); Koch (1999).
27. Notice that the peculiarities of mechanistic explanations in neuroscience are not only logically independent of whether they constitute computational explanations in the sense of computer science; historically, their formulation preceded the invention of modern computability theory and computer design. For example, the self-conscious use of the notion of information in neuroscience dates back to at least the 1920s (Adrian 1928, cited by Garson 2003, to which the present section is indebted).
28. McCulloch and Pitts (1943).
29. Piccinini (unpublished).
30. Dayan and Abbott (2001).
31. Fodor (1997).
32. Churchland (1981c).
33. McCulloch and Pitts (1943); Wiener (1948); von Neumann (1958).
34. Pereboom and Kornblith (1991).
35. Fodor (1968, p. 110).
36. This way of putting the point may be a bit misleading. The evidence is that computation is not what neural mechanisms do in general. From this, it doesn’t follow that no neural mechanism can perform computations. Human beings are certainly capable of performing computations, and this is presumably explained by their neural mechanisms. So, human brains, at least, must possess a level of organization at which some specific neural activity does realize the performance of computations.
37. Thanks to José Bermúdez, Diego Marconi, and especially Carl Craver for helpful comments on previous versions of this chapter.
CHAPTER 3

COMPUTATIONALISM UNDER ATTACK

Roberto Cordeschi and Marcello Frixione
Since the early 1980s, computationalism in the study of the mind has been “under attack”1 by several critics of the so-called “classic” or “symbolic” approaches in AI and cognitive science. Computationalism was generically identified with such approaches. For example, it was identified with both Allen Newell and Herbert Simon’s Physical Symbol System Hypothesis and Jerry Fodor’s theory of the Language of Thought, usually without taking into account the fact that these approaches are very different in their methods and aims.2 Zenon Pylyshyn, in his influential book Computation and Cognition, claimed that both Newell and Fodor deeply influenced his ideas on cognition as computation.3 This probably added to the confusion, as many people still consider Pylyshyn’s book paradigmatic of the computational approach in the study of the mind.

Since then, cognitive scientists, AI researchers and also philosophers of mind have been asked to take sides on different “paradigms” that have from time to time been proposed as opponents of (classic or symbolic) computationalism. Examples of such oppositions are:
computationalism vs. connectionism,
computationalism vs. dynamical systems,
computationalism vs. situated and embodied cognition,
computationalism vs. behavioural and evolutionary robotics.

Our preliminary claim in section 1 is that computationalism should not be identified with what we would call the “paradigm (based on the metaphor) of the computer” (in the following, PoC). PoC is the (rather vague) statement that the mind functions “as a digital computer”. Actually, PoC is a restrictive version of computationalism, and nobody ever seriously upheld it, except in some rough versions of the computational approach and in some popular discussions about it. Usually, PoC is used as a straw man in many arguments against computationalism. In section 1 we look in some detail at PoC’s claims and argue that computationalism cannot be identified with PoC. In section 2 we point out that certain anti-computationalist arguments are based on this misleading identification. In section 3 we suggest that the view of the levels of explanation proposed by David Marr could clarify certain points of the debate on computationalism. In section 4 we touch on a controversial issue, namely the possibility of developing a notion of analog computation, similar to the notion of digital computation. A short conclusion follows in section 5.
We do not debate other controversial issues here, e.g., that of so-called “pancomputationalism”, which, albeit related to the topic of this chapter, would deserve a deeper analysis, and is not directly relevant to our argument. Indeed, the aim of this chapter is not to deal fully with the different issues of computationalism, but to put forward a preliminary investigation of the topic, which might free it of certain common misunderstandings.

1. THE “PARADIGM OF THE COMPUTER” AND COMPUTATIONALISM

According to PoC, digital computers, considered to be the basis for explaining mental phenomena, are characterised by (at least one of) the following features:

(1) They are sequential machines, inspired by the von Neumann architecture. Even concurrent computers with a limited number of processing units can be accommodated within PoC. In any case, PoC’s computers are machines based on a rigid distinction between memory and processing units (as is the case with von Neumann-style computers).

(2) They are general purpose (i.e., universal) computers. That is to say, they are programmable computing machines that in principle (i.e., if not subjected to temporal constraints, and if their memory is supposed to be unlimited) can compute all computable functions according to Church’s Thesis.

Opponents of computationalism have an easy time criticising PoC, in that both (1) and (2) can hardly be reconciled with the available data about the mind/brain. As far as (1) is concerned, the nervous system is characterised by a high degree of parallelism: in the brain, a large number of interconnected units work in parallel. Therefore, a serial model of computation would be unsatisfactory in modelling many aspects of cognition. Moreover, there is no evidence in favour of the psychological or anatomical plausibility of an architectural distinction between storage and processing of information. As far as point (2) is concerned, empirical data favour the claim that at least some parts of the cognitive architecture consist of specialised and possibly anatomically localised modules. These, from a computational point of view, can be considered dedicated computational devices rather than processes implemented on a universal computer.

Both (1) and (2) have been preferred targets of various opponents of classic AI and cognitive science, starting with the early supporters of connectionism in the 1980s.4 However, it is worth noting that nobody in the field of classic AI or cognitive science has ever seriously claimed that the brain/mind functions “as a von Neumann computer” (or “as a Turing machine”). Even Fodor and Pylyshyn argued at length that the issue of the debate was not this “absurd assumption” (as a matter of fact, a mere metaphor), but whether a connectionist explanation of the mind is possible, and perhaps more suitable than the classic one.5

Thus computationalism is not PoC. The central claim of computationalism is that mental processes are computations. More precisely, the theoretical constructs of
a theory of the mind are both the computational processes that are supposed to occur in the mind and the data structures (“representations”) that such processes manipulate. According to the orthodox version of computationalism, mental processes are effective or algorithmic processes in the sense of computability theory, i.e., processes that—according to Church’s Thesis—compute partial recursive (or, equivalently, Turing-computable) functions.6 Algorithmic, or effective, computations in the above sense are digital processes that manipulate discrete entities. In principle, some extended notion of computation could also be considered, which includes analog computation. We shall discuss this point later in the chapter (see section 4). For the moment, we shall consider only digital computations.

Even in its orthodox form (which takes into account only digital computations, and identifies computability with Turing-computability), computationalism can consistently deny both (1) and (2) above. As regards (1), parallel, non-von Neumann architectures are compatible with a computational stance, as is shown, for example, by the analysis of the notion of algorithm developed by Robin Gandy.7 As for (2), the claim that a certain device performs an algorithmic computation is fully legitimate even if the device is not a general purpose computer. Computationalism is compatible with the thesis that the mind/brain is (entirely or in part) built up of modules that are special purpose computational devices. David Marr, who probably developed the first full-blown version of computationalism, was a strong supporter of modularism (we shall consider Marr’s computationalist stance in greater detail below).

It is not our aim to take sides here on the role of computational universality in the cognitive sciences. For our present purposes, what matters is that a computationalist can deny that the human mind/brain, or some part of it, is a universal computer. Different positions are possible on this issue. Followers of weak modularism would agree that certain parts of the human mind/brain (for example, the input and output modules) are dedicated computational devices, but deny that this is true of central cognition.8 Followers of strong modularism would claim that the entire human mind/brain is made up of dedicated computational components. Putting aside human cognition, one might agree that a computational explanation of the abilities of simple cognitive systems (e.g., insects) is possible, without assuming that such systems are universal computational devices.
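The contrast between dedicated and universal devices can be made concrete with a minimal Python sketch (the example and all names are ours, purely illustrative, and not drawn from the chapter): a dedicated device computes one fixed function over its inputs, while a universal device takes a program as a further input and can therefore mimic any dedicated device.

```python
def edge_detector(bits):
    """A dedicated device: it computes one fixed function over a string
    of digits (here, marking each 0-to-1 transition) and nothing else."""
    return [int(a == 0 and b == 1) for a, b in zip(bits, bits[1:])]

def universal(program, data):
    """A universal device: it takes a program as input and runs it
    (eval stands in, loosely, for a universal machine's ability to
    interpret arbitrary programs)."""
    return eval(program)(data)

bits = [0, 0, 1, 1, 0, 1]
print(edge_detector(bits))                      # [0, 1, 0, 0, 1]
print(universal(
    "lambda bs: [int(a == 0 and b == 1) for a, b in zip(bs, bs[1:])]",
    bits))                                      # same output
```

Both devices carry out algorithmic computations in the sense just defined; only the second is programmable, which is all that claim (2) adds.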
2. TROUBLES WITH COMPUTATIONALISM

Thus, we have concluded that computationalism is not PoC. However, such oppositions as those mentioned at the beginning of the present chapter are often based on the identification of computationalism with PoC. In our opinion, these oppositions are based on the fact that a restrictive view of computationalism is assumed as a polemical target. As an example we consider here the opposition between dynamical and computational explanations in cognitive science, as put forward by Tim van Gelder. On the one hand, van Gelder states:
(a) dynamical systems “can compute, i.e., be computers”, but “effective computation is a specific kind of computation, resulting from a certain kind of constraint on the processes involved”; moreover “it can be proved that certain classes of dynamical systems are more powerful—can compute a wider class of functions—than Turing Machines. So, dynamical systems can compute, i.e., be computers, without needing to be digital computers”.9
On the other hand, he acknowledges that:

(b) “most if not all dynamical systems of practical relevance to cognitive science are effectively computable”, in the sense that their behaviour “is governed by some computable function”, i.e., some partial recursive, or Turing-computable, function.10 So, “no dynamicist in cognitive science (to my knowledge) […] has taken up dynamical modelling on the promise of super-Turing capacities”.11
Summing up, on the basis of (a) and (b), we can conclude that dynamical systems can be computers, and that their behaviour (at least as far as cognition is concerned) can be described in terms of (Turing-)computable functions. However, van Gelder claims that dynamicism in cognitive science is not compatible with a computational approach. This is because, for van Gelder, “dynamic” computation is profoundly different from common digital computation. Dynamical systems “compute” recursive functions; but they perform such computations in a different way from digital computers. How could this claim be justified? As far as we can see, it might be done in one of the following ways.

(i) A first possibility is to suppose that the functioning of cognitive dynamical systems depends on some “hidden” non-recursive process. In other words, given a dynamical system DS as in Figure 3.1, the functioning of DS would depend on at least one component S whose behaviour exceeds the limits of effective computability (in the sense that the behaviour of S cannot be described by a partial recursive function). However, this “non-recursive” behaviour is not visible from the outside. E.g., let us suppose that S computes the values of a function f_S: N → N such that, for every x, f_S(x) is the xth decimal figure of a certain non-(Turing-)computable real number n. By definition, f_S is not a Turing-computable function. Therefore, the functioning of DS is not algorithmic. However, let us suppose that the output of S has some effect on the overall behaviour of DS if and only if the input of S is in a certain finite range (say, is less than or equal to 10). In the other cases, the operations of S have no effect on the output of DS. All the other processes in DS are fully algorithmic. Therefore, DS computes an effectively (Turing-)computable function (S could be replaced by a look-up table).
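The point of (i) can be made concrete with a small Python sketch (ours; the digits below are arbitrary placeholders, since genuine values of a non-computable real could not be exhibited anyway):

```python
# Pretend these are the first eleven decimal digits of some
# non-(Turing-)computable real number. Only this finite fragment of S
# ever affects DS's output, so a finite table suffices.
S_TABLE = [1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]

def S(x):
    """Stand-in for the 'hidden' non-recursive component f_S."""
    return S_TABLE[x]          # defined only on the relevant range

def DS(x):
    """The overall system: S's output matters only for x <= 10; every
    other branch is fully algorithmic, so DS computes a
    Turing-computable function."""
    if x <= 10:
        return S(x) + 1        # some algorithmic post-processing
    return x % 7               # S is never consulted here

print([DS(x) for x in range(15)])
```

Since DS’s input-output behaviour is reproduced exactly by this table-driven program, positing a genuinely non-recursive S adds nothing observable.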
Figure 3.1. A dynamical system (DS). See text for full explanation.

(ii) A second possibility is that van Gelder’s opposition is based on a restricted interpretation of what counts as digital computation, i.e., on what we called above a restricted view of computationalism. In this case, the notion of “digital computation” that van Gelder contrasts with dynamicist computation would include some specific, restricted architectural assumption, and would be heavily biased towards PoC.

Position (i) would have to be supported by very strong justifications; otherwise the hypothesis of “hidden” non-algorithmic processes (i.e., non-algorithmic processes that have no influence on the input-output behaviour of the system) would fall under Occam’s razor. In any case, it is unlikely that van Gelder endorses this thesis. In his own words, he does not even consider it crucial that the dynamical systems adopted in cognitive science be continuous rather than discrete systems.12 Therefore, it seems that his claims about computationalism can be traced back to (ii), i.e., to a restricted view of computationalism.

The comparison between Watt’s regulator and a computational regulator in van Gelder (1995) shows that many of van Gelder’s criticisms of computationalism stem from a restricted view of that approach.13 One of the features that would distinguish a computational version of the regulator from Watt’s regulator is that the former, and not the latter, is “sequential and cyclic”, in van Gelder’s words. But, in general, algorithmic processes are not necessarily sequential. William Bechtel argues that frequently the explanation of complex processes has an initially sequential structure, and only later—when a better understanding of the process is achieved—are more complex interaction schemes developed (for Bechtel, models of fermentation provide an example of this kind of evolution).14 This, however, does not mean that a given explanation ceases to be mechanistic in nature. In this case too, an aspect of a restricted class of algorithms (i.e., sequentiality) is assumed to be characteristic of computationalism (probably, it is not by chance that sequential computation is seen as a distinctive feature of PoC).
3. KINDS OF COMPUTATIONAL EXPLANATIONS

As pointed out above, many arguments against computationalism are based on disagreements concerning computational architecture. The same could be said of the format of representations. Sometimes, computationalism is identified with the choice of a particular kind of representation, typically the “language-like” representations characteristic of classic AI and cognitive science (logic-based representations, production rules, semantic networks, frames, and so on). These representations are processed by explicit manipulation rules. Representations with a less “linguistic” structure (most notably, distributed or “subsymbolic” connectionist representations) have been considered less akin to computationalism. These claims too usually stem from a restricted view of computationalism. Computationalism, per se, is not committed to any particular kind of representation or process (provided that they are effective processes). Frequently, these disputes arise from confusion concerning different levels of explanation.

The analysis of the levels of explanation in cognitive science developed by David Marr may be useful here. According to Marr, a computational explanation can be stated at three different levels: the level of the computational theory, the algorithmic level and the implementation level.15 The level of the computational theory is the most abstract; it is concerned with the specification of the task of a certain cognitive phenomenon. At this level, cognitive tasks are characterised only in terms of their input, their output, and the goal of the computation, without any reference to specific cognitive processes and mechanisms. In other words, at the level of computational theory a cognitive task is accounted for in terms of a functional mapping between inputs and outputs. The algorithmic and the implementation levels deal, at different levels of abstraction, with the specification of the task identified at the computational level. The algorithmic level explains “how” a certain task is carried out: it deals with the computational processes and with the processed data structures (i.e., the “representations”). The implementation level deals with the physical features of the device (e.g., neural structures) implementing the data structures and the procedures singled out at the algorithmic level.

The relationship between computational theory and algorithmic level is the same as that between a mathematical function and an algorithm that computes its values. The aim of a computational theory is the individuation of a (computable) function f as a model of a given cognitive phenomenon. At the computational theory level, no assumption is made about the algorithms that compute f, nor, a fortiori, about their implementation. The role of the computational level is to allow a more abstract understanding of cognitive phenomena: the computational explanation of a cognitive phenomenon cannot be reduced to the exhibition of an algorithm (or, worse, a computer program) that simulates its behaviour (as happens, for Marr, in many alleged cognitive models developed in AI).
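The division of labour can be made vivid with a toy example of ours (not Marr’s): take the “cognitive task” of sorting. The Python sketch below fixes the computational theory as an input-output mapping and then gives two distinct algorithmic-level realisations of it; the implementation level (silicon, or neurons) is of course not visible in code at all.

```python
# Computational theory: the task is fully specified as a mapping --
# given a finite sequence of numbers, return the same numbers in
# non-decreasing order. Nothing is said about how.

def insertion_sort(xs):
    """One algorithmic-level account: build the output by inserting
    each element into its place."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """A different algorithm -- different processes and data
    structures -- computing the very same function."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

At the computational-theory level the two are indistinguishable; they differ only at the algorithmic level, which is the sense in which, for Marr, disputes about representational format do not by themselves touch computationalism.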
Summing up, at the most abstract level (the level of computational theory, in Marr’s terminology) a computational account is completely neutral with respect to the mechanisms adopted (kinds of representation, data structures, algorithms and processes). For example, at this abstract level classic and connectionist theories cannot be discriminated on the basis of the fact that the former adopt “linguistic” representations while the latter are based on, say, “subsymbolic” representations. Nor is appealing to parallel rather than serial computation relevant.16 Adopting certain kinds of representations and processes instead of others (provided that they are effective processes) is not sufficient, per se, to exceed the boundaries of computationalism.

There is a further respect in which Marr’s analysis can be relevant here. The opposition between computational and dynamicist approaches has sometimes been formulated in terms of different kinds of explanation. Mechanistic explanations, which are typical of the computational approach, aim at explaining cognitive phenomena in terms of the mechanisms (representations or data structures, and processes) determining them. According to the dynamicist approach, to explain a given phenomenon is equivalent to identifying the laws governing it, i.e., the equations that describe its evolution through time.17 In this sense, the dynamicist explanation would be more akin to the traditional nomological-deductive explanations adopted, for example, in physics.18

However, the computational approach is not in principle incompatible with the kind of explanation favoured by dynamicists. In Marr’s hierarchy, the computational explanation—in which cognitive phenomena are characterised solely in terms of the functional correspondences between inputs and outputs—is of a piece with the “traditional” explanation stated in terms of systems of equations. Therefore (unless we assume that the equations governing the dynamics of cognitive systems are not computable—and, as seen above, this is not the case for van Gelder), the dynamicist approach per se does not offer a different kind of explanation. It does, however, give up a further advantage that the computational approach can offer us, i.e., mechanistic explanations in terms of algorithms and representations.

One could ask whether dynamicists (and other opponents of classic cognitive science, e.g., situated cognition theorists) can avoid the level of representations and algorithms. In other words, for which phenomena would a truly dynamicist explanation that makes no hypothesis about underlying processes turn out to be really satisfactory? And to what extent do the examples of explanation proposed by dynamicists really leave mechanisms or processes out of consideration? These questions are at the core of a lively debate in cognitive science, but are beyond the aims of this chapter.

Bechtel’s view seems to support our claim about dynamicist explanation. Let us suppose that a dynamicist theory of some cognitive phenomenon has been developed. At this point, according to Bechtel, a further question arises: “How is the underlying system able to instantiate the laws identified in these [dynamicist] accounts? One way to answer this question is to pursue a mechanistic explanation”.19 Here dynamicist and mechanistic explanations complement one another. A further role of dynamicist explanations with respect to mechanistic ones would be that of providing a preliminary understanding of the behaviour of the system being studied: “It is helpful to have a good description of what a system is doing before trying to explain how it does it”.20 Such a preliminary understanding can be given by a dynamicist explanation. From the above quotations the analogies
clearly emerge between (a) the role of dynamicist explanations with respect to mechanistic models in Bechtel’s view, and (b) the role of computational theories in Marr’s methodology.21 Summing up, different positions are possible, which can be summarised in the following table:
                             van Gelder                  Bechtel                     Marr

Laws/equations level         Dynamic-equation systems    Dynamic-equation systems    Computational theory

Mechanisms level                                         Mechanistic explanation     Algorithms and representation level
In conclusion, according to van Gelder and the supporters of dynamical systems theory, dynamicist explanations are completely unrelated to the computational/mechanistic approach, and incompatible with it: dynamicist explanations have no mechanistic counterpart. According to Bechtel, dynamicist and computational explanations, far from being incompatible, are complementary and can be fully integrated. Finally, in a more radical way, the level of explanation of dynamicism can be considered part of a computational explanation. This is the case with Marr’s methodology, that is to say, with a version of computationalism that does not reduce computational explanations to the mere individuation of algorithms or, even worse, to the mere design of computer programs.

4. ANALOG COMPUTATIONS

The possibility of adopting analog processes and representations in cognitive explanations is a particularly tricky problem for the computationalist.22 By analog processes (representations) we mean processes (representations) based on continuous quantities. This topic also plays some role in many debates pitting classic cognitive science against various alternative “paradigms”. The adoption of some notion of analog computation has been discussed from different points of view, for example by supporters of the dynamicist approach, or by connectionists such as Churchland and Sejnowski (1992). The transition to analog computation involves a great discontinuity with traditional computability theory, which, as said above, is based on the hypothesis that computational notions are defined in terms of operations on discrete quantities.

Yet the shift from digital to analog does not seem to involve giving up a computational approach. Consider the case of Kenneth Craik, the Cambridge psychologist who is usually considered a forerunner of computationalism in the study of mental processes. In 1943, he formulated his “symbolic theory of thought” in terms of computations on analog symbols. His definition of models, which later
became popular within classic cognitive science, is associated with the notion of simulation as performed by analog computers, which were the prevailing computational devices at the time. An analog computer can “imitate” a natural or mental phenomenon by reproducing certain “essential features” (in Craik’s words) and ignoring others that are not essential for the simulation. Craik’s examples are Vannevar Bush’s differential analyser, Lord Kelvin’s tide predictor, self-directing anti-aircraft guns, and small-scale models of human-made artefacts, such as a bridge or a boat. An external process, such as the design of a bridge or the rising of tides, is physically realised as a device in which states of the process are “translated” as “representation by symbols” or “representatives” in input, and then manipulated by suitable rules or procedures. Finally, as output one has a “‘retranslation’ of these symbols into external processes (as in building a bridge to design) or at least [a] recognition of the correspondence between these symbols and external events (as in realising that a prediction is fulfilled)”.23 Such a device is the model (or rather, the working model) of the external process. Symbols must here be taken in a very general sense: symbols are not only words or numerals, but can also be, for example, positions of gears in a mechanism, whose “mechanical process” parallels the external process, thus causing the transition from one state to another.24

A remark is needed here about the use of the term “analog”. This term can be used with at least two different meanings, which are in some way related but do not fully coincide.25 According to the first, analog processes are based on the manipulation of continuous quantities. Here “analog” is opposed to “digital”. According to the second, closer to Craik’s, analog models depend on some “resemblance” relation existing between the representations and what is represented.26 Here “analog” is opposed to “propositional” or “symbolic” (though, as seen above, this use of “symbolic” does not coincide with Craik’s terminology). If a system is analog in the latter meaning but not in the former (i.e., if its representations are based on some form of “resemblance” but are made up of discrete elements), then there is no problem in considering it a computational system in all respects (taking for granted, of course, that its evolution is governed by effective processes according to Church’s thesis). Johnson-Laird’s mental models are an example of this position: they are analog representations in that they “resemble” what they represent; however, they are digital, and therefore they are not analog in the former of the two meanings mentioned above.27

A more complex issue is the case of systems that are analog in the first meaning (i.e., systems based on the manipulation of continuous quantities, whether or not they are also analog in the second meaning). All processes that are computable according to Church’s thesis are discrete. Continuous quantities can at best be approximated in digital terms; they cannot be coded without error in digital terms, since the cardinality of the continuum is greater than the cardinality of countable sets. As a consequence, processes based on continuous quantities exceed the limits of the orthodox notion of computation (i.e., the notion of computation based on Church’s thesis).
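The cardinality point can be stated compactly (a standard set-theoretic fact, added here for convenience rather than drawn from the chapter):

\[
|\mathbb{N}| \;=\; |\mathbb{Q}| \;=\; \aleph_0 \;<\; 2^{\aleph_0} \;=\; |\mathbb{R}|.
\]

A digital code assigns finite strings over a finite alphabet to the values it represents, and there are only countably many such strings; hence almost every real number is left without an exact code, and continuous quantities can only be approximated.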
There are many ongoing research projects on the foundations of analog computation, which aim at characterising it in a rigorous mathematical way. Different possibilities have been explored. For example, so-called recursive analysis aims at extending the class of partial recursive functions to a class of functions with real arguments and values. Another line of research is inspired by the General Purpose Analog Computer (GPAC), a model of analog computation proposed in 1941 by Claude Shannon with the aim of giving a precise mathematical characterisation of Bush’s differential analyzer.28 The problem with such attempts is that, contrary to what happens in the case of digital computation, no general notion of analog computation emerges. In other words, no class of real functions has yet been identified that plays, for analog computation, the role played by the class of partial recursive functions in the case of digital computation. Thus, there is no class of real functions which is stable and invariant with respect to the different ways of characterising analog computation: different notions of analog computation result in different classes of real functions.

This state of affairs is rather discouraging. When one claims that a certain analog model (in the sense of a model based on the processing of continuous quantities) is a computational model, it is not immediately clear what this means (or, rather, it is considerably less clear than in the digital case). In other words, it is much more difficult to establish the extent to which we are still within the boundaries of computation, and when such boundaries have been exceeded.
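If the standard results of this literature are recalled correctly (the example is ours, not discussed above), one concrete symptom of the divergence is the Euler Gamma function

\[
\Gamma(x) \;=\; \int_0^{\infty} t^{\,x-1} e^{-t}\, dt,
\]

which recursive analysis counts as a computable real function, whereas, by Hölder’s theorem, it satisfies no algebraic differential equation and is therefore not generable by a Shannon-style GPAC. One and the same function thus falls inside one proposed class of “analog-computable” functions and outside another.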
PT
5. CONCLUSION

In this chapter we have pointed out some of the misunderstandings involved in putting computationalism “under attack”.
Criticisms have often been based on a particularly rigid, restrictive view of computationalism, considered as a “paradigm” artfully opposed to other alleged “paradigms”. It is beyond doubt that there are deep differences between the approaches that have from time to time been proposed, ranging from “classic” cognitive science to connectionism and dynamicism (we have not dealt with other possible contenders here). However, an opposition of the kind “computationalism vs. something else” is misleading: it accounts neither for the actual differences between those approaches nor for the as yet unresolved problems. Despite certain restrictive views, computationalism is not as rigid as it has been described by some of its early supporters and some of its recent opponents. Probably, the main problem to be resolved is not what cognition might be if not computation, but what kind of computation cognition might be.29
NOTES

1. We have borrowed this expression from Scheutz (2002).
2. Suffice it to consider here Fodor’s strong criticism of “Wagnerian” (i.e., classic) AI and of the very concept of behaviour simulation (Fodor 1983).
3. Pylyshyn (1984, pp. xxi-xxii).
4. Consider, for example, the following quotations: “The dissimilarities between computers and nervous systems […] have made the metaphor that identifies the brain with a computer seem more than a trifle thin. […] The known parallel architecture of the brain and the suspected distributed nature of information storage have suggested to some researchers that greater success in understanding cognitive functions might be achieved by a radical departure from the sequential stereotype. The idea has been to try to understand how interconnected neuronlike elements, simultaneously processing information, might accomplish such tasks as pattern recognition and learning” (Churchland 1986, pp. 458-459). “In considering the brain as a Turing machine, we must confront the unsettling observation that, for a brain, the proposed table of states and state transitions is unknown […], the symbols on the input tape are ambiguous and have no preassigned meanings, and the transition rules, whatever they may be, are not consistently applied. […] It would appear that little or nothing of value can be gained from the application of this failed analogy between the computer and the brain” (Edelman 1992, p. 227).
5. Fodor and Pylyshyn (1988, pp. 50-64).
6. Within computability theory, effective processes are characterised in terms of a class of arithmetic functions, the so-called partial recursive, or Turing-computable, functions. Sometimes the objection has been raised that identifying computation with the computation of the values of a function is restrictive (see, e.g., Scheutz 2002). Such an identification is suitable only when input data are provided at the beginning of the computation, and outputs are produced at the end. The majority of computer-implemented algorithms do not work in this way: in most cases computer programs continue to interact with the user and/or with their environment, taking new inputs and producing new outputs, until the computation ends. Similar considerations also hold for the algorithms employed within cognitive science. These phenomena are the topic of the field of research called interactive computation. However, these aspects are not strictly relevant for our present argument, and we do not take them into account here.
7. Gandy (1980) developed a very comprehensive analysis of algorithmic computation, which also includes parallel computing processes. This analysis resulted in a further confirmation of Church’s Thesis. Gandy identified a number of very general constraints that every algorithmic process must satisfy. The computing devices that obey such constraints are called Gandy machines. Turing machines turn out to be a special case of Gandy machines. Moreover, it can be proved that a Turing machine can do whatever can be done by a Gandy machine (in other words, all functions that can be computed by a Gandy machine can also be computed by a Turing machine).
8. See, e.g., Fodor (1983).
9. van Gelder (1998a, sections 6.3 and 6.10).
10. Ibid., section 6.4.
11. van Gelder (1998b, section 1.3).
12. Ibid., section 1.4.
13. van Gelder (1995).
14. Bechtel (1998, section 5).
15. See this volume, p. 9.
16. This does not mean that connectionism is simply an implementation theory, as certain early supporters of “classic” cognitive science claimed. On the one hand, the above-mentioned opposition regards representations and algorithms, not implementation issues. On the other hand, other features might be used to distinguish classic and connectionist explanations at the computational level, e.g., connectionists’ emphasis on learning and on the statistical side of cognition.
17. One has differential equations in the case of continuous dynamical systems and difference equations in the case of discrete dynamical systems. As to differential equations, a mechanical (algorithmic) model can only approximate their behaviour; we do not deal with this issue here. As seen above, for van Gelder the distinctive feature of the dynamical approach does not consist in the use of continuous rather than discrete systems (van Gelder 1998b, section 1.4).
18. See for example van Gelder’s distinction between “Hobbesian”, i.e., mechanistic, and “Humean” explanations in psychology (van Gelder 1998a). On this topic see also Bechtel (1998, sections 4 and 5); and Beer (2000, pp. 96-97).
19. Bechtel (1998, p. 312).
20. Ibidem.
21. Bechtel sees the role of dynamical explanations in the development of mechanical models as analogous to that of the ecological requirement stressed by certain cognitivist psychologists (e.g., Ulric Neisser). He observes that “the language of ecological validity is drawn from James Gibson, and it is noteworthy that several of today’s DST [Dynamical System Theory] theorists […] are also neo-Gibsonians” (Bechtel 1998, p. 312). It is noteworthy also that Marr considered Gibson to be “perhaps the nearest anyone came to the level of computational theory” (Marr 1982, p. 29). But Gibson “was misled by the apparent simplicity of vision” (p. 30), thus disregarding the mechanistic side of the theory (i.e., explanations in terms of representations and algorithms). Gibson, Marr concluded, “did not understand properly what information processing was, which led him to seriously underestimate the complexity of the information-processing problems involved in vision” (p. 29).
22. For a recent point of view on this topic see Trautteur (2005).
23. Craik (1943, p. 50).
24. See Cordeschi (2002, chapter 4) for further details.
25. See, e.g., Pylyshyn (1984, pp. 199 ff.).
26. O’Brien (1998) calls this thesis structural isomorphism. For a partially similar position, see Trenholme (1993).
27. Johnson-Laird (1983). Analog models in the second meaning can in turn be combined with both propositional and connectionist representations, still remaining within the boundaries of digital computation. See for example Chella, Frixione and Gaglio (1997, 2000) for a hybrid model in the field of artificial vision and robotics, which combines analog models with both propositional representations and connectionist networks.
28. On these topics see, e.g., Pour-El and Richards (1989); Weihrauch (2000).
29. Thanks to Diego Marconi, Massimo Marraffa, Teresa Numerico, Dario Palladino and Giuseppe Trautteur for useful critical remarks on previous versions of this chapter.
PART II

DIMENSIONS OF MIND
CHAPTER 4

VISION SCIENCE AND THE PROBLEM OF PERCEPTION

Alfredo Paternoster
“Vision science—as Palmer puts it in his introduction to a recent, impressive work on the subject—is not just one branch of cognitive science, but the single most coherent, integrated and successful”.1 However chauvinist this judgment may seem, vision has undoubtedly been, and still is, a very important area in cognitive science, stimulating a particularly rich discussion between philosophers and scientists.

There are two opposing, radical attitudes with regard to the relation between philosophy and cognitive science. On the one hand are those (including most, but by no means all, philosophers) who think that conclusions following from a priori arguments are more substantial than the empirical results provided by the current sciences of the mind. These people are inclined to regard this or that psychological theory as a confirmation of their philosophical bents. On the other hand, there are scholars who accord priority at least to the most well-established experimental results. Building on empirical evidence, they try to outline a philosophical picture coherent with (current) scientific findings. Although I am in general more sympathetic to the latter strategy, I believe there are problems in the field which are still so hard to assess that we ought to assume a more pliant and cautious attitude, as will be shown in this chapter. Based upon the analysis of a case study—the controversy between direct and indirect realism—I will argue that some results from cognitive science can cast light on philosophical problems about visual perception. At the same time, however, it will become clear how empirical research is itself still constrained by philosophical hypotheses and prejudices.
1. THE PHILOSOPHICAL OPPOSITION BETWEEN DIRECT AND INDIRECT PERCEPTION

Philosophy has been much concerned with perception, especially in the seventeenth and eighteenth centuries, when the main subjects of philosophical reflection were the origins and foundations of knowledge. Many different questions have arisen in this domain, yet it is not an overstatement to claim that there is something to be regarded as the philosophical problem of perception, which could be expressed by the following question: can we directly perceive the external world?2

Readers without a philosophical background may, quite legitimately, consider this question bizarre. What does “directly” mean? If it means that
visual experience is automatic, immediate (that is, instantaneous) and fully unconscious of the underlying complex cerebral mechanisms, then the claim would seem to be trivially true. Yet the idea that the external world could be given to us indirectly comes from what seems to be an equally trivial, prima facie harmless remark, namely, that we access the world through our sense receptors.3 Although we have good reasons to believe that receptors are reliable, they are nonetheless characterized by certain resolution or grain features and certain specific ways of coding information. Our perception of the world is thus constrained by how the receptors work and bounded by their powers. For instance, we are blind and deaf to some frequencies; things appear to us in such and such a way, e.g., as having certain shapes and hues. It is only a small step from here to the somewhat skeptical conclusion that we do not know the world as it really is, but only as it appears to us. That is why the issue of perception has traditionally been an epistemological one in the classical sense, i.e., an issue concerning the reliability and justification of knowledge, closely related to the issue of skepticism.

In one of his personifications, the skeptic dares to call into question the very existence of the world. Indeed, the dependence on sense receptors might be interpreted as entailing that we live in a world created by our senses, so that the world does not exist at all (“ontological phenomenalism”). However, notwithstanding our deference to the venerable question of skepticism, we do not take this skeptical hypothesis seriously, in the light of the naturalistic attitude underlying the essays included in this book. We may rather propose the more moderate philosophical position of indirect realism, according to which the contents of visual experience—what we seem to see in an act or event of vision4—are, or derive from, the output of the senses (in the broadest sense of the word, see note 3). That is, they are a kind of mental event. The mental nature of perceptual contents does not undermine the realism of this view, since the causal source of the contents is the external environment, which contains objects, properties and relations. These real items are reconstructed, usually in a reliable way, by mental—and ultimately, neural—operations.

By contrast, according to direct realism, the contents of visual perception are objects, properties and relations in the real world. Direct realists hold that indirect realism has a few strongly implausible epistemological and metaphysical consequences. The metaphysical upshot consists in the reification of mental entities, that is, a new form of Cartesian dualism; the epistemological consequence is skepticism about the existence of the external world.5 More precisely, indirect realism cannot defeat the skeptical hypothesis: the non-existence of the world cannot be ruled out. In this respect, indirect realism would be on a par with ontological phenomenalism.

Direct realists claim that these upshots are intrinsic to a familiar and unfortunately persistent epistemological account of perception, which dates back to Descartes and is still alive in the sense-data theory,6 the most recent version of classical British empiricism. This account is based on the idea that external things cause internal impressions. In the empiricist philosophical tradition, sense receptors yield sense impressions, and ideas come from impressions, by association or inference (depending on the version).
As a consequence, the sensory system yields
an “idea veil”, the so-called veil of perception. What we can directly access in ordinary visual experience is not the world in itself, but a phenomenal world whose features are defined by the nature of the receptors. For instance, in the sense-data theory, objects, properties and relations are constructed out of the primary contents of visual experience, such as color hues, blobs, spots and elementary shapes. We may disagree on what the perceptual primitives actually are, that is, on what the immediate constituents of visual experience are; but regardless of this, they are mental entities.

According to Hilary Putnam, the failure of indirect realism also entails the bankruptcy of computational cognitive science, or, at least, of a certain philosophy of mind which seems to be implicit in classical cognitive science.7 Computationalism and indirect realism are indeed both committed to the thesis that what we can access in visual perception is the output of sense receptors. On the other hand, computationalists may interpret this the other way round, claiming that, if computational cognitive science actually implies indirect realism, this would be a good reason for endorsing indirect realism. Whichever way the argument is brought to bear, it presupposes the existence of a link between the computational explanation of perception and indirect realism. Do we have good reason to believe this premise to be true? This is not an easy question to answer. I will argue that, although cognitive science cannot, in principle, provide a straightforward (or… direct!) answer to this question, it offers some useful cues for playing down the question.
2. TWO PSYCHOLOGICAL ACCOUNTS OF VISION

From a quick overview of a few handbooks on the psychology of perception it is apparent that this too is an area of lively debate between supporters of direct and indirect theories.8 Constructive theories are usually regarded as “indirect”, whereas “(the theory of) direct perception” is the term used by James J. Gibson to denote his ecological optics, which is explicitly presented as an alternative to constructive theories.9

These two approaches differ in several respects. From the perspective relevant here, the crucial point is the following. According to constructivists,10 what is perceived is underdetermined by the information carried by the photoreceptors; that is, the retinal information is compatible with more than one visual interpretation. The visual system must therefore integrate these data with further information in order to determine, in a “reconstructive” way, what there is in the world. This extra information is already available to the perceiver, either being innate or (more traditionally) coming from learning. Sometimes this process of integration is regarded as a kind of unconscious inference. By contrast, according to Gibson and his followers,11 the visual system, far from reconstructing or inferring, merely extracts, picks out, the information present in the stimulation, “attuning itself” to the relevant information structures.

The underdetermination of the distal stimulus is an established fact in geometrical optics. For instance, one and the same retinal image can be projected
ALFREDO PATERNOSTER
from very different shapes. Therefore, to pursue his strategy, the ecologist must appeal to a different notion of stimulus. According to Gibson, information in the stimulus does not merely specify elementary features such as edges, contours or blobs. The information carried is about higher order invariants, that is, complex properties such as the texture gradient, optical flow, horizon ratio, which are identical to certain phenomenally perceived features. Or, at the very least, these higher order invariants co-vary with phenomenal items, according to simple laws. The stimulus can be so rich and structured because vision is not static. The agent’s eyes, as well as his head and body, move all the time. Movements generate differences, and differences carry information. The constructive conception of vision is too static, since visual processes are taken as the processing of static images on the retina. On the contrary, thanks to movement, the stimulus is not a twodimensional image. It is rather a rich pattern of light, the so-called ambient optic array, which is rich enough to specify the pattern of surfaces in the visual field. Computational vision, which is still based on the paradigm of Marr’s theory,12 can be regarded as a form of constructivism, since it is committed to the thesis that the information carried by the stimulus must be conspicuously integrated in order to produce the ordinary visual experience. What characterizes the computational approach are precisely the specific suggestions about how this integration is performed: what is to be computed, and how it is computed. According to the computational approach, a good psychological theory of vision should describe, inter alia, the algorithms which, step by step, reconstruct the properties of the distal stimulus. By contrast, the information extraction processes postulated by ecologists cannot be decomposed in psychologically more basic operations or (presumed) computations. Arguably, it is the task of the physiologist to discover how the nervous system is attuned to higher order invariants, as it is a task of biochemistry to cast further light onto the details of the neurophysiological story. But the ecological psychologist need not investigate over and above the specification of invariants: what they are and how they can be carried by the ambient optic array. The differences between the ecological theory and the computational theory—or, more generally, between ecologists and constructivists—are gradually diminishing. There is indeed a mutual “outpouring” of ideas, although Gibson’s explicit rejection of the notion of computation makes it difficult to attain a real synthesis13. Be that as it may, it is beyond the purpose of this chapter to establish a systematic confrontation between these two approaches. My aim is rather to assess whether the above-mentioned features of the two approaches provide empirical evidence to either of the philosophical positions being discussed. Can we say that the computational theory of perception is a good instance of indirect realism? When psychologists say that perception is direct (/indirect), do they mean the same thing that philosophers intend when referring to perception as “direct” (/“indirect”)? Is there a third way over and above direct and indirect realism? These are the kind of questions we shall try to answer. TP
3. PSYCHOLOGICAL AND PHILOSOPHICAL ACCOUNTS: A COMPARISON

First, it is important to point out that the issue discussed in philosophy is epistemological, whereas the psychological issue is explanatory. Philosophers argue about how perception should be regarded in order to fill a certain epistemological role: to warrant our knowledge of the external world. Psychologists deal with the problem of how visual perception works. They are not concerned with issues of justification since, in their view, nothing has to be justified: it is assumed that perception usually, though not always, works well. To be sure, these two aspects are not fully independent of each other. For instance, an assessment of perception in inferential terms could be used to support the epistemological thesis according to which perception is the source of conceptual knowledge. However, one should be cautious in deriving epistemological conclusions from explanatory claims, since there seems to be no necessary link between a given account of how perception works and one or another epistemological interpretation. In this sense, the concept of perception proposed from time to time by philosophers is to a large extent independent of the scientific accounts of its functioning. As we shall see, confusion between these two kinds of discourse is one of the reasons underlying Putnam's hasty claim according to which the computational theory of perception is a sophisticated version of the sense-data theory. As a consequence, Putnam charges cognitive science (unduly) with being unable to rule out skepticism.

Second, the terms "direct" and "indirect" are used in different senses in the two discourses. As we have seen, philosophers argue that perception is indirect insofar as the content of visual experience is a mental entity (from now on I will refer to this thesis as IPH, Indirect in the PHilosophical sense). The computational theory of perception can be said to be "indirect" in the following (PSychological) respects:

IPS1: To perceive properties and objects of the real world requires the integration (in a computational and/or inferential way) of the information in the stimulus. In other words, world properties cannot be directly picked out in the stimulus.

IPS2: To perceive requires mental or psychological operations. That is to say, the integration mentioned in IPS1 can properly be described as a collection of psychological operations.
As far as I can tell, IPS1 has in itself nothing to do with the philosophical concept of indirect perception. It is only when taken together with IPS2 that IPS1 seems to warrant (philosophical) indirect realism. The question is therefore whether IPS2 is equivalent to IPH. Is it just a matter of nuances or, instead, are the two claims substantially different?

According to computationalists, perception involves several processing stages, the outcome of which is the construction of certain visual representations. These are regarded as mental entities. In this sense, having a visual experience is tantamount to being in a relation with a mental representation. It is but a short step from this thesis to IPH, that is, to the claim that percepts are mental entities. On the other hand, indirect realism does not fit well with computational cognitive science, since this thesis concerns the nature of the conscious contents of experience. Cognitive science is not specifically concerned with conscious states. On the contrary, in most cases cognitive models concern subpersonal processes, which manipulate information whose structure does not emerge as an object of first-person awareness.14 Computational vision largely concerns those mechanisms and structures whereby we come to have a perceptual experience, whereas indirect realism concerns perceptual experience as such. In other words, indirect realism is concerned with the kind of states in which we experience, in the phenomenological sense, something. These are states in which something appears to us in a certain way. Take, for instance, the paradigm of indirect realism, the sense-data theory. Its starting point is that hallucinations and veridical perceptions cannot be phenomenally discriminated. Thus the conclusion of the argument concerns only the phenomenal structure of visual experience.

On the opposite side, the link between the ecological theory and direct realism is more apparent. We could even regard Gibson as having tried to outline the philosophical position usually labeled "direct realism", giving it a solid empirical basis. Indeed, since 1966 (and even before) he has presented his theory as an alternative to the philosophical doctrine of indirect realism, to which most psychological theories of perception are committed.15 And he includes computational vision among the psychological theories affected by the philosophical prejudice whereby to perceive involves the construction of mental entities. Therefore, when Putnam regards the computational theory as a new, (pseudo?)scientific version of the sense-data theory, he makes the very same point as Gibson. Both claim that computationalism, insofar as it is committed to the notion of mental representation, re-introduces the veil of perception, entailing an unacceptable mind-body dualism.16

We would thus expect the rejection of mental representations in the ecological theory to be indisputable, i.e., that the Gibsonian notion of "direct", far from being the mere denial of the opposite claim, makes it thoroughly perspicuous that the ecological theory is not committed to any kind of mentalism. However, brief reflection shows that there are a few objections to the Gibsonian use of "direct". First of all, invariants are not environmental properties; they are properties of the (proximal) stimulus, at least in the sense that they cannot be defined independently of the perceiver.17 Indeed, we perceive environmental properties by perceiving invariants. How, then, can it be claimed that we perceive the external world directly? The ecologist's answer is that there is a nomological relation between environmental properties and invariants: invariants are neither mental entities nor the outcome of mental manipulations, in the sense that they are not signs or symbols of environmental properties.18 But then it is easy to reply that the computational theory can also be described, after all, as a theory of direct perception: the kind of processing performed on the proximal stimulus—which nomologically co-varies with environmental properties—is not mental, meaning that it is not semantic or intentional, at least if we confine ourselves to the domain of early vision (which corresponds, more or less, to the two lower processing stages of Marr's theory). Early vision representations are neither signs nor symbols. Gibson notoriously denies that perception involves computations, but this is not relevant here. The point is rather whether, in the light of Gibson's criteria, visual representations are signs. Well, they are not. Representations causally co-vary with environmental properties; being the result of certain computations does not make them more symbolic than invariants. Whether or not it is appropriate to say that representations are intentional is debatable; what is important is that, once intentional talk is adopted, invariants turn out to be intentional as well.

To sum up, anyone who wants to endorse the ecological theory as a form of direct realism faces the following dilemma: either the ecological theory is also a theory of indirect perception, insofar as invariants are kinds of mental entities, or we lack a clear sense in which the ecological theory can be said to be "direct" (and computational vision can be said to be "indirect").

There may be another, more positive way of characterizing Gibson's version of direct realism, a way that makes the criticism of the computational theory as a form of indirect realism more perspicuous. The idea is the following. Perceiving is something an animal does; that is, it is a kind of behavior. And behavior is a property of the whole agent. Therefore, to explain perception is to give an account of the relation between an agent and his environment, rather than to describe the workings of a neural system at the mental-functional level, as computationalists are inclined to think. Noë and Thompson found quite an effective way of expressing this point, which they take to be crucial in Gibson:

Perception […] is not an occurrence that takes place in the brain of the perceiver, but rather is an act of the whole animal, the act of perceptually guided exploration of the environment. One misdescribes vision if one thinks of it as a subpersonal process whereby the brain builds up an internal model of the environment on the basis of impoverished sensory images. Such a conception of vision is pitched at the wrong level, namely, that of the internal enabling conditions for vision […].19
In short, vision is a function of the whole organism; it is what allows the organism to navigate and act successfully in its environment. As this quotation from Noë and Thompson shows,20 this is a different kind of explanation: while according to the ecological view visual processes are processes of the whole agent, the computational approach has an analytical, decomposition-oriented character, since computationalists search for what in the head, or in the brain, makes the visual experience of the whole subject possible. We could say, following John McDowell,21 that Gibson's theory is a phenomenological theory of visual perception, in a dual sense. On the one hand, it is a personal-level explanation; on the other, it is a theory which accounts for the sensorial (conscious) experience of an agent.

However, things are actually more complicated than this. The notion of direct information extraction involves invariants, and at least some of these—e.g., the optical flow—are relevant to processing levels typical of what computationalists call "early vision". These are parameters that have nothing to do with the conscious experience of an organism. Anyhow, even without wishing to subscribe to the phenomenological interpretation of the ecological theory, it is true that it is a personal-level theory, inasmuch as it is essentially a macrostructural description of perception, that is, a theory of how a whole agent sees. This description postulates some psychologically primitive processes, such as the extraction of invariants, whose elucidation is a task for neurophysiology. By contrast, the computational theorist thinks these "black boxes" should be described in a psychological vocabulary, and, specifically, in terms of algorithms. The computational theory thus provides us with a microstructural explanation of perception, which is still psychological. It is exactly in this sense that the ecological theory repudiates mentalism: it does not regard the operations performed by the visual system as mental operations. However, this still does not prove that the kind of mentalism implicit in computationalism amounts to straightforwardly upholding indirect realism, characterized as the idea that there are mental entities "interposed" between agents and the world. Just insofar as it is subpersonal and non-phenomenological, computational vision is not located at the same level as standard (philosophical) indirect realism.

4. A SOMEWHAT DEFLATIONARY APPROACH

Let us draw some tentative conclusions. First of all, our discussion of psychological theories suggests that direct realism and indirect realism can both be said to be (roughly) true, in two distinct domains of interpretation, that is, with respect to two different levels of description. More specifically, there is a sense of "indirect" such that it is not inappropriate to qualify visual perception as "indirect", at least if one endorses (as I am inclined to) the thesis according to which the study of the operations performed by the visual system is relevant to psychology. Indeed, since there is robust evidence for the thesis that vision involves a collection of computational/representational stages, visual perception turns out to be indirect at the subpersonal level of description (see, supra, IPS1 and IPS2). At the personal level, where perception is described as a relation between the whole agent and the environment, it seems more appropriate to refer to perception as "direct". The truth of direct realism comes from the powerful intuition that, in a perceptual act or event, the world is simply given to us, rather than represented (= re-presented): I, as the subject of visual experience, am not related to the process which makes the experience possible; I am related to the causal source of the experience. There is no more reason to doubt that what I am seeing is a real cat than there would be reason to doubt that I am holding a real cat. In this sense, perception should not be considered an interface between us and the world. It is rather the part of our body—the functional subsystem—that allows us to orient ourselves successfully in the environment.
This characterization, however, clearly concerns the issue of what perception is for us, that is, what perception is when regarded as a phenomenological event. It has nothing to do with the microstructure of vision, i.e., the problem of how the visual subsystem works. The microstructure of visual perception clearly shows that perception is indirect at the subpersonal level. But this sense of "indirect"—let us call it the psychological sense—does not correspond to the standard philosophical sense, because the representations hypothesized by computationalists are not things one can see. They are not the content of visual experience; they are, rather, the structures in virtue of which visual experience is possible. Therefore Putnam's assault on computationalism appears misplaced, since he misses the distinction between the phenomenological (macrostructural) level and the microstructural level. "Indirect" in the psychological sense does not imply that percepts are mental entities; it only entails that percepts depend on mental operations, and this is precisely what is meant by the claim that percepts are "constructed". Likewise, "direct" in the psychological sense does not entail that there are no mental operations at all. To describe perception as direct amounts (a) to denying the presence of cognitive operations, i.e., conceptual mediators or thought processes; and (b) to minimizing or even completely ruling out the psychological significance of these operations. Indeed, since they are identified with processes of invariant extraction, these operations should be considered psychologically primitive. We could thus say that empirical psychology—considered as a whole—plays down the relevance of the opposition between direct and indirect. The two qualifications correspond to two different points of view, which can coexist peacefully.

Consider the following analogy. Take a file transfer process, implemented by a pair of FTP programs. One program is running on your client PC, the other on a remote server. Could we say that the two programs are in direct contact? Well, yes and no. In a sense, it is correct to say that the two programs communicate directly, because the requests specified by one of them are interpreted and executed by the other. However, there is a great deal of software (not to mention the several physical channels) interfacing the two programs. A request generated by the FTP client is first transmitted to another local program, then to another, and so on, until it is received by a remote program and eventually by the FTP server, the end point. There are many layers of processing, all required to allow communication. Put more simply, we could say that there are two kinds of communication: logical communication, between the two FTP programs, and physical communication, between any pair of adjacent devices (either software or hardware) in the transmission chain (a minimal sketch of this layering is given at the end of this section). Likewise, a perceptual act could be considered a logical communication between a subject and a piece of the real world; but, in order to make perception possible, many processing stages are required and, in particular, some representational structures must be constructed.

From this analogy it is evident that the indirect view, re-interpreted as proposed, does not involve the philosophical bankruptcy (dualism, skepticism, impossibility of reference) of which Putnam complains. In fact my view is not committed to the thesis that representations are interfaces, or, more specifically, that they are something which separates agents from reality. On the contrary, representations (and processing) are what make the logical communication possible. I admit, however, that logical communication is somehow prior, in accord with the direct realist intuition. In this way both the "indirect" intuition, according to which perceptual content depends on mental operations, and the "direct" demand that we are simply in touch with the world are accommodated. With regard to the charge of reifying mental entities, the answer is that mental representations are physical entities individuated at the functional level; mental representations do not constitute an independent ontological realm over and above the physical realm.

However, there is an important limit to the analogy. FTP programs are the end-point "boxes" in the transmission chain, whereas agents are not boxes; they are rather the wholes in which the boxes are instantiated. Moreover, the relation between a person and her computational (or, for that matter, physical) subsystems is not a standard part/whole relation. Therefore the analogy accounts for the multilayered structure of perception, but not for the nature of the relation between an agent and the content of her experience, which remains to a certain extent a mystery. Indeed, computationalists are faced with the problem of explaining exactly what kind of relation there is between us, the phenomenal subjects, and the representations built by perceptual subsystems. According to a standard account, the content of my visual experience supervenes on a computational-representational structure. This means that the instantiation of a representation (e.g., a Marrian 2½-D sketch of a cat) necessarily yields the occurrence of an experience with a given content (e.g., my seeing a cat). But this statement leaves two questions open. First, there are good reasons to doubt that there is a systematic matching between common-sense mental properties and computational properties, since the respective individuation criteria are very different.22 Second, why is a perceptual content experienced so-and-so when a given computational-representational structure is instantiated? This, as is well known, is the problem of the alleged explanatory gap between physical-functional facts and phenomenal consciousness.23 I cannot deal with these problems here. Rather, I wish to point out that, even if one supposes that perceptual events (= token states, such as my seeing a cat at t1) are identical to computational events—which I admit merely for the sake of argument—this does not mean either that the mental is "reified" or that we (as persons) are in touch with mental entities. Computational states are classes of neurophysiological events and, of course, we are not in touch with our brain events, at least not in a standard sense. The above-mentioned difficulties remain, but they do not specifically concern perception, and they have nothing to do with dualism or skepticism.
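The layered-communication picture invoked in the analogy can be sketched in a few lines of code. The sketch below is my own illustration, not Paternoster's; the layer names are invented, and it is meant only to show how a single request that is "logically" addressed by one endpoint program to the other is in fact relayed through a chain of intermediate links.

# A minimal sketch of the logical/physical distinction in the FTP analogy
# (illustrative only; the layer names are invented).
class Layer:
    """One link (software or hardware) in the transmission chain."""
    def __init__(self, name, below=None):
        self.name = name
        self.below = below

    def send(self, payload):
        # Physical communication: each link talks only to the adjacent one.
        print(f"{self.name}: relaying {payload!r}")
        if self.below is not None:
            self.below.send(payload)

# The chain interposed between the two endpoint programs.
chain = Layer("ftp-client",
              Layer("local transport",
                    Layer("physical channel",
                          Layer("remote transport",
                                Layer("ftp-server")))))

# Logical communication: the client simply issues a request that the server
# will interpret and execute...
chain.send("RETR file.txt")
# ...while, physically, the request traverses every intermediate layer.

On the analogy, the intermediate layers play the role of the subpersonal representational stages: they are what make the endpoint-level exchange possible, without themselves being what the exchange is about.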
5. CONCLUSIONS

Vision science offers several elements which allow us to solve, or at least to see in a different light, some traditional philosophical problems, such as the controversy between direct and indirect realism. However, although computational and ecological theories aim to be empirical accounts of visual perception, both embody some assumptions that we may well qualify as "philosophical", about, e.g., how the proximal stimulus should be individuated, or what exactly should be included in vision. It is hard to see, at the moment, how empirical psychology might rid itself entirely of these assumptions, which are still a priori to a certain extent. This explains why a few philosophical "cramps" are still to be found, and why the scientific study of vision cannot set aside some assumptions that are hardly justifiable on a purely empirical basis. Moreover, we are far from understanding the relation between the content of visual experiences and the representational structures postulated by computational psychology. Until we are able to cast light on this problem, accommodating or explaining away philosophical intuitions within a scientific framework, the theoretical status of (computational) cognitive science will remain shaky to a certain extent.

Our analysis of vision has highlighted a persistent interweaving between the empirical and the a priori—between science and philosophy. I think this should not be considered evidence that vision science has been unsuccessful, or that it has not advanced much since the beginnings of psychological research. On the contrary, the interaction between philosophy and science turns out to be cognitively fruitful; the point is rather that there are problems which are, by their nature, intrinsically hard to cope with in a purely empirical way.

NOTES
1. Palmer (1999, pp. xvii-xviii).
2. See Smith (2002).
3. By "sense receptors" I mean not only the sensory organs, but also the overall set of neural subsystems dedicated to processing sensorial information, as far as it is possible to single these out (it is notoriously difficult to define accurate boundaries between sensation and perception). It is worth noting that there is a great temptation to say that our access to the world is "mediated by sense receptors", an expression which seems to suggest explicitly that perception is indirect.
4. From now on I shall use the "content" of a perception or visual experience, instead of its "object", in order to make it clear that I am talking about what we seem to see, without begging the question whether what we seem to see is what there actually is, e.g., the question of the nature of the "things" we see.
5. Arguments of this kind have recently been offered by McDowell (1994) and Putnam (1999).
6. By, e.g., Russell (1912, chapter 1; 1918, chapter 8) and Ayer (1940).
7. Putnam (1999).
8. See, e.g., Bruce, Green and Georgeson (1996); Palmer (1999); Rock (1983).
9. Gibson (1972).
10. The constructive conception, which dates back to Helmholtz, includes Richard Gregory (1970, 1980) and the late Irvin Rock (1983, 1997) among its most recent and influential scholars.
11. Gibson (1979); Michaels and Carello (1981); Cutting (1986).
12. Marr (1982). Since then, many studies have been conducted in the wake of the computational paradigm, and some of Marr's hypotheses have been disconfirmed. Nevertheless, the general guidelines of the paradigm are still those defined by Marr.
13. Neisser (1976) endorses the experimental approach of ecologism, yet holds the constructivist notion of an anticipatory schema. Ballard (1991, 1996), who proposed computational models clearly inspired by Gibson's insights, outlines a sensorimotor paradigm the central idea of which is that perception cannot be considered apart from action. Norman (2001) suggests (on the basis of neuropsychological evidence) that the two approaches account for two distinct perceptual functions, so that both are required to provide a full explanation.
14. Therefore, there is no scientific correlate of the notion of a sense datum.
15. "I argue that the seeing of an environment by an observer […] is direct in that it is not mediated by visual sensations or sense data" (Gibson 1972, p. 77).
16. Gibson (1972); Putnam (1999, pp. 101-102, 169-170).
17. Arguably, the Gibsonian sense of "environment" is subject-related: the world is one thing, the environment another. We might even say that there is no room in Gibson's account for the notion of the real world. But in this case his view could hardly be regarded as a form of standard philosophical (direct) realism.
18. See Schwartz (1994, pp. 144 ff.).
19. Noë and Thompson (2002, p. 3).
20. See also O'Regan and Noë (2001).
21. McDowell (1994).
22. As Dennett puts it: "The actual internal states that cause behavior will not be functionally individuated […] the way belief/desire psychology carves things up" ([1981] 1987, p. 71). It seems that one can subscribe to supervenience only at the price of a strong, and perhaps too strong, idealization.
23. See, e.g., Chalmers (1996).
CHAPTER 5

SYNAESTHESIA, FUNCTIONALISM AND PHENOMENOLOGY

Fiona Macpherson

"Synaesthesia" is most often characterised as a union or mixing of the senses.1 Richard Cytowic describes it thus: "It denotes the rare capacity to hear colours, taste shapes or experience other equally startling sensory blendings whose quality seems difficult for most of us to imagine".2 One famous example is of a man who "tasted shapes": when he experienced flavours he also experienced shapes rubbing against his face or hands.3 Such popular characterisations are rough and ready. What is certainly true about synaesthesia is that it involves an interaction between sensory phenomena: in response to certain stimuli, some sensory phenomena are elicited in synaesthetes that are not elicited in non-synaesthetes. However, the exact nature of the additional sensory phenomena forms a large part of the debate on the nature of synaesthesia.

Synaesthesia is a condition that has been known about for some time. In the late nineteenth and early twentieth centuries a great many articles appeared on the topic in the psychological literature.4 Much of this work relied on the introspective reports of subjects. In consequence, when later in the twentieth century psychologists eschewed introspective reports and radical behaviourist methodology became the order of the day, synaesthesia was rarely a topic of research. In more recent times, however, psychology has once again changed tack. With the advent of cognitive psychology and of objective techniques that try to probe the nature of the conscious states reported in introspection, psychological interest in synaesthesia has resumed, and many new findings about the subject have recently been brought to light.

In philosophy, interest in synaesthesia is only just beginning to arise. The phenomenon is potentially philosophically interesting for several reasons. One reason is that evidence about cross-modal phenomena may influence answers to questions that philosophers ask about how to individuate the senses, about the relationships between the senses, and about what the detailed characterisation of experiences in the different modalities should be. Another is that the sensory systems, such as vision and audition, are usually taken to be our paradigm examples of modular cognitive systems. Roughly speaking, modular systems are ones that cannot be rationally influenced by beliefs or other high-level cognitive states, or even influenced by other parts of the perceptual system.5 Recently, philosophers and psychologists have debated whether synaesthesia consists in a breakdown in modularity or whether synaesthetes have an additional perceptual module compared with non-synaesthetes.6

Psychologists are also investigating the nature of the synaesthetic experience. This is an interesting topic in itself, but the investigation also gives rise to philosophical interest. The question of whether, or to what extent, the nature of conscious states can be determined by empirical means is one that philosophers have long debated. Psychological methodology and suppositions should be scrutinised by philosophers, who have long dealt with theoretical questions of this nature. At the same time, philosophers may gain new insights from the techniques that psychologists have applied to studying synaesthesia. Lastly, new psychological phenomena can often provide evidence that philosophical theories of the mind ought to accommodate. If they cannot accommodate it, then the phenomena constitute counterexamples to those theories, and the theories ought to be modified or abandoned. It has been claimed that synaesthesia constitutes a counterexample to functionalism; thus, philosophers ought to investigate this claim.7

These last two reasons why synaesthesia is of interest to philosophers are the ones that will be discussed in this chapter, the structure of which will be as follows. First, I will examine an influential definition of the phenomenon and suggest a better one that takes into account recent findings. I will then describe what functionalist theories of the mind are. Following this, I explain in detail the argument that synaesthesia provides a counterexample to functionalism. I go on to question the argument on the grounds that there are versions of functionalism that are not challenged by the counterexample, and I elucidate these types of functionalism. In addition, I claim that, if the argument is to work, it needs to be established that the synaesthetic experience can be identical to some non-synaesthetic perceptual experience. I look at the evidence for this claim and suggest that further work needs to be done to establish it.
1. THE NATURE OF SYNAESTHESIA

Harrison and Baron-Cohen offer a definition of synaesthesia. They claim that it occurs "when stimulation of one sensory modality automatically triggers a perception in a second modality, in the absence of any direct stimulation to this second modality".8 The most common form of synaesthesia is "coloured hearing", where certain sounds or spoken words trigger visual experiences of colour.9 However, many different forms of synaesthesia have been reported, and it has been suggested that synaesthesia can occur between experiences in any two sensory modalities.10 (From now on, I will call the triggered experience the "synaesthetic experience".) Note that, unlike some popular characterisations of examples of synaesthesia (as "hearing colours" or "tasting shapes"), the above definition does not suggest that a property normally experienced only in one modality is experienced as either being in a different modality or as being a property of some object or feature normally detected only by a different modality. It does not suggest, for example, that in "coloured hearing" colours are experienced to be properties of sounds. This is appropriate, as there is no good evidence to back up this popular characterisation, as will be shown below.11

This common characterisation of synaesthesia, however, needs correction or supplementary comment in at least five respects. First, experiences in some sensory modalities can trigger synaesthetic experiences that are not in any of the traditional five sensory modalities (vision, audition, touch, taste and smell). The only cases of this reported are where the synaesthetic experiences are experiences of movement and bodily postures.12 In light of this, one might wonder whether synaesthetic experience has to be confined to sensory experience. However, although the issue of what it is that makes some bodily process a sensory process is a complicated one, and one on which there is little agreement in the literature, it is commonly accepted that there are more than the traditional five sensory modalities.13 A sense of balance and a sense of the position and movement of one's body are obvious extensions to the traditional five modalities. Therefore, I would argue that the few synaesthetic experiences that have been reported that are not within the traditional five modalities are nonetheless experiences that lie within a sensory modality.

The second point is more important: the characterisation of synaesthesia is inaccurate in a key respect. It should not insist that synaesthesia must always be an inter-modal phenomenon. It has recently been reported that an experience in one modality can cause an additional experience or element of experience in the same modality.14 In such intra-modal cases, subjects report that a visually experienced grapheme elicits an additional experience of colour.15 For example, subjects may claim to experience different colours when they look at each of the letters of the alphabet, even though all are printed in black ink. These cases of grapheme-colour linkage are clearly treated as cases of synaesthesia in the literature. Indeed, much of the recent important experimental work on synaesthesia concerns such cases, as will become apparent below.

The third point to make about the definition is that it fails to take account of what we now know of the nature of the stimulus required to induce a synaesthetic experience. Many synaesthetes report that no physical stimulus is required to induce the synaesthetic experience; they only need to think of the synaesthetic stimulus in order for the synaesthetic effect to occur.16 This has been backed up experimentally by Dixon et al., who tested their subject, C, a grapheme-colour synaesthete, of whom it is claimed that "activating the concept of a digit by a mental calculation was sufficient to induce a colour experience".17 Dixon et al. carried out a variant of the Stroop test on their subject.18 The subject was presented with two digits separated by an arithmetical operator. To the right of the digits was a colour patch. The answer to the arithmetical problem was not given and thus required mental calculation. The subject was asked to name the colour of the colour patch. The subject took longer to name it when the patch's colour was incongruent with the colour synaesthetically experienced in response to the solution to the arithmetical problem than when the patch's colour was congruent. This suggests that a synaesthetically induced colour experience interfered with the colour naming in the incongruent conditions.19
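The logic of this design can be made concrete with a small simulation. The sketch below is my own illustration of the congruent/incongruent contrast, not Dixon et al.'s procedure: the digit-colour pairings, the timing parameters, and the size of the interference effect are all invented, and only the predicted direction of the difference matters.

# Schematic simulation of the Stroop-variant logic described above
# (illustrative only; pairings and latencies are invented).
import random
from statistics import mean

# Hypothetical digit-to-colour pairings for a grapheme-colour synaesthete.
SYNAESTHETIC_COLOUR = {2: "red", 4: "blue", 5: "green", 7: "yellow"}
COLOURS = list(SYNAESTHETIC_COLOUR.values())

def naming_latency(solution, patch_colour):
    """Simulated time (ms) to name the colour patch on one trial."""
    evoked = SYNAESTHETIC_COLOUR[solution]  # colour evoked by the solution
    base = random.gauss(600, 40)            # baseline naming time
    # Interference arises only when the evoked colour clashes with the patch.
    return base if patch_colour == evoked else base + random.gauss(90, 20)

random.seed(0)
congruent, incongruent = [], []
for _ in range(200):
    solution = random.choice(list(SYNAESTHETIC_COLOUR))  # e.g. "4 + 1" -> 5
    evoked = SYNAESTHETIC_COLOUR[solution]
    other = random.choice([c for c in COLOURS if c != evoked])
    congruent.append(naming_latency(solution, evoked))
    incongruent.append(naming_latency(solution, other))

print(f"mean RT, congruent patches:   {mean(congruent):.0f} ms")
print(f"mean RT, incongruent patches: {mean(incongruent):.0f} ms")

On this toy model, incongruent trials come out roughly 90 ms slower on average, which is the qualitative signature the experiment looks for.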
Despite the strong evidence that some synaesthetes' synaesthetic experience is elicited not only in response to a sensory stimulus but also in response to merely imagining or thinking of that stimulus, there are some synaesthetes whose synaesthesia cannot be elicited by imagination or thought alone. In response to this finding, Ramachandran and Hubbard hypothesize that there are two distinct groups of synaesthetes: higher and lower.20 Higher synaesthetes have synaesthetic experiences in response to stimulation of the senses and also in response to imagination or thought of these stimuli. Lower synaesthetes' synaesthesia is triggered only by the former and not the latter. In addition, in lower grapheme-colour synaesthetes, only a specific type of grapheme elicits the synaesthetic experience; for example, the Arabic numeral "5" but not the Roman numeral "V" may be the synaesthetic trigger. In contrast, in higher synaesthetes, it is frequently the case that an Arabic "5", a Roman numeral "V", and even just an appropriate number of grouped dots corresponding to the number five will elicit (the same) synaesthetic experience.

Related to the first and third considerations, there is recent evidence suggesting that in some cases of synaesthesia the relevant synaesthetic stimulus is an emotional response of the subject. Ward reports cases of synaesthesia where only emotionally eliciting stimuli, such as familiar people, the names of familiar people, and other words that have been objectively noted to typically produce emotional responses in people, induce synaesthetic colour experiences.21 Interestingly, a synaesthetic colour experience can come to be elicited in response to a person, when previously no such experience was elicited, as the person becomes more familiar to the subject. Another noteworthy fact is that the colour represented in the synaesthetic experience appears to depend on the emotion that the subject feels. As will be discussed in more detail below, synaesthetic connections appear to be constant throughout a person's life. These facts support the supposition that it is the emotion that is the synaesthetic trigger: the same emotions always evoke the same synaesthetic response, and stimuli that invoke emotions, such as people, will evoke different responses when they provoke different emotions. These cases suggest that the relevant stimulus is not primarily the stimulation of a sensory modality but, rather, the stimulation of the emotional (often called affective) system.22

The last point I will make about the above definition of synaesthesia is that it is not precise about the nature of the synaesthetic "trigger" or cause. The synaesthetic trigger was said to consist in "stimulation of one sensory modality". But what exactly does such stimulation amount to? The definition is silent on this issue. We have already seen that the mere imagining of a stimulus that typically invokes synaesthesia can, in some synaesthetes, trigger a synaesthetic experience. However, aside from this special case, what is known of the cause of the synaesthetic experience? Two options need to be contrasted in the first instance. The first option is that a conscious perceptual experience causes the synaesthetic experience. A second, and more demanding, option is that a conscious perceptual experience is required in order to have a synaesthetic experience but, in addition, the subject has to recognise what it is that their experience is of. In other words, both an experience of something plus recognition of what it is that is being experienced is required to cause the synaesthetic experience.

There is some apparently contradictory evidence concerning which of these two options is correct. On the one hand, experiments in which a letter was briefly shown to a grapheme-colour subject, but was masked by the presentation of another stimulus to prevent conscious recognition of the letter, yielded the result that the masked stimulus did not interfere with naming target colours in the way that would be expected if the masked stimulus had invoked a synaesthetic colour experience.23 On the other hand, the perceptual "pop-out" experiments of Ramachandran and Hubbard, explained in detail below, suggest the opposite, as do their "crowding" experiments.24 The latter experiments draw on the fact that a letter, when presented at the periphery of the visual field, is easily identified; however, when the letter is similarly presented, save for the fact that it is surrounded by other letters, it cannot be identified—it is "crowded". Nonetheless, such crowded letters still elicit synaesthetic colour experiences in grapheme-colour synaesthetes, and, indeed, the colour experiences can be used by the synaesthetes to identify what the letter must be. This apparent contradiction can be resolved if the distinction postulated by Ramachandran and Hubbard (2001b), mentioned above, is correct. They claim that higher synaesthetes, in whom the very idea of the stimulus provokes a synaesthetic experience, may need to consciously identify a stimulus before it gives rise to a synaesthetic experience; this may not be true of lower synaesthetes, in whom imagining or thinking of the stimulus does not induce a synaesthetic experience. Thus, higher synaesthetes may require recognition, while lower synaesthetes do not. Certainly, experimenters should be aware of the possibility of different types of synaesthesia and should be mindful of this when designing experiments to test the nature of synaesthesia.

A third option, which stands in contrast to each of the above, ought to be mentioned. It might be that, in the absence of a perceptual experience (or a perceptual experience together with the appropriate recognition of what is experienced), mere stimulation of some of the physical structures of the body (sensory organs, nerves or brain—those which are normally stimulated prior to one undergoing the non-synaesthetic effects) could cause the synaesthetic experience. The thought would be that in normal cases of synaesthesia it is not the mental non-synaesthetic effects that cause the synaesthetic experience; rather, both the synaesthetic experience and the mental non-synaesthetic effects have a common cause that consists in purely physical stimulation of the sensory organs, nerves or brain. In my opinion, there is no good evidence for or against this third option. There are no studies that consider whether the physical activity in the central nervous system that normally precedes the non-synaesthetic perceptual experience (or that experience together with appropriate recognition) could, if prevented from causing the mental non-synaesthetic effects, elicit a synaesthetic experience. This is clearly one area where psychologists could investigate synaesthesia further experimentally.
It might be thought that the evidence in favour of either of the first two options above tells against this third option. However, this would be incorrect. The evidence for and against options one and two merely constitutes such evidence on the assumption that one or other of those options must be true. It does not address what would happen if one were able to interfere with the causal chain that normally leads to synaesthetic and non-synaesthetic perceptual experience (or to that experience plus appropriate recognition) by intervening at the last point in the causal chain where it is possible to prevent the mental non-synaesthetic effects from taking place. Thus, if this third option turned out to be the correct one, there would still be a question as to whether the mere physical activity in question was the normal precursor to the non-synaesthetic experience alone or the normal precursor to the non-synaesthetic experience together with the appropriate recognition of what seemed to be experienced.

All these results should be taken into account when trying to define synaesthesia, and I suggest that the best definition, in light of the above, is as follows. Synaesthesia is a condition in which either:

(i) an experience in one sensory modality, or
(ii) an experience not in a sensory modality, such as an experience of emotion, or
(iii) an imagining or thought of what is so experienced, or
(iv) a mental state outlined in (i)-(iii), together with recognition of what the mental state represents,

is either a sufficient automatic cause of, or has a common sufficient automatic cause (lying within the central nervous system of the subject) with, an experience or element of experience that is associated with some sensory modality and is distinct from (i). This synaesthetic experience or element of experience can be associated with the same sensory modality as, or a different one from, that ordinarily associated with the mental state in (i)-(iv).

The reason for claiming that the causes in the definition are sufficient causes is to rule out cases of cross-modal illusion counting as cases of synaesthesia. One nice example of such an illusion is the McGurk effect.25 A subject is repeatedly exposed to the same sound, such as "ba". However, what the subject experiences depends on the lip movements they observe, which appear to be producing the sound. Observation of some lip movements, such as those made when saying "ba" (and observation of no lip movements at all), will result in the subject reporting a "ba" sound. Observation of other lip movements, such as those made when saying "ga", will lead to reports of a "da" sound. In this case, the experience of the "da" sound is not caused by seeing the lip movement alone. It is also caused by the auditory system processing the "ba" sound. The visual experience of the lip movement is therefore not a sufficient cause of the auditory experience. Thus, this case is not a case of synaesthesia. I believe that the distinction between cases of non-synaesthetic cross-modal illusion and cases of synaesthesia proper depends on the distinction between illusion (where we see something but misperceive it in one or more ways) and hallucination (where we see nothing and merely have an experience as if something were before us). To the extent that the distinction between illusion and hallucination is not sharp, neither will be the distinction between cross-modal illusion and synaesthesia.

A final point needs to be made about the above definition. The synaesthetic experience was said to be "an experience or element of experience that is associated with some sensory modality". The definition is not more specific about the nature of the synaesthetic experience because there is a great deal of uncertainty about its nature. As we will see below, there is some evidence that the experience is most like perceptual experience and some that it is most like imaginative experience.

2. THE CHALLENGE TO FUNCTIONALISM

Functionalism is a theory in the philosophy of mind. At a first approximation, it says that what makes a state a mental state, and makes it the type of mental state that it is, is its causal role. The causal role of a mental state comprises the causes and effects of the state, and these may include both physical and mental states or properties. Functionalists disagree about what the correct level of specification of the causal role should be. Candidates include the level of folk psychology, scientific psychology or neuroscience. The causal roles may be thought of either as wide (that is, as extending outside the body and mentioning objects and properties in the environment of the subject) or as narrow (that is, extending only to the surface of the body or to some privileged part of the body such as the central nervous system). Some functionalists identify mental states with those states that play the causal role in question, whereas others claim that mental states are higher-order states: to be in a mental state is to be in the state of having that causal role occupied by some state. For my purposes these differences will not be relevant.

An important and relevant distinction can be drawn between what I will call "strong" and "weak" functionalism. Weak functionalism claims that if two mental states are of different types then they will have different functional roles. Strong functionalism claims that if two mental states are of different types then they will have different functional roles, and that if two mental states have different functional roles then they will be of different types. Mental states are of different types in virtue of a number of features. They are of different types if they are of different general kinds, such as beliefs, desires, experiences, emotions, etc. They are also of different types if they have different contents, that is, if they represent the world as being different ways. Thus, the belief that a cat is on the mat is a different belief from the belief that a dog is on the lawn. Mental states are also of different types if they have different phenomenal character.26 (This certainly seems true in the case of states such as experiences and sensations, which are relevant to this discussion.) There may be other features that distinguish types of states, but those need not concern us here.

Jeffrey Gray et al. have argued in a series of papers that synaesthesia provides a counterexample to functionalism.27 It is clear that the functionalism they have in mind is strong functionalism.28 The counterexample is one where the same type of mental state has different functional roles. Consider a sound-colour synaesthete who has a colour experience when they hear a certain sound. Suppose that the colour experience is of the same type that they would have when they look at a patch of red. Call this type of experience an experience of redness. According to Gray et al., experiences of redness have two different functional roles in the synaesthete. When the experience is had synaesthetically, it is caused by a sound and by stimulation of the auditory system. When the experience is had non-synaesthetically it will, presumably, be caused by looking at a patch of redness and by stimulation of the retina and the other early parts of the visual system. Thus, the two experiences are of the same type but they have different functional roles, and this contradicts the claim of strong functionalism that if two mental states have different functional roles then they will be of different types.
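The logical shape of the two theses, and of the alleged counterexample, can be set out schematically (the formalization is mine, not Gray et al.'s; $\mathrm{type}(s)$ and $\mathrm{role}(s)$ stand for the type and the functional role of a mental state $s$):

$$\text{Weak functionalism:} \quad \mathrm{type}(s_1) \neq \mathrm{type}(s_2) \;\Rightarrow\; \mathrm{role}(s_1) \neq \mathrm{role}(s_2)$$
$$\text{Strong functionalism:} \quad \mathrm{type}(s_1) \neq \mathrm{type}(s_2) \;\Leftrightarrow\; \mathrm{role}(s_1) \neq \mathrm{role}(s_2)$$

Gray et al.'s putative counterexample offers a pair of states with $\mathrm{type}(s_1) = \mathrm{type}(s_2)$ (both are experiences of redness) but $\mathrm{role}(s_1) \neq \mathrm{role}(s_2)$, which contradicts the right-to-left direction of the strong thesis while leaving the weak thesis untouched.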
3. DOES THE CHALLENGE SUCCEED WITH REGARD TO THE CORRECT UNDERSTANDING OF FUNCTIONALISM?

Does Gray's argument succeed? In this section I argue that sophisticated versions of functionalism are not threatened by Gray's potential counterexample. Weak functionalism clearly escapes the potential counterexample. This fact will not undermine Gray's argument, however, because, as we have seen, his target is strong functionalism. Nevertheless, one ought to note that weak functionalism is a position that functionalists could hold, and there seems to be no significant theoretical advantage gained from holding strong functionalism.29 Even so, I think that there are at least two versions of strong functionalism that are not threatened by the alleged counterexample.

The first version is strong functionalism limited to a privileged functional role. The motive for such a position would be that if mental states are specified by their total actual functional role then almost no mental states would ever count as being of the same kind. For example, suppose that I have a visual experience of black and white stripes. On one occasion, this might cause me to think about mint humbugs. On another, the same type of experience might cause me to think of the St. Mirren football team. The point is that the very same type of mental state can sometimes have different causes and effects. Similarly, because of the interconnectedness of mental states, it is very probable that, unless two people share all the same mental states, a mental state that we would think they have in common will, as a matter of fact, have different causal interactions. For example, the belief that the weather will be good tomorrow might cause me to believe that the fireworks will go ahead and you to believe that the barbeque will go ahead. (We might know of different events taking place the next day.) Therefore, it seems that a certain part of the functional role of a mental state needs to be privileged as the core role that any two tokens of the same type of that mental state must share. In relation to the prima facie counterexample above, what such a functionalist could do is hold that the core functional role of an experience of redness is the part common to the synaesthetic and non-synaesthetic experience. For example, this might be that such a state is caused by activity in area V4 of the cortex and gives rise to the belief that an experience of redness is being had. It could be argued that experiences of redness must have that functional role and that any state that has that functional role is an experience of redness.30

The second version of strong functionalism that is not threatened is strong normative functionalism. A significant feature of the definitions of weak and strong functionalism above is that they contain no normative element. However, many versions of functionalism do contain such an element.31 A normative functionalism would claim that what makes something a mental state, and the mental state that it is, is its typical causal role or the causal role that the state has in optimal conditions. A strong version of normative functionalism would say: in normal or optimal circumstances, if two mental states are of different types then they have different functional roles, and if two mental states have different functional roles then they are different mental states. This version of functionalism is not threatened by the counterexample because one could claim that, in the case of synaesthesia, conditions are not normal or optimal: synaesthesia is a case of malfunctioning. One could claim that, while the non-synaesthetic experience of redness plays a certain functional role, that type of experience in non-normal conditions can have the functional role of the synaesthetic experience of red. If one thought that mental states are to be identified with physical states that typically play a certain causal role, one could claim that the physical state that normally plays the functional role of the experience of redness plays a different functional role when a synaesthetic experience is had. The playing of the abnormal functional role, however, does not stop the state from being identified as the one that in the normal case plays a different causal role: the one to be identified with experiences of redness.

In virtue of there being two kinds of strong functionalism that avoid the potential counterexample, I suggest that Gray et al.'s argument does not challenge strong functionalism. However, an interesting question remains, which will be discussed in the next section.
4. THE NATURE OF THE SYNAESTHETIC EXPERIENCE

Gray's challenge to functionalism assumes that the synaesthetic experience is identical to a non-synaesthetic perceptual experience of redness. This view of the nature of the synaesthetic experience is usually contrasted with the thesis that the experience is merely like one of imagination.32,33 What is the evidence that the synaesthetic experience is a perceptual experience, rather than an imaginative one?
Further philosophical interest in this question exists because it is worthwhile investigating what can be established about the phenomenal character of the synaesthetic experience. The question concerning what can be objectively established about subjective experiences is a well-known one in philosophy.34 The first objective scientific tests for synaesthesia were consistency and Stroop tests.35 Consistency tests traded on the fact that while synaesthetes vary greatly about what stimulus/synaesthetic experience pairings they have, each synaesthete always experiences the same pairings.36 Synaesthetes were found to be more accurate in recalling these pairings than non-synaesthetes who had been instructed to invent and remember such pairings. This was true even when the synaesthetes, unlike the non-synaesthetes, were not warned that they would be retested and were retested after a much greater time interval than the nonsynaesthetes. Details of the Stroop test and of the variants used to test for synaesthesia are in footnotes 18 and 19 above. However, while this evidence shows that synaesthetes are different in some ways from non-synaesthetes, it is not very illuminating about the nature of the synaesthetic experience. For all the consistency experiment shows, professed synaesthetes may simply be having imaginative experiences that they have either learned to associate with a stimulus or that arise due to some other cause. Similarly, results from variants of the Stroop test show that grapheme-colour synaesthetic experience, whatever its nature, is automatic and can’t be suppressed and that it interferes with colour naming, but it does not show that it is just like a perceptual experience. This conclusion is backed up by a study which found that nonsynaesthetes trained to associate shapes with colour labels also displayed a large Stroop effect when asked to name the colours of such shapes when the colours were incongruent to the ones they had learned to associate with them. This shows that Stroop-effects can manifest themselves in the absence of an appropriate perceptual experience.37 Empirical evidence in support of the idea that the synaesthetic experience is perceptual comes from two sources. The first is a number of experiments that try to establish that synaesthetic experience is perceptual by showing that the effects of the synaesthetic experience are like that of perceptual experience. These experiments all focus on grapheme-colour synaesthesia. One controversial example is the pop-out experiments of Ramachandran and Hubbard.38 Pop-out is the effect responsible for the fact that a target can be easily picked out from an array of distractors when the target is a different colour from the distractors. Synaesthetes were better than non-synaesthetes at identifying a target among distractors of the same colour when the target induced a different synaesthetic colour than the distractors, when given one second to do so. However, doubts about the methodology of these experiments have been raised by Rich and Mattingley.39 Their doubts have been borne out experimentally by Blake et al. who showed that with speeded response times and an increasing number of distractors the synaesthetes results were unlike that of pop-out.40 Another example is the recent experiment by Blake et al. that showed that rows and columns of identically coloured graphemes that induce synaesthetic colour TP
experience can induce the McCollough after effect, an effect which was reported by synaesthetes who did not know about it.41 This effect normally occurs when a subject looks alternately at, say, red columns and then green rows for several minutes. On being presented afterwards with an achromatic grating, subjects report that they see green columns and red rows. This experiment, and ones like it (namely, ones that try to show that the synaesthetic experience has the effects that perceptual experiences have), can only provide evidence for the perceptual nature of synaesthetic experience if it can be shown that the effects in question are induced only by perceptual experience. The only evidence that perceptual experience alone has these effects is empirical and inductive. Given that the evidence is of this nature, there is room for a philosophical sceptic to argue that synaesthetes could be the exception to the rule. The evidence does not conclusively prove that the synaesthetic experience is like perceptual experience. Perhaps synaesthetes' synaesthetic experiences are phenomenally just like imaginative experiences but, unlike non-synaesthetic imaginative experiences, can have effects that are typically thought to be caused only by perceptual experiences.
The second source of evidence comes from brain imaging studies. The most convincing of these appears to show that areas of (sound-colour) synaesthetes' brains known to be implicated in non-synaesthetically seeing colour (V4 or V8) are active when they hear sounds. This does not happen in non-synaesthetes, in particular in those who have been trained to associate a sound with a colour and who are asked to visually imagine the colour when they hear the sound.42 However, that evidence does not conclusively show that synaesthetes really have perceptual colour experiences. For how do we know that activity in such an area always causes perceptual colour experience? Even if activity in this area is usually correlated with such experience, this does not show that such activity is sufficient. Indeed, in the sound-colour synaesthetes it is known that area V1 is not active, whereas V1 is active in the processes that lead to ordinary non-synaesthetic perceptual experiences of colour.43 This might lead some to speculate that V1 is not required in order to undergo a conscious perceptual process. However, as the saying goes, one man's modus ponens is another man's modus tollens. One could as easily conclude that, as V1 is not active, the synaesthetes are not undergoing perceptual experiences.
To sum up, the evidence above is empirical, defeasible evidence in favour of the synaesthetic experience being perception-like. It further seems that psychological and neuroscientific evidence in this domain will be of this kind. It is hard to imagine proof that would show conclusively what the synaesthetic experience is like. Given this, one might be sceptical of ever conclusively establishing the nature of synaesthetic experience. One might think that this backs up the pessimistic claim that there is something about the experience of others that will lie forever beyond our ken.
In contrast to this pessimistic conclusion, however, it ought to be remembered that the two sources of evidence converge, and that they converge with a further piece of evidence: the reports of many synaesthetes.44 It has recently been
reported that synaesthetes fall into two classes: associators and projectors. The former experience synaesthetic colours as being in their mind's eye or head. The latter experience them as projected in front of them in public space.45 The latter kind of synaesthete appears to be reporting perceptual experiences. One might conclude that the fact that these different sources of evidence point to the same conclusion provides excellent evidence for believing that synaesthetic experience can be like perceptual experience, even if one admits that the evidence is still defeasible. We may never reach certainty in this area of enquiry, but we may amass a lot of evidence in favour of the same conclusion, in which case it seems that we ought to believe it. If this is right, then we ought to believe Gray's contention that synaesthetic experience can be just like perceptual experience.
Before concluding, however, one final point ought to be noted. Most of the evidence in this section that appears to support the claim that the synaesthetic experience is like perception applies only to intra-modal grapheme-colour synaesthesia.46 Yet there is an extraordinary feature of this experience so far not mentioned. In grapheme-colour synaesthesia, we are supposing that a numeral "5", say, provokes a perception-like synaesthetic experience as of red. It is often claimed that the experience is such that the numeral looks to have the synaesthetic colour. At the same time, however, synaesthetes can tell what the colour of the ink is that such numerals are printed in, say, black. It is tempting to suppose that they can do this because the "5" looks black to them. Thus, it is tempting to think that the experience is such that the numeral looks to be both black and red at the same time! Indeed, introspective reports of projector synaesthetes back up this conclusion: "When probed about the locations of the two colors, A.D. reported that she didn't know how to explain it, but that both appeared on the shape in the same location at the same time".47 The correct description of such experiences seems to be that they represent two colours to be in the same place at the same time.48 Can there be such experience? It may be that there cannot, in which case we must reach a better understanding of the synaesthetic experience. Alternatively, it may be that it is possible. In that case, we must think more carefully about what the phenomenal characters of such experiences might be like. Do they contain elements corresponding to the experiences of the colours that we are all familiar with, or are they altogether different? Further empirical work needs to be carried out to establish as much as possible about the phenomenology of such experience. In addition, further philosophical work is needed to establish what sorts of experience it is possible for there to be.
Finally, note that these grapheme-colour synaesthetic experiences are not of the same type as any non-synaesthetic perceptual experience (which do not represent two colours in the same place at the same time). Thus, they cannot provide a counterexample to functionalism of the form that Gray's argument requires: two experiences being of the same type, yet having different functional roles. Yet it is only in the case of these grapheme-colour synaesthetic experiences that the three types of evidence mentioned above converge on the conclusion that they are perceptual and, thus, only in these cases that a good case exists for synaesthetic
experience being perceptual. Therefore, it has yet to be shown, with much plausibility, that a synaesthetic experience exists that is both perceptual and identical to some non-synaesthetic perceptual experience. It follows that it has not been shown that a counterexample of the form that Gray's argument requires exists. Thus, it has not been shown conclusively that there is a counterexample to strong functionalism—even a basic, non-normative kind that does not privilege some core functional role.
5. CONCLUSION
The nature of synaesthesia is not yet fully understood. However, the evidence as to its nature is considerable and growing apace. I have discussed the nature of synaesthesia and given a definition of it that corresponds to what we know of the phenomenon at present. It should be noted that, in all likelihood, there are different kinds of synaesthesia, and experimental work on the topic should take note of the different kinds that there might be and the relationship between them. I have outlined and discussed Gray et al.'s argument that synaesthetic experience provides a counterexample to strong functionalism. I have argued that it does not, on the grounds that there are versions of strong functionalism that are not affected by the argument. I have also argued that, in any case, the evidence that the synaesthetic experience is of the kind Gray et al.'s argument requires, namely the same as some non-synaesthetic perceptual experience, is weak. The evidence that the synaesthetic experience is perceptual is strong only in the grapheme-colour case, but such experiences appear to involve experiencing an object as having two colours at once, which does not happen, as far as we are aware, in non-synaesthetic experience. Thus, the argument that synaesthesia presents a counterexample to functionalism has been undermined in two respects.49
NOTES
1 See, e.g., Marks (1975); Cytowic (1993) and ([1995] 1997); Motluk (1994); Harrison and Baron-Cohen (1997); Gray, J.A. (1998); and Harrison (2001).
2 Cytowic ([1995] 1997, p. 17).
3 Cytowic (1993) and ([1995] 1997, p. 21).
4 See Marks (1975) and Harrison (2001) for summaries.
5 A fuller statement of what it is for a system to be modular is given in Fodor (1983). See also this volume, p. 191. Of course, whether there are any cognitive modules is a question that has received much attention in the literature.
6 See Segal (1997); Gray, R. (2001a); Baron-Cohen et al. (1993).
7 A related claim has also been made that synaesthesia constitutes a counterexample to representationalism. This claim will not be examined here but has been debated in Wager (1999), (2001); Gray, R. (2001b).
8 Harrison and Baron-Cohen (1997, p. 3). This definition is widely cited in the literature.
9 Harrison and Baron-Cohen (1997).
10 See Cytowic (1993, p. 6). However, Harrison and Baron-Cohen (1997) claim that tactile perception causing auditory experiences is almost never reported. Similarly, Cytowic ([1995] 1997, p. 21) reports that smell and taste are very infrequent synaesthetic triggers or responses (despite his extensive study of taste as a synaesthetic trigger).
11 Academics who endorse the common conception include Hurley and Noë (forthcoming), who cite as evidence only Marks (1975). However, although Marks does claim "Sometimes synaesthetic subjects report the associated visual sensation to appear not in visual space but rather in the sound itself" (p. 71), he gives no references as to where or when such reports have been made. Given that there are no other reports of this in the academic literature, this evidence is not convincing.
12 See Cytowic (1993, p. 6) and ([1995] 1997, p. 21). In all other respects, these cases appear identical to other cases of synaesthesia, and are classed as cases of synaesthesia in the literature.
13 See, e.g., Rivlin and Gravelle (1984) and Keeley (2002). A recent exception is Nudds (2003). I am unsympathetic to his argument, which stems from the thought that we ought to respect the common folk-psychological belief that there are five, and only five, senses.
14 See Mills et al. (1999); Dixon et al. (2000); Mattingley et al. (2001); Grossenbacher and Lovelace (2001); Ramachandran and Hubbard (2001a), (2001b), (2003); Rich and Mattingley (2002); Smilek and Dixon (2002).
15 A grapheme is a basic unit of written language, examples of which include letters, numerals and punctuation marks.
16 See Cytowic (1989, p. 49); Dixon et al. (2000); Ramachandran and Hubbard (2001a).
17 Dixon et al. (2000, p. 365).
18 The original Stroop test consists of words that are the names of colours, printed in ink that is either the colour the word refers to or a different colour. When the name of a colour is printed in a colour of ink other than the colour that the word refers to, subjects take longer to name the colour of ink that the word is printed in, compared to when the name of a colour is printed in the same colour of ink that the word refers to. This effect is seen in the general population of perceivers.
19 Variants on the Stroop test are often used as objective tests for grapheme-colour synaesthesia. These variants exploit the fact that synaesthetes take longer to name the colour of the ink that words are printed in when the colour of the ink is incongruent with the synaesthetic colour that they experience in response to that word, compared with the situation in which the ink is the same colour as the synaesthetic colour that they experience. They also take longer to name the colour of the ink in the incongruent case compared with people who do not have synaesthesia. See Dixon et al. (2000); MacLeod and Dunbar (1988); Mattingley et al. (2001); Mills et al. (1999); Odgaard et al. (1999); Wollen and Ruggiero (1983).
20 Ramachandran and Hubbard (2001b).
21 See Ward (2004).
22 Of course, the emotion may have to be induced by a stimulus that impacts upon the subject by means of the senses, but this does not stop its being the stimulation of the emotional system of the subject that is the relevant cause of the synaesthetic experience.
23 See Mattingley et al. (2001); Rich and Mattingley (2002).
24 See Ramachandran and Hubbard (2001a), (2001b), (2003).
25 See McGurk and MacDonald (1976).
26 The phenomenal character of an experience refers to the quality of experience in virtue of which there is, to use a familiar phrase, "something that it is like" to undergo that experience. See Nagel (1974).
27 Gray, J.A. et al. (1997), (2002); Gray, J.A. (1998), (2003).
28 See, e.g., Gray, J.A. et al. (2002, p. 7).
29 For example, the most plausible type of empirical functionalism holds that the folk-psychological roles of mental states reference-fix on other finer-grained functional roles that it is the job of science to uncover. (See Braddon-Mitchell and Jackson 1996, p. 80.) If science uncovered two disparate fine-grained functional roles that played the coarser-grained folk-psychological role, then one might conclude that the mental state in question was realised by two different functional roles. One might therefore affirm weak functionalism but deny strong functionalism.
30 In fact I would not endorse this account, as I believe that creatures not sophisticated enough to have beliefs can nonetheless have experiences of redness, but the functionalist will argue that some plausible account could be given.
31 See Armstrong (1970), who says mental states are "apt" to have certain causal roles, Lewis (1980), who says that mental states "tend" to have certain functional roles, and Papineau (2000), who uses "normally" causes and "tends" to cause. (Note that Lewis's case of "mad pain" is similar in many respects to the case of the synaesthete. Here is a case where a philosophical thought experiment predicted in advance that cases such as that of the synaesthete could arise.) Whether "normally" should be taken as statistically normal or not will depend on which version of this view one finds most plausible.
32 This debate in the literature is normally conducted under the assumption that there is a sharp distinction to be made between perceptual experience and imaginative experience. In particular, I think that it is typically assumed that there is a difference of phenomenal character between a perceptual and an imaginative experience. However, the distinction may not be as sharp as some assume, and perceptual experience and imaginative experience may lie on a continuum. Nonetheless, I believe that there are clear phenomenal differences between perceptual experiences and imaginative experiences at either end of the continuum. For the purposes of this paper, I will assume either that there is a sharp distinction to be drawn or that one is looking to distinguish cases at the far ends of the continuum that clearly exhibit differences. However, a full treatment of this issue would have to delve further into this debate.
33 One might think that another counterexample to functionalism, of the same form as Gray's, could be generated if the synaesthetic experience were identical to some imaginative experience, for one could claim that some synaesthetic experience and some imaginative experience were identical save for the fact that they had different causal roles: synaesthetic experience is involuntary and is caused by another experience, while a non-synaesthetic imaginative experience is voluntary and not caused by another experience. However, if there is a problem for functionalism here, it is a problem with imagination more generally. It is very difficult to see how one could specify a causal role for imaginative experience. Non-synaesthetic imaginative experiences of the same type may or may not be voluntary. They may have all different sorts of causes and effects, which may or may not include perceptual experience or any other mental state. Synaesthesia, it seems, adds no new problem for functionalism here. This is backed further by noting that Gray et al.'s challenge is clearly meant to turn on the thought that synaesthetic experience is just like some perceptual
experience (of which it is more plausible that a full, strong and unrestricted version of functionalism can be given). Gray et al. clearly try to establish that there is evidence that synaesthetic experience is like perceptual experience.
34 It is famously discussed in Nagel (1974), and many papers have been written on that topic since.
35 See Baron-Cohen et al. (1987) and (1993); Dixon et al. (2000); MacLeod and Dunbar (1988); Mattingley et al. (2001); Mills et al. (1999); Odgaard et al. (1999) and Wollen and Ruggiero (1983).
36 This is generally true of all forms of synaesthesia. There may, however, be common underlying patterns to which all synaesthetes conform, at least in one or two forms of synaesthesia. See Marks (1975). However, note that the evidence is rather weak and unclear.
37 See MacLeod and Dunbar (1988).
38 Ramachandran and Hubbard (2001a), (2001b) and (2003).
39 Rich and Mattingley (2002).
40 See Blake et al. (2005). They postulate that the synaesthete they studied "performs a serial-like search through the visual display, just like non-synaesthetic individuals" rather than experiencing pop-out but, in addition, and to explain the results by Ramachandran and Hubbard, "he was able to reject distractors more quickly using his synaesthetic colour" (p. 59).
41 See Blake et al. (2005).
42 See Nunn et al. (2002).
43 Ibid.
44 Until recently, psychologists included very little testimony from synaesthetes in their reports about the condition. This is starting to change. Some reports are included in Dixon et al. (2004, pp. 335-336); Cytowic ([1995] 1997, p. 23); and Harrison (2001, p. 104). Of course, it should be noted that we should not always take introspective reports at face value, and thus introspective evidence on its own ought not to convince us that synaesthetic experience is perceptual.
45 See Dixon et al. (2004, pp. 335-336).
46 All of the first type of evidence, and most of the introspective evidence from the last couple of years, has pertained to grapheme-colour synaesthesia.
47 Sagiv and Robertson (2005, p. 100). See also Blake et al. (2005, pp. 49 and 55).
48 Note that this question is different from the question of whether two colours can be in the same place at the same time, which may be answered in the negative, while the former question is answered in the positive, without inconsistency.
49 Thanks to David Bain, Michael Brady, Jim Edwards, Rebecca Lawson, Scott Love, Philip Percival, Mike Scott and Michael Tye for useful discussion, comments and references.
CHAPTER 6
INTEGRATING THE PHILOSOPHY AND PSYCHOLOGY OF MEMORY: TWO CASE STUDIES
John Sutton
Memory is studied across a bewildering range of disciplines and subdisciplines in the neural, cognitive, and social sciences, and the term covers a wide range of related phenomena. In an integrative spirit, this chapter examines two case studies in memory research in which empirically-informed philosophy and philosophically-informed sciences of the mind can be mutually informative, such that the interaction between psychology and philosophy can open up new research problems—and set new challenges—for our understanding of certain aspects of memory. In each case, there is already enough interdisciplinary interaction on specific issues to give some confidence in the potential productivity of mutual exchange; but in each case, residual gulfs in research style and background assumptions remain to be addressed. The two areas are the developmental psychology of autobiographical memory, and the study of shared memories and social memory phenomena. I show points of contact between a flourishing social-interactionist tradition in developmental psychology and one line of thought in recent philosophy of mind concerning memory, time, and causation; and, more briefly, I sketch a series of connected issues about memory in social psychology and the social sciences which have recently been brought into contact with theoretical ideas about distributed cognition and the "extended mind". These are, then, two focussed forays into a vast array of live topics for the cross-disciplinary study of memory over the next decade: I have offered broader surveys of the field elsewhere.1 Just one further example of another area very much in need of cross-disciplinary integration is the study of habit memory and skill memory, where philosophers of cognitive science are just beginning to catch up with the phenomenologists in looking to empirical work for mutual illumination.2 Obviously there are other issues and other paths through related terrain, and readers should also pursue different integrative and constructive treatments, from both philosophical and psychological starting-points.3
One integrative role of philosophy in the cognitive sciences lies in the juxtaposition of related concepts and theoretical commitments from different branches of these sciences which have not yet been addressed together: this is not a negligible job, for increasing specialization in empirical fields brings the danger that scientists remain unaware of, or misunderstand, the relevance of work in neighbouring subdisciplines. But naturalistically oriented philosophers of cognitive
science also have two more ambitious aims. They hope occasionally to play an active and constructive part in the integrative theory-construction which can result from such meetings of traditions and lines of research.4 And they also rightly take it as their job to construct frameworks within the philosophy of science for making sense of interlevel and interfield relations, whether from a revised reductionist perspective5 or with a focus on interfield integration.6 A number of other conceptual issues in the psychology of memory would benefit from careful treatment in the philosophy of science: debates about the nature and psychological reality of distinct "memory systems",7 for example, need to be connected with broader discussions of modularity and psychological kinds.
But although some of this work in the philosophy of science has taken the sciences of memory as an important test case,8 philosophers have otherwise paid surprisingly little attention to empirical studies of memory. Despite the rich history of theorizing about the roles of memory and narrative in the construction and maintenance of personal identity, for example, only a few philosophers9 have looked to the psychology of autobiographical memory for understanding of the constraints on our contact with the personal past. So in discussing the developmental psychology of autobiographical memory below, we cover one recent strand of the philosophy of mind, in work by John Campbell and by Christoph Hoerl and Teresa McCormack, in which questions about the emergence of memory in childhood have been asked. The two topics addressed in this chapter, then, are merely initial samples of the many memory-related projects waiting for naturalistically-inclined philosophers of mind.
1. DEVELOPMENT OF AUTOBIOGRAPHICAL MEMORY
Although children start talking about the past pretty much as soon as they start talking, their initial references are fleeting and fragmentary, and the richer capacity to refer to specific events in the personal past develops only gradually. For over a century, psychologists have wondered how this slow development of autobiographical memory is connected to the intriguing inability of most adults to remember many events or experiences from their early childhood in any kind of rich detail ("infantile amnesia"). So the explanatory target in the developmental psychology of autobiographical memory is the child's emerging ability to think about episodes and personal experiences at particular past times. This is more than the capacity to understand sequences of events or intervals between events, and more than general knowledge of how things usually go.
While there are significant terminological and conceptual differences across traditions in this area, for present purposes we can treat the psychological labels "autobiographical memory" and "episodic memory" and the philosophical labels "personal memory" and "experiential memory" as all designating roughly the same relevant set of phenomena.10 Endel Tulving's notion of "mental time travel" is useful in helping us initially home in on our topic: when
we are engaged in this kind of remembering, we are not merely being influenced by our past, but are thinking about particular past experiences as past.11
Building on a 20-year tradition of social interactionist work in developmental psychology, Katherine Nelson and Robyn Fivush have proposed a social cultural developmental theory of autobiographical memory.12 Their framework, with its Vygotskian and dynamicist inspirations,13 offers a rich picture of multiply interactive developmental systems spanning the child's brain and local narrative environment. The emergence of autobiographical memory in childhood is
the outcome of a social cultural cognitive system, wherein different components are being opened to experiences over time, wherein experiences vary over time and context, and wherein individual histories determine how social and cognitive sources are combined in varying ways.14
The direction of influence, in some presentations of this framework, is from social and narrative context to autobiographical memory: as Robyn Fivush puts it, "it is through joint reminiscing that one comes to have a personal past".15 A slightly different but compatible stress is on the "spiral" nature of the process, in which the child's changing competence in dialogue about the past itself in turn influences the parent's reminiscence style, encouraging the dynamic co-construction of richer narratives.16 The process gets underway as, in many of the child's earliest references to the past, both structure and content are provided to a large degree by adults, whose communicative actions provide the scaffolding for such early memories. So in this social-interactionist tradition the focus is on the impact that differing parental and cultural styles or models for recounting past events have on the child's own developing memory. In general, for example, the spontaneous later memory activity of children whose parents talk about the past more elaboratively and richly, or more emotionally, will itself be more elaborative or emotional;17 in general, both mothers and fathers talk more richly and more emotionally about the past with girls than with boys;18 and a range of cultural differences track these interactions, so that, for example, Caucasian American children's spontaneous memories highlight the self more, in general, than do those of Korean children.19
A number of methodological and theoretical questions arise about this research,20 but it is a robust experimental tradition which is now being extended in substantial longitudinal studies.21 This is crucial not only because we want to know more about any possible longer-term effects of the early narrative environment, but also because we need to tease out the interactions between many different factors. Where some earlier work in this tradition may have given the impression that parental influence—in particular maternal reminiscence style—was always the primary driving force in the emergence of autobiographical memory, the more recent versions clearly operate in a developmental systems framework in which the influence of multiple concurrent processes can vary across individuals. So in
addition to the roles of language in memory, they address the earlier neural and psychological development of other memory systems, the development of a self-schema and of theory of mind, the emergence of a concept of the past, and the role of emotional factors such as attachment. Elaine Reese, for example, has tested the independent contributions of self-recognition, language skill, attachment security, interest and motivation, and maternal reminiscing to children's later autobiographical memory skill.22 Study of such highly history-dependent developmental processes, in which social and neural influences are "bidirectionally and fundamentally interactive at all levels of organization", poses severe theoretical and empirical challenges.23 Multiple pathways can lead to generally converging outcomes, but also to idiosyncratically unique individual variation.24
How then does this psychological framework relate to philosophical understandings of the nature and role of autobiographical memory? One relevant sophisticated approach is that of John Campbell and his colleagues, which delineates the interconnected features of our mature memory capacities in a way which may seem to be in some tension with the social-interactionist developmental framework.25 If autobiographical memory is memory of what one saw and did, when and where, at a particular past time, then according to Campbell it requires the subject to have a conception of the causal connectedness of both physical objects and the self. Children need to grasp that both world and self have a history for genuine autobiographical remembering to get off the ground. For Campbell, this suggests that temporal asymmetry is built into autobiographical memory, in that we are inevitably realists about the past, conceiving of past events as being all, in principle, integrable within a single temporal sequence. Various principles of plot construction thus ground our ordinary memory practices: we assume, for example, that the remembered "I" has traced "a continuous spatio-temporal route through all the narratives of memory, a route continuous with the present and future location of the remembering subject".26 We can, in mature autobiographical remembering, assign causal significance to specific events, so that our temporal orientation is by particular times rather than simply by rhythms or phases.27 I can distinguish one particular occasion on which I had lunch with a colleague on a Tuesday from all other similar occasions. Even though our ordinary ongoing social interaction may depend only on my ability to track the generic pattern or script of this routine, it can of course be crucial in certain key personal and interpersonal contexts to remember a specific episode. Following Campbell, Christoph Hoerl argues that this feature of our concept of time grounds our awareness of the singularity of events and especially of actions. We are thus "sensitive to the irrevocability of certain acts", so that we, unlike other animals and (perhaps) some severely amnesic patients, incorporate a sense of the uniqueness and potential significance of particular choices and actions into our plans and our conceptions of how to live.28
Because Campbell's picture treats autobiographical memory as part of a core cluster of interconnected features of self-conscious thinking, it's natural to ask how this cluster emerges in the first place. One reason for looking to developmental
psychology here is to ward off the charge that this account is over-intellectualist: by tracing the gradual emergence of this entwined cluster of capacities we might feel more confident in the possibility of a naturalistic understanding of the psychological status of the putative principles of plot construction in mature autobiographical memory. The point is not to battle over when, in the complex process by which this sophisticated battery of abilities arises, the label of "true" or "full" autobiographical memory should be applied, but rather to seek a detailed delineation of the phases and components of that process, and their interrelations.
But another feature of Campbell's picture may seem to set it at odds with the social-interactionist account of memory development which I sketched above. At first blush, Campbell's view looks rather individualistic, in stressing the place of autobiographical memory within self-conscious thought without explicit reference to the social or narrative environment of early talk about the personal past. What room can it allow for investigating the differential effects of, for example, elaborative or emotional conversations between parent and child on the developing spontaneous memory capacities of the child? What role, in particular, could shared remembering practices have in scaffolding the child's emerging understanding of temporal asymmetry and the difference between past and present?
Although the differences in aims and traditions between the philosophical and psychological approaches in question will not be easily or completely bridged, in this instance we are fortunate to find some recent work which explicitly synthesizes the two in order to arrive at a richer and genuinely interdisciplinary view: it's no accident that this is the result of a collaboration between a psychologist and a philosopher, Teresa McCormack and Christoph Hoerl.29 The crucial move in the constructive reconciliation is to scrutinize more closely just what the joint aspect of early reminiscing activity is doing for the child, or what it is that is internalized as a result of the adults' mnemonic scaffolding. According to Hoerl and McCormack, the memory sharing in which parents and children engage can best be understood as a peculiar form of joint attention, directed—unlike other forms of joint attention—at the past. From a philosophical framework close to Campbell's,30 they draw the idea that what the child needs is a new kind of reasoning capacity, one which grasps "the causal significance of the order in which sequences of events unfold": in particular, the child has to come to see that "later events in the sequence can obliterate or change the effect of earlier ones", so that the state of the world and of the child's current feelings depends on this independently ordered history.31 This is a more sophisticated ability than the straightforward temporal updating involved when children alter their model of the world as they observe or infer it being modified. Using a delayed video feedback technique in which children are shown two games in different orders, Povinelli et al. showed that 3-year-olds could not use information about which of two events happened more recently to update their model of the world as a series of causally related events unfolded, but that with clear instructions 5-year-olds could do so.
Building on these methods in ingenious experiments which examine not only temporal updating but also the ability to make temporal-causal inferences, McCormack and Hoerl have shown that children under age 5, and some 5-year-olds,
who can successfully engage in simple updating, have serious difficulty in making those kinds of temporal-causal inferences in which they must grasp the objective sequence of events.32 Hoerl and McCormack then suggest that this kind of temporal-causal reasoning is exactly what's elicited or jointly generated in conversations about past events, in which parent and child together construct a temporally structured narrative which explains the influence of the past on the present.33 In joint reminiscence of this kind, a parent is often not merely modeling these narrative abilities, but also directly exerting an influence on the child, by encouraging the child to see that things are not now as they once were. The shared outlook on the past which emerges is thus also evaluative, and in turn grounds other ongoing collaborative activities: children come to value memories of particular past events for themselves, "because the sharing of such memories is a way of establishing, maintaining, or negotiating a distinctively social relationship with others".34
More generally, Hoerl and McCormack's synthetic account shows us how the local narrative practices studied by the social-interactionists, with all their cultural idiosyncrasies, themselves put the child in touch with an objective conception of time and causation. The practical engagement involved in jointly attending to past events and sharing memories helps the child understand that there can be different perspectives on the same once-occupied time; and thus such shared co-constructed narratives shape the child's initial grasp of the causal connectedness of self and world. Where in Campbell's account there was a sharp distinction between practical and reflective modes of representing time, we can now see the practical and social origin of the child's attention to the past as essential for the child's ability to access and integrate both egocentric and objective conceptions of time.
In this highly promising linkage of philosophy and psychology, there is as yet no clear means to examine different individual trajectories in the emergence of the requisite kind of temporal-causal thinking; this is where, for example, we might hope to inject attention to emotional development and patterns of attachment in relation to early memory capacities. Further work is also needed on the relation between verbal skill and memory development, aimed both at broader conceptual understanding of the relations of language and thought in autobiographical memory, and at more specific investigations of the nature and the timing of any elaborative talk which might enhance verbal and nonverbal recall. But Hoerl and McCormack's programme offers one enticing example of the possibilities for an empirically-informed philosophy and a philosophically-informed cognitive science.
2. SHARED MEMORY, SOCIAL MEMORY, AND SOCIAL ONTOLOGY
In everyday life, and in many branches of the social sciences, memories (like beliefs, desires, intentions, and so on) are commonly attributed not only to individuals but also to small groups, families, institutions and organizations, nations, and other collectives. In mainstream individualist cognitive psychology and philosophy of mind, such talk tends to be treated either as innocently metaphorical or as
troublingly anti-naturalistic, on the edge of Jungian archetypes or morphic resonance. If robust and naturalistically-acceptable grounds could be found for understanding certain kinds of "we-remember" statements as legitimately expressing real shared or social memories, this would not only be of independent interest and utility for the relevant disciplines which deal with such putative phenomena of memory, from history and political theory to cognitive anthropology and, indeed, social psychology; it would also be an important test case for opening lines between the cognitive sciences and the social sciences, and between the philosophy of mind and social philosophy.
One promising direction in which to initiate such integrative enquiries is by calling on the theoretical framework offered by recent multidisciplinary developments in "distributed cognition" and the "extended mind".35 This is due to the anti-individualism or "active externalism" of these frameworks, by which both mental processes and mental states can spread across brain, body, and environment.36 In this brief discussion I focus on the roles of the social and cultural world in such distributed cognitive processes, rather than on the technological or physical environment which has perhaps featured more commonly both in the philosophical literature37 and in critical work on "structural memory" within the social sciences.38
The ordinary kind of phenomenon in question is illustrated by a story told by the developmental psychologist Susan Engel, whose 12-year-old son once looked up from his homework to ask his mother's help with a writing assignment, asking "Mom, what is my most important memory?".39 One easy reading of such anecdotes is deflationary, taking the role of other people as mere cues or triggers to activate the full memory in the individual's head. On this view, appropriate studies of "social memory" would aggregate many individuals' memories in some specific social context: the sociologists Schwartz and Schuman, for example, react against "models that exclude the individual" by surveying what many individuals remember about, say, Abraham Lincoln.40 My task in this short section is to marshal some ideas from philosophy and psychology in service of an initial undermining of this deflationary response. In contrast to the claim that other people are always only acting as mere cues or supplements, I suggest that sometimes an enduring, dispositional memory state can be spread across individuals; and in contrast to the claim that the full memory in such cases is waiting inside individuals' heads to be triggered by the right stimulus, I suggest that the "memory" which endures in a single brain is often only partial or, in the terms of the French sociologist Maurice Halbwachs, "incomplete" and "shrouded".41 Obviously this can be little more than a sketch of some alternatives to merely aggregative individualistic approaches.
I begin with two relatively cautious observations and recommendations. Firstly, a plausible account of social memory is more likely to be anchored initially in mundane, small-scale cases than at the macro-levels of national memory: implications for social theories of collective responsibility or national identity may more securely arise from studies based in interpersonal, family, or small-group contexts. Secondly, following Robert Wilson,42 it's useful to distinguish two different routes to an account of social
memory, even if we go on to pursue both. A weaker version (though still much stronger than the deflationary option) is that individuals can and do engage in some forms of remembering only when (or differently when) they form part of some social group; a stronger version is that it is the group itself which is, in some circumstances, the remembering subject. As Wilson points out,43 the former "social manifestation thesis" can itself be given weaker and stronger readings, with the distributed cognition framework suggesting stronger readings in which many forms of individual memory activity are constituted or realized by "wide" features of the social context. In what follows, I address some empirical work on memory which can best be understood through such an interpretation of the social manifestation thesis, merely mentioning in conclusion some philosophical considerations in favour of also pursuing the stronger "plural subject" account of memory.
When a small group of people—a family group, for example—have lived through certain experiences together, each member will retain their own memories of the events. But it often happens in families that there is some subsequent discussion, reinterpretation, or negotiation about what has happened—about the significance, the affective tone, or just the bare facts. It's no surprise that the initial individual memories may differ from each other in certain ways; and it's well established in social-cognitive psychology that the sharing of memories in the group is likely to elicit more than any of the individuals had remembered, but less than the aggregated sum of individual memories (the latter effect is sometimes called collaborative inhibition—outlying individual memories are for various reasons often dampened or lost in the process).
Recent work by William Hirst and his colleagues adds the extra twist of investigating the way in which specific group dynamics and processes can influence the individual members' subsequent enduring memories. In the basic design, each individual first gives their own memories of an event which the whole group has experienced. After various delays, the group as a whole is then asked to recall what happened; and after a further manipulable delay, each member again offers their own memory.44 Hirst is particularly interested in cases in which a dominant individual or "narrator"—such as one parent in a family group—can have a disproportionate influence on the content (or emotional tone, or narrative structure) of both the group's consensual account (where one emerges) and the members' subsequent individual recollections. Memory contents "migrate" in the process of shared remembering, so that sometimes each member's later recall incorporates, without their awareness, elements which were only offered by the dominant narrator in the group phase. Here, then, not only can we think of the collective account produced by the group as itself a "shared" or social memory; we can also see the subsequent individual memories as only manifesting as they do in this specific social context.
This research—like related applied work on "memory conformity" among, for example, groups of witnesses45—is part of a mainstream focus in the recent cognitive psychology of memory on the constructive nature of remembering and on the various ways in which "false memories" can arise, even in autobiographical memory, through suggestion, influence, or other misadventure.46 But much of this
false memory research itself has an individualistic tone which may suggest, again, the deflationary reading of Hirst's work on shared remembering. Just as false memories are often put down to the distorting influences of external authorities who taint the individual's memory in one way or another with misinformation and error,47 so we might see the role of the group as the social contamination of the ordinary memory processes which basically run inside the head. But this sharp division between normal individual remembering and abnormal socially-influenced remembering is unrealistic. As Sue Campbell has argued, much ordinary successful "good remembering" depends essentially on the support and involvement of other people.48 As with the developmental scaffolding provided by parents for early memory (section 1 above), so adult remembering is not necessarily distorted by the close involvement of other people: rather, much remembering is intrinsically "relational". We can accept the lessons of the false memory literature about the various mechanisms and forms of influence in socially embedded and socially manifested remembering without taking on board the associated individualist spin by which "influence" is inevitably negative.
Just one suggestive example comes from recent studies by Maryanne Garry and her colleagues. Acknowledging that in real settings, "when confronted with a difficult to remember narrative about [their] childhood, people are likely to rely on others to verify their memories", they allowed subjects exposed to false information to discuss their memories with a sibling.49 Whereas a significant number of those initially given false information had incorporated it into their own memories when recalling independently, after this phase of "discussion" with their sibling the proportion dropped dramatically. In the right circumstances, other people, as well as photos or other artefacts, can actively and successfully promote or maintain good remembering.
These kinds of empirical research programmes help fill out our responses to the initial deflationary worries about the idea of "shared" or "social" memory. The deflationary idea, recall, was that social factors could only prompt or reactivate pre-existing and distinct memories held by the individual. But if we thus acknowledge that acts of remembering can be "triggered" both by inner and outer factors, it's difficult to draw any principled distinction based on the location of the trigger which doesn't beg the question in favour of individualism. Many internal triggers or cues, of course, can take considerable time, effort, or favourable circumstances to become successfully operative in prompting a memory; triggers which were once external can be more or less successfully internalized; and in many interesting cases the conspiring factors operative in some particular activity of remembering span inner and outer. In such cases we want to understand not only the nature and content of the occurrent memory (what I'm remembering now), but also the enduring or standing conditions or dispositions which have underpinned, grounded, and shaped this occurrence. Because, in common sense psychology, we accept that a person (dispositionally) remembers many things which they are not now (occurrently) remembering, we are often happy to ascribe to them various standing memories which might in fact take certain convenient coalescing constellations of causes to actualize. It's a matter of degree and of pragmatics how far this constellation can
stretch from including only that the person must be awake and relatively sober, for example, to requiring very specific factors about the presence and role of other people. In some cases the current "inner" components of the spread or distributed dispositional state or field are highly context-sensitive and action-oriented. For both the social scientist influenced by Halbwachs and the post-connectionist cognitive scientist, it's just because memories are not stored fully-formed in independent atomic form at distinct locations between experience and remembering that we rely so heavily on the scaffolding provided by both external symbol systems and the interpersonal world which fills "the necessity of an affective community".50
These last remarks are intended to confirm that this inchoate picture of shared and social remembering should be compatible not just with naturalistic materialism but also, preferably, with at least some post-connectionist versions of the computational theory of mind. In particular, we want where possible to identify the content which is carried across or by different vehicles or media, just so that questions about its transmission or distortion can be raised, as in both Hutchins' account of distributed cognition and Sperber's epidemiology of representations.51
A more ambitious metaphysics still, which would provide a firmer glue between the cognitive and the social sciences of memory, might perhaps come from the philosophical subfield of social ontology.52 One idea here is to apply Margaret Gilbert's "plural subject theory" to "we remember" statements, along parallel lines to existing treatments of joint action, shared intention, common knowledge, and collective belief: what's particularly useful about Gilbert's framework for understanding some examples of shared memory is that it builds the features of mutual expectations and commitments into the notion of a plural subject.53 For the analysis of, for example, "we remember x-ing" to offer something stronger than an aggregate of individual memories, we will look to cases in which each individual's memory is incomplete: and thus, in turn, this conceptual framework will have to deal with the kind of empirical work I mentioned above, keeping the exchange between philosophy and psychology ongoing, as we should hope.
3. CONCLUSIONS
My angle of entry to the two realms in the interdisciplinary study of memory discussed here has obviously been oriented to highlight the psychological relevance of factors outside the individual. Although enough problems about integrating levels and fields arise within the established subdisciplines of cognitive psychology and neuropsychology, the case of memory renders equally urgent the need to make contact with disciplines traditionally outside the purview of the cognitive sciences. My programmatic remarks may seem unnecessarily constructivist to some, problematically anti-reductionist to others. But, while I can't justify the claim here, I believe that the frameworks I've outlined are entirely compatible at least with the quest for local and integrative reductions: when we really aim at hopelessly complex, culturally and phenomenologically salient explananda, such as individual differences in autobiographical memory style, the fact that the prospects of any straightforwardly reductive explanations are small doesn't mean that we shouldn't seek, or that we won't find, specific microreductively relevant factors in particular key idiosyncratic explanatory contexts.
More generally, philosophers may reasonably express some skepticism about even the ideal of interdisciplinary theory-construction with which I've recommended scientists of memory should flirt. Patricia Kitcher's powerful analysis of parallels between interdisciplinary explanation in psychoanalysis and cognitive science, for example, pinpoints a number of "subtle and not so subtle dangers" in moving too fast between disciplines and discourses. My whirling optimistic sketches no doubt exemplify some of the troublesome temptations identified by Kitcher, such as trusting too easily in the resources of a neighbouring discipline, disregarding the seriousness of its internal problems; or taking the coherence and compatibility of two theories or frameworks as conclusive evidence for the truth of both.54 Those interested in trying to forge bridges across the many cultures of memory research must accept their vulnerability to such charges, and hope merely for some imperfect safeguards in the collaborative nature of such research and the spread of expertise: as I've written elsewhere, "a start must be made somewhere, and occasionally a messy preference for proliferation over prudence in difficult domains may pay off".55 At least, as long as we remain uncertain of how to make sense of the fact that remembering is simultaneously a neural, cognitive, and social activity, there is unlikely to be a shortage of work in the interface between philosophy and cognitive science.
NOTES
1 Sutton (2003, 2004).
2 Sheets-Johnstone (1999); Gallagher (2005).
3 Philosophical: Auyang (2001, pp. 283-306); Rowlands (1999, pp. 119-147). Psychological: Engel (1999); Middleton and Brown (2005); Schacter (1996); Welzer and Markowitsch (2005).
4 On multidisciplinarity in the cognitive sciences, see von Eckardt (2001); Rogers, Scaife and Rizzo (2005).
5 Bickle (1998, 2003).
6 Machamer, Darden and Craver (2000); Craver (2005).
7 Foster and Jelicic (1999).
8 Craver (2002); Bickle (2003).
9 Such as Schechtman (1994).
10 For more on these terms and on forms of memory, see Sutton (2003).
11 Tulving (2002); Suddendorf and Corballis (1997).
12 Nelson and Fivush (2004). For related reviews see Reese (2002a) and the papers in Fivush and Haden (2003).
13 See, e.g., Thelen and Smith (1994).
14 Nelson and Fivush (2004, p. 487).
15 Fivush (2001, p. 51).
Haden, Haine and Fivush (1997). Reese, Haden and Fivush (1993). 18 Fivush (1994). 19 Mullen and Yi (1995). 20 Sutton (2002). 21 Harley and Reese (1999); Reese (2002b). 22 Reese (2002b). 23 Bjorklund (2004, p. 344). 24 See Harley and Reese (1999); Griffiths and Stotz (2000). 25 See Campbell (1994, 1997). 26 Campbell (1997, p. 110). 27 See Campbell (1994, chapter 2). 28 Hoerl (1999, pp. 240-247). 29 See McCormack and Hoerl (1999, 2001, 2005); Hoerl and McCormack (2005). 30 See also Povinelli et al. (1999); Martin (2001). 31 Hoerl and McCormack (2005, pp. 267-270). 32 See Povinelli et al. (1999) and McCormack and Hoerl (2005). 33 Hoerl and McCormack (2005, p. 275). 34 Ibid., p. 283. 35 On distributed cognition, see Hutchins (1995); on the extended mind, see Clark (1997). 36 See Clark and Chalmers (1998). See also this volume, 8, pp. 15-16, and chapter 16 passim. 37 See Clark (2003); Sutton (2006). 38 See Klein (2000). 39 Engel (1999, p. 24). 40 See Schwartz and Schuman (forthcoming). 41 Halbwachs ([1950] 1980, pp. 71-76). 42 See Wilson (2004, chapters 11-12). 43 See Wilson (2005a). 44 See Hirst and Manier (1996); Hirst, Manier and Apetroaia (1997); Hirst, Manier and Cuc (2003). 45 See Gabbert, Memon and Allan (2003). 46 See Schacter (1995). 47 See Loftus (2003). 48 See Campbell (2003, 2004). 49 See Strange, Gerrie and Garry (2005). 50 Halbwachs ([1950] 1980, pp. 30-33). 51 See Hutchins (1995); Sperber (1996). 52 See Gilbert (1989); Schmitt (2003). 53 See Gilbert (2000). 54 Kitcher (1992, pp. 159-183). 55 Sutton (2004, p. 190). TP
CHAPTER 7
EMOTION AND COGNITION: A NEW MAP OF THE TERRAIN
Craig De Lancey
Contemporary philosophy of emotion, and much of the psychology of emotion, has been dominated by discussion of the degree to which some emotions are “cognitive.” What it means to be cognitive has never been wholly clear, but the debate still has some importance to the philosophy of mind and to cognitive science: it would be extremely helpful to understand how emotions interact with, and perhaps depend upon, other mental capabilities. Until recently, it was possible to find many philosophers and some psychologists who expressed commitment to a view that emotions require complex cognitive states, such as propositional attitudes. As we gained greater scientific knowledge about emotions, these views became an easy target for a range of neuropsychological critiques. Most philosophers, like most psychologists, now agree on a notion of “cognition”, when speaking of emotion, which is so broad it cannot generate offense. At the end of the cognitivism about emotions debate, we discover that we are all cognitivists.1

But although there is little left here to fight over, the contrast remains sharp between self-described “cognitivists” and those who explain emotions from a biological perspective. “Cognitivists” still describe emotions and their purposes in terms of their representational content; those favoring a biological approach describe emotions in terms of their role in action. Thus, the exhaustion of the debate about cognitivism in part arises because of an emerging consensus, but also because it is becoming clear that the real, primary question underlying the debate has never been properly articulated. This primary question is: can the behavioral role of a kind of emotion be fully explained in terms of its normal2 kind of content, or should the normal kind of content of the emotion be explained in terms of the emotion’s behavioral role? In this chapter, I will review the emerging consensus for a weak form of cognitivism, show how this question of content and behavior reveals the more fundamental and still unanswered issue dividing “cognitivist” from biological approaches to emotion, and then argue for one answer to this question.
1. THE MOVE TOWARDS WEAK COGNITIVISM

There has not been a single form of cognitivism about emotion, but a range of views characterized by two extremes.3 At one extreme is the view that emotions are representational or otherwise “intentional” mental states.
Thus, more than fifty years ago, C.D. Broad argued that emotions are cognitions, where cognitions are experiences that “have an epistemological object”, or are “epistemologically intentional”.4 We can call this view weak cognitivism; weak because it entails just that emotions are or require representations of some kind. Broad and others identified an alternative to weak cognitivism only in the feeling theory of emotions; this is the theory that emotions are merely phenomenal experiences. Such an alternative illustrates how inoffensive weak cognitivism is: the feeling theory is a straw opponent, since one must go back nearly a century to find a defender of it.5

Weak cognitivism also dominates cognitivism among psychologists. Magda Arnold was most responsible for the current rebirth of cognitivism in psychology. She argued that emotions require appraisals, but allowed that these appraisals can be “direct, immediate, non-reflective, non-intellectual […] automatic […]”.6 There has been some divisive debate about cognitivism in psychology,7 but more and more it is fine varieties of weak cognitivism that are being debated. Gerald Clore and Andrew Ortony have been staunch defenders of cognitivism about emotions, but recently answered all the standard objections to such cognitivism with an identification of cognition with representation: unconscious, fast, and subcortically-controlled emotional events are still cognitive, they argue, because they require some kind of representation.8 Thus, even classical conditioning is cognitive: “Classical conditioning involves a process whereby the meaning of one stimulus, the conditioned stimulus, is altered so that it comes to stand for the meaning of another, the unconditioned stimulus”.9

At the other extreme, much stronger views of cognitivism were articulated by a number of philosophers. The reasons for this are not difficult to identify: philosophers are frequently concerned with rationality, action, and the most advanced decision-making abilities of humans. These abilities seem to require complex and conscious cognitive contents, of a kind that are uniquely human and that may depend upon language. If our explanations of these abilities are to extend to emotions, it might seem that these emotions (or at least relevant instances of these emotions) require such cognitive contents. Anthony Kenny explicitly argued that emotions are propositional attitudes.10 Joel Marks argued that emotions are reducible to beliefs and desires.11 Donald Davidson’s interpretationism entails a similar form of belief-desire reductionism.12 Others have supported belief-desire reductionism in passing13 or have offered variations of it.14 Belief-desire reductionism is a very strong view if we have a robust theory about what beliefs and desires are—for example, if we believe they are propositional attitudes of which the agent must be conscious. Other forms of cognitivism include judgment theories, such as the one defended in Robert Solomon’s seminal work The Passions. Emotions on this view may require a complex cognitive state of the right kind: “The relationship between beliefs and opinions on the one hand and emotions on the other is not a matter of causation or coincidence but a matter of logic. The emotion is logically indistinguishable from its object: Once its object has been rejected there can be no more emotion”.15
A reading of this as a commitment to a strong form of cognitivism was consistent with some of the things which Solomon later claimed, such as: “all emotions presuppose or have as their preconditions, certain sorts of cognitions—an awareness of danger in fear, recognition of an offense in anger, appreciation of someone or something as loveable in love. Even the most hard-headed neurological or behavioral theory must take account of the fact that no matter what the neurology or the behavior, if a person is demonstrably ignorant of a certain state of affairs or facts, he or she cannot have certain emotions”.16

Belief-desire reductionism, and stronger versions of the judgment theory, are united in a commitment to explain emotions by way of cognitive states that relate to each other in ways that characterize human reason. These states may require both inferences and language. They also appear to be typically taken to be conscious and often uniquely human in their complexity. We can call these kinds of views strong cognitivism. The sharpest alternatives to strong cognitivism arose from biological approaches, which argued that emotions do not require propositional attitudes17 or do not require other complex cognitive states like discrete tokens of conscious symbols,18 and which stress in their explanations of emotions the purpose or behavioral role of these emotions and not their descriptive content. Solomon’s earlier claim that a person cannot be ignorant of the cause of some of their emotions, if meant to refer to basic emotions, is wrong. There is a vast body of evidence showing that we can be demonstrably ignorant of the object of some instances of some of our emotions.19

But if Solomon has sometimes described a view consistent with strong cognitivism, he has also well articulated the shift towards weak cognitivism. He has recently written that: “Thus the judgments that I claim are constitutive of emotion may be non-propositional and bodily as well as propositional and articulate, and they may further become reflective and self-conscious. What is cognition? I would still insist that it is basically judgment, both reflective and pre-reflective, both knowing how (as skills and practices) and knowing that (as propositional attitudes)”.20 Here “cognition” and its synonym “judgment” are as generous as, if not more generous than, Broad’s notion of emotions as representational. It is hard indeed to identify any mental event that does not fall under one of these kinds that Solomon identifies. If we accept a notion of cognition this broad, every serious scholar of emotions is a cognitivist.
2. EXPLANATORY AND DESCRIPTIVE COGNITIVISM

There is no doubt here something like progress: evidence has shown that some instances of emotions may have representational content that is not conscious or that arises in structures for which there are homologs in non-human animals. A consensus has arisen that recognizes this variety, and weak cognitivism has become the dominant view. But the consensus is superficial, for the philosophy of emotion remains divided between those who describe emotions in terms of their content, their relation to decisions, and their role in rational thought, on the one hand; and the biologically-oriented naturalists who describe emotions primarily in terms of their role in action, their evolutionary heritage, and their homology, on the other hand.
One might conclude that this is merely a difference in methods, but the persistence of this division reveals a deeper issue implicit in the earlier, albeit confused, debates for and against various flavors of cognitivism. This deeper issue is whether the behavioral roles of emotions are to be explained by the representational content of those emotions, or whether their representational content is to be explained by their behavioral roles. Let us call these positions explanatory cognitivism and descriptive cognitivism, respectively. In explanatory cognitivism, certain representational states come prior in our explanation of the behavioral roles of some emotions; in descriptive cognitivism, we use the behavioral role of certain emotions to explain the kinds of cognitive states that typically cause or in part constitute them.

I will assume, as all the weak and strong cognitivists appear to do, that some emotions have as a normal purpose some behavioral role(s). I will also restrict myself to examples of fear, disgust, and anger, since I believe that these are basic emotions and I also believe there is no natural kind corresponding to all of the things we may call “emotions”. That is, the conclusions I draw here may not apply to all of the states we call “emotions”, but do apply to the most characteristic group of those states.

If we were to look at a handful of emotions, it is not hard to form a likely story about the kinds of contents that a cognitivist might ascribe as their causes or constituents. The psychologist Richard Lazarus has offered one such list, which he calls the “core relational themes” of the emotions. Here are his themes for three of the emotions Lazarus identifies:21
Emotion    “Theme”
Fright     Facing an immediate, concrete, and overwhelming physical danger.
Disgust    Taking in or being too close to an indigestible object or idea (metaphorically speaking).
Anger      A demeaning offense against me and mine.
Many other such lists have been made by philosophers and psychologists, and for our purposes here they are all relevantly similar. One minor difference is that in philosophy Anthony Kenny developed an approach that treats situations like those Lazarus describes as kinds of (logical) objects. Kenny called such a general representational kind underlying an emotion its formal object.
For example, one might claim that the formal object of fear will be the dangerous, of disgust the nauseating, of anger the demeaning, and so on. In terms of explaining behavior, these two ways to describe the content or object of the emotion are equivalent.

The explanatory cognitivist account of the content and purpose of, say, fear would go something like this: an organism perceives that it is in danger, and it desires not to be in danger. A strong cognitivist would now identify fear with these very states. Weaker forms of cognitivism would see in this belief and desire necessary but perhaps not sufficient conditions for the emotion of fear. But what either approach may share is a presumption that there is something like a representational kind, danger, which will allow us to explain in turn the behavioral role of fear. Flight can (in some cases) satisfy the desire to no longer be in danger, and so it is the satisfaction of this desire that explains why we flee, and thus which explains the behavioral role. Similarly, if the behavioral role of anger is to (typically) attack, then an explanatory cognitivist might argue that the formal object of anger is some offense, and anger is in part constituted by the desire not to be offended. Attacking the thing that offends us may satisfy this desire by removing that thing. This is why attack is the behavioral role of anger. And so on—it is not hard to see how to give such an account for other emotions, where the behavior of the emotion is explained by the kind of content attributed to the emotion.

Explanatory cognitivism is a compelling theory. It is consistent with weak cognitivism, and so in this regard consistent with our best scientific understanding of emotions. It allows us to apply reason in a common sense way to explain the kinds of actions that we associate with emotions: for example, it is right to avoid dangerous things, so the recognition that something is dangerous which supposedly accompanies fear explains in a simple and straightforward way why we flee the things we fear. I believe that it is this compelling simplicity of explanatory cognitivism, which fits so well with the highly intellectualist views of action that philosophers have developed, that is the true underlying motivation for the various traditional and stronger forms of “cognitivism”.

However, the problem with explanatory cognitivism is that it assumes that there is something like a formal object for each emotion, which we can identify without reference to the behavioral role of the emotion (else we would be caught in a vicious circularity). This assumption is false. To show this, we need to look a bit more closely at the notion of a formal object or type of content for an emotion. The traditional debate about cognitivism would have focussed on the degree to which such contents must be conscious, under rational influence, and so on. The victory of weak cognitivism has settled this kind of issue, and there is consensus that something like these states can be unconscious, can have homologs in non-human animals, and so on. As a result, such characterizations as those quoted from Lazarus above appear perfectly innocent. But a deeper issue remains for the many weak cognitivists who are explanatory cognitivists. Quite simply: what is, for example, the dangerous?
It is easy to find exceptions to Lazarus’s description of the content of fear—that what we fear is “facing an immediate, concrete, and overwhelming physical danger.” It is obvious that there are things that we know are dangerous and do not fear, just as there are things we fear and know are not dangerous (e.g., it would seem that some people may fear flying but not driving, even though they know that flying is safer than driving). Also, some dangerous things lead to other emotions: rotten food is a threat to our health, and could harm us, yet our reaction to such food is typically disgust. (To further the dissociation, there can be toxins that cause fear but not disgust: one might fear but not be disgusted by a tasteless, odorless liquid if told it contained a deadly poison.) And we fear for other people, and even for fictional people. Also, it is highly plausible that some threats cause not fear but anger. Similar kinds of difficulties will arise for attempts to describe the contents of other emotions. I will get angry if I hit my thumb with a hammer, but I cannot see how that demeaned or offended me. One can of course say that pain demeans, but obviously that is to propose an unfalsifiable theory. Such problems are easy to find, and they arise for all the other available lists of contents for the emotions, which do not significantly differ from Lazarus’s list.

The operational terms of psychologists can be no help here. It is true that psychologists have a notion of noxious stimuli that are identified and used in fear conditioning and other kinds of experiments concerned with affects. Psychologists also recognize that we inherit predispositions for reactions to some of these stimuli (which are then called “unconditioned stimuli”). But the operational aspect of these stimuli for fear conditioning—that is, the condition used to identify them as noxious—is that they can successfully be used in fear conditioning. Thus, fear behavior is required to identify the stimulus as noxious; behavior precedes and identifies the relevant content in the explanation.

The challenge for explanatory cognitivism is thus to give an account of the normal kind of content or formal objects of various emotions without viciously circular reference to the proper behaviors of those emotions. Imagine that we have a list of all the things that Jones fears. Jones will have certain predictable reactions when confronted with these things, including facial expressions and a motivation to withdraw. Some of the things that she fears she will openly flee from. The explanatory cognitivist wants to explain these behaviors by reference to the kinds of content that typify the things on this list. One needs then to explain what the things on this list have in common, without reference to the behavior of the emotion, on pain of pernicious circularity in our explanations.

My hypothesis is that, at least for the basic emotions, this challenge cannot be met: there is no characterization of the fearful, the disgusting, the infuriating, or the formal objects of many other emotions, that does not require reference to the behaviors of the relevant emotions. The dangerous, for example, is a common sense notion and a useful one, but in this context it demands explanation and no explanation is forthcoming. This hypothesis is a negative claim, and cannot be established directly. However, it is easy to see that the problem with the kind of list Lazarus has given us is systematic. The contents of our basic emotions are not explained by descriptions that refer only to other contents, such as broadly construed goals and social roles. Given the plausibility of this hypothesis, an alternative explanation of what, if anything, these contents may share is preferable.
An alternative and better explanation is that it is not a kind of content, but rather a kind of behavior, that should come first in our explanation. It is not the content that “defines” the emotion,22 but rather the behavior which primarily determines (or, in our evolutionary past, determined) all the other features of the basic emotion. I have called this view descriptive cognitivism because it allows that it may often be right to describe a basic emotion as having the relevant kind of content, but such a description will not properly explain the behavioral role of that emotion. For descriptive cognitivism, we explain the relevant content of an emotion in terms of the relevant purposeful behavior of the emotion.

To return to the case of Jones and the list of things that Jones fears: the descriptive cognitivist claims that what this list shares is all and only that these are things that it is appropriate for Jones to flee. Thus, the guiding hypothesis of descriptive cognitivism is the claim that the right kinds of contents of a basic emotion can only be properly identified after we have some understanding of the appropriate behavioral role of the emotion. Fearful things are the kinds of things that it is appropriate to flee. Infuriating things are the kinds of things that it is appropriate to attack. Disgusting things are things it is appropriate to withdraw from and to vomit if ingested. There is nothing else that unites the many contents that may be fearful, infuriating, or disgusting. No unified account in terms of content alone will be possible. Only by reference to the relevant appropriate behaviors can we find any unity.
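The structure of the argument can be made concrete with a small toy sketch. It is ours, not De Lancey’s formalism; the list of feared items and both predicates are invented purely for illustration:

    # Toy rendering of the circularity worry. All data and predicates are
    # invented for exposition.

    jones_fears = ["heights", "spiders", "flying", "public speaking"]

    def is_objectively_dangerous(thing):
        """A content-first predicate: it tries to characterize 'the
        dangerous' without mentioning behavior, and leaves exceptions."""
        return thing in {"heights", "spiders"}  # flying is statistically safe

    def is_appropriate_to_flee(thing):
        """A behavior-first predicate: it unifies the list exactly, but only
        by referring to the emotion's own behavioral role (flight)."""
        return thing in jones_fears

    print([t for t in jones_fears if not is_objectively_dangerous(t)])
    # -> ['flying', 'public speaking']: the content predicate leaves a remainder
    print(all(is_appropriate_to_flee(t) for t in jones_fears))  # -> True

The point of the sketch is only structural: any predicate that covers the whole list seems to smuggle in the very behavior it was supposed to explain.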
3. WHAT IS “APPROPRIATE”?

The claim of descriptive cognitivism—that the typical contents of some emotions are unified only in that they refer to the kinds of things it is appropriate to have that emotional response to—requires an explanation of what it means for the reaction to be appropriate. As representations, the contents of emotions will have to answer to the normative constraints to which all representations must answer.23 But emotional contents can be appropriate in an additional sense. To say that fear has a behavioral purpose is to say in part that it can succeed or fail. If the behavioral role of fear is to motivate flight from various kinds of things that it is beneficial to flee, then fear can fail: it might, for example, not lead us to flee things which it is beneficial to flee, or lead us to flee things it is not beneficial to flee. These normative constraints extend to the relevant contents. The kind of content that should cause fear is one that refers to an object or state of affairs that it is appropriate to flee. That is, the content can be appropriate or inappropriate based upon the behavioral role of the emotion.

There will not be a single account of how it comes about that something is appropriate to flee and thus is appropriately fearful. At least four different kinds of capabilities can be involved. First, and most basically, it will be appropriate to flee those kinds of things that it was adaptive for our ancestors to flee. Of some of these things, some organisms may have inherited representations. This is so even for humans. Some humans are born inclined to be afraid of heights, or spiders, or snakes. It is reasonable that these are things that these individuals’ ancestors found it beneficial to avoid.
Second, we can learn to fear things from direct experience. Fear conditioning theory and basic learning theory are founded on the notion that there are certain unconditioned stimuli, such as pain, that will cause an emotional reaction in us. For fear, these unconditioned stimuli are likely inherited representations of fearful things; that is, they are the things that it was on balance beneficial for our ancestors to flee (thus, we inherit the disposition to fear—and thus the motivation to flee—pain). But we also learn to associate these unconditioned stimuli with other stimuli (and thus have a disposition to fear—and thus the motivation to flee—pain-causing things). (As noted above, unconditioned stimuli cannot then stand as universal contents of the fearful. They are identified and defined solely in terms of their ability to cause the appropriate reaction; there is not enough regularity in the unconditioned stimuli for them to offer much explanatory power alone; and some learned fears and socially acquired fears are not explained by reference to unconditioned stimuli.)

Third, we can learn to fear things from social learning. A child sees what her parents and peers fear, and learns to fear these things also. She does not have to suffer harm or pain herself to acquire this fear. The benefits of such learning are obvious. This does not disappear with maturity: we continue to learn appropriate reactions from our peers, and sometimes adopt these reactions.

Fourth, we can decide to fear things. Humans have the ability to consciously consider their emotions and their consequent motivations, and use practical reason to generate new motivations. They can judge that it would be appropriate to fear something, and perhaps become afraid as a result.

This list may not be exhaustive, but it illustrates how we can make sense of the notion of appropriate responses without relying solely on additional representations and their contents. In descriptive cognitivism, the bottom floor of our explanation of emotional content refers to inherited behaviors, dispositions, learning abilities, and social contexts. These allow us to identify the kinds of contents that are appropriate for these emotions. This defense of descriptive cognitivism is not a claim that we shall never find it beneficial to explain some emotional action in terms of the contents that the emotion may have, and other beliefs we may have. For some of the things we call “emotions”, explanatory cognitivism may be correct. Also, even for basic emotions, it will often be convenient to describe their actions in terms of their content. The central claim is rather that such explanations cannot be the ground floor of our account of the nature of the content of the basic emotions. At the bottom of a complete explanation of these emotions lies action and our heritage of emotional activity.
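The second route, learning from direct experience, can be given a standard schematic rendering. The following sketch uses a textbook Rescorla-Wagner-style update; it is our generic illustration of fear conditioning, not a formalism from this chapter:

    # Sketch of associative fear learning: a neutral stimulus repeatedly
    # paired with an unconditioned stimulus (e.g., pain) comes to elicit
    # the fear response itself. Parameter values are arbitrary.

    def associative_strength(pairings, learning_rate=0.3):
        """Strength of the learned association after repeated pairings,
        rising toward an asymptote of 1.0 (error-driven update)."""
        v = 0.0
        for _ in range(pairings):
            v += learning_rate * (1.0 - v)
        return v

    for n in (1, 3, 10):
        print(n, round(associative_strength(n), 2))  # 1 0.3, 3 0.66, 10 0.97

Note that even this sketch presupposes exactly what the text observes: the unconditioned stimulus is picked out by its power to cause the fear reaction, so behavior is already doing the identifying work.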
4. DESCRIPTIVE COGNITIVISM AND THE EXPLANATION OF EMOTIONAL ACTION

I have argued that explanatory cognitivism about the basic emotions is false, because we cannot describe the formal objects of such emotions without reference to the appropriate behaviors they generate. Descriptive cognitivism is a viable alternative to explain fully the relation between behavior and content for some emotions, and to identify the kinds of content that the relevant kind of emotion may have. There is nothing here of behaviorism: the notion of behavior that one finds in behaviorism is very impoverished, and it is a false dilemma to propose that our only alternative to explaining action with cognitive contents is to eliminate content in favor of behavior. An important and neglected third way exists: to recognize the rich complexity of behaviors of the kind that typify certain emotions, and to treat these as events that the theory of mind must include—along with judgments and representations and inferences and so on—as among the fundamental explanatory kinds required to understand the mind. A theory of biological purpose will allow a treatment of these actions as purposeful.24

Thus, along with the benefit of being true, there is additional explanatory power to be gained from descriptive cognitivism. Two observations are relevant in this regard. First, descriptive cognitivism coheres well with a version of the affect program theory that is our best theory of the basic emotions. Second, descriptive cognitivism can play a role in a revised way of understanding action and the mind.

The affect program theory is the view that the basic emotions are syndromes of coordinated bodily responses, such as autonomic changes, changes in perception, and action preparedness. A version of this theory has been defended in philosophy by Paul Griffiths, and in psychology by Paul Ekman, Jaak Panksepp, and many others.25 Versions of the view are widely held in psychology. Many scientists have argued that the affect programs have evolved from kinds of inherited action programs, such as flight for fear. An emended form of the affect program theory holds that the basic emotions have as part of their syndrome these action programs.26 Although the affect program theory can be seen as an anti-essentialist view of the basic emotions, since it is typically unproductive to ask which features of the affect program are necessary and which are not, on this version of the theory the action program is the most important such feature, and explains all of the others.

This perspective has a great deal of parsimony to recommend it. Not least of these benefits is that the otherwise mysterious notion of motivation is in these cases explained: fear is a motivation to flight because the action program for flight is active. What needs to be explained instead is why such a program is often suppressed or redirected; but it is not hard to hypothesize how this may work, since advanced theories of such factors exist. In addition, the recognition that basic emotions necessarily motivate can explain a number of other phenomena, including some odd forms of expressive behavior.27 Combined with descriptive cognitivism, it is not hard to see how this emended affect program theory will explain the relevant kinds of emotional content. Basic emotions evolved from action programs, and ultimately must be explained in terms of the role of these action programs.

Regardless of whether one is committed to a version of the affect program theory, there is a significant corollary benefit to descriptive cognitivism. The struggle between explanatory and descriptive cognitivism touches on many considerations of human action and motivation.
Giving primacy of place to the behavioral purpose of basic emotions is consistent with a turn away from the intellectualist view of mind that characterizes strong cognitivism about emotions, and it also opens the door for a vastly richer discussion of motivations and their contents. Philosophical discussion of motivation has been scandalously impoverished, depending typically on a single general motivational state called “desire”. We have no reason to believe that there is such a generic state. Alternatively, using “desire” as a family resemblance term continually threatens to group together very distinct kinds of things. Discussions of moral psychology, of action theory, or of the philosophy of mind must recognize that there are many diverse kinds of motivations, which have very diverse relationships to other mental states, and which cannot be explained solely in terms of their content. Descriptive cognitivism encourages us to abandon these pernicious prejudices because we must recognize distinct mental kinds dependent upon distinct behavioral roles, and those behavioral roles themselves should become fundamental explanatory posits of our theories of mind. What we can gain is a more accurate and predictive view of human thought and behavior.

NOTES
1. For the sake of brevity, I will hereafter use the term “cognitivism” and its cognates to mean cognitivism about emotions. Philosophers and psychologists have many other views called “cognitivism”, and nothing I say here refers to those.
2. Throughout this paper, I will use the terms “normal” and “appropriate” in a teleofunctional sense, as opposed to a statistical sense. See Millikan (1984).
3. The various forms of cognitivism about emotions tend to tread close to a stipulative claim, and thus an unfalsifiable and non-empirical assertion—if not definition—that emotions are representational states of a certain kind. Such an a priori form of cognitivism would be a serious mistake, since all the phenomena in question are natural physical events. In this paper I refer only to cognitivism about emotion as an empirical claim.
4. Broad (1954, p. 203).
5. However, there is a tendency among some cognitivists about emotion to treat the feeling theory as the only alternative to stronger forms of cognitivism, and to assume that the only weak spot in cognitivism is that it does not easily account for the phenomenal experiences of various emotions. This is to set up a false dilemma. The biological approach to emotions stresses their behavioral role, and not their “feelings”.
6. Arnold (1960, p. 174).
7. See, e.g., Zajonc (1980, 1984); Lazarus (1982).
8. Clore and Ortony (2000, pp. 41ff.).
9. Ibid., p. 47.
10. Kenny (1963).
11. Marks (1982).
12. Davidson (1976, 1984).
13. See, e.g., Lormand (1985).
14. Nash (1989).
15. Solomon (1977, p. 178).
16. Solomon (1993, p. 11).
17. Griffiths (1997).
18. DeLancey (2002).
19. See, e.g., Zajonc (1980); Öhman et al. (1989, 1993).
20. Solomon (2003, p. 16).
21. Lazarus (1991, p. 122).
22. As Clore and Ortony (2000, pp. 41-42) claim.
23. That is, we can set aside in this discussion the problem of representational correctness. I may have a predisposition to fear spiders, and judge that an ant is a spider and be afraid. This fear is inappropriate because the representation is wrong. This is one sense in which an emotional content can be appropriate, but it is not the sense relevant to the discussion here.
24. I refer here to such accounts as one finds in Millikan (1984) or Schlosser (1998) and DeLancey (2006).
25. Griffiths (1997); Ekman (1980); Panksepp (1998).
26. DeLancey (2002).
27. Kovach and DeLancey (2005).
CHAPTER 8

CATEGORIZATION AND CONCEPTS: A METHODOLOGICAL FRAMEWORK
Cristina Meini and Alfredo Paternoster1
The subject of concepts is impressively huge. Indeed it involves just about all areas of cognitive science. Requirements on a theory of concepts typically include such issues as compositionality, acquisition, prototypicality, intentionality, intensionality (or perspectivality), publicness and normativity.2 As it would be impossible to account for even only half of these issues in such a short chapter as this, we provide neither a survey nor a comprehensive theory of concepts. No less ambitiously, our aim is to propose a framework in which a scientific study of concepts could be fruitfully pursued. The crucial question of the inquiry is “Which mental endowments are required in order to attribute to a system the possession of concepts?”.
1. A METHODOLOGICAL PREMISE

Two oppositions can be found in the philosophical literature on concepts. First, there is a theoretical opposition between a characterization of concepts in terms of abilities (or capacities), and a characterization of concepts in terms of things, or particulars. The second dichotomy is meta-theoretical. On the one hand, theories of concepts are taken to provide individuation conditions; on the other hand, theories of concepts are supposed to provide possession conditions. This latter opposition concerns the kind of question a theory of concepts should answer: “What kind of entity is a concept?”, rather than “What is required in order to ascribe a concept to an agent?”.

At first glance, one would say that endorsing either of the meta-theoretical options does not determine the theoretical way of characterizing concepts. For instance, one could start with the problem of establishing what the possession conditions of a concept are, and eventually reach one of two conclusions: concepts are mental particulars or they are a family of abilities. Indeed, generally speaking, it would appear strange that constraints on the form of a theory could also determine its content. However, as Fodor correctly points out, there is an historical-factual, though non-logical, link between the characterization of concepts in terms of abilities and the idea that the core problem of a theory of concepts is to determine possession conditions.3 In fact, this meta-theoretical option is usually motivated by skepticism towards the possibility of identifying concepts with particulars, “things”. The classical reference is Wittgenstein (1953), whose explication of the notion of meaning in terms of use/competence derives mainly from his negative attitude as to the reification of meaning. A combination of skepticism and anti-metaphysical concerns is, therefore, the main reason for “reducing” concepts to abilities.4

According to Fodor, this attitude has unduly influenced cognitive science. It would even be the most serious error of cognitive science, preventing cognitive scientists from adopting the only plausible theory of concepts: informational atomism. According to informational atomism, concepts are unstructured mental symbols (this is the atomistic aspect), whose content is determined by a causal-nomological relation with objects in the world (this is the informational aspect—the idea is that the concept, e.g., CAT is the mental symbol whose tokens are systematically caused by tokens of real cats in the world).5 In Fodor’s view, it is precisely because cognitive scientists take concepts to be abilities—rather than mental particulars—that they go wrong in regarding them as complex (non-atomic), that is, in considering inferential patterns as constitutive of conceptual content. In other words, Fodor’s claim is that cognitive science is flawed because it does not recognize that metaphysical questions concerning concepts are independent of and precede epistemological questions about concepts. For instance, one issue is what the concept CAT (or, the concept of a cat) is—namely, the mental symbol nomologically linked to real cats; quite another issue concerns the mental structures allowing us to recognize cats, or to infer that cats are animals. Thus, in particular, the inferential links are not constitutive of concepts or, to put it in a slightly different way, agents’ knowledge about cats is not relevant to establish what such a concept is.

We shall persist in this alleged (meta-theoretical and arguably theoretical) mistake. We do not a priori reject informational atomism; however, whichever theory of concepts turns out to be right, the priority accorded to the epistemological question and the related theoretical preference for a characterization in terms of abilities are based on the following reasons:
a) Being interested in concepts essentially from a psychological point of view—that is, in concepts regarded as those mental endowments which allow certain epistemic performances—it is quite natural to establish a close link between possession conditions and the manifestation of certain abilities. In other words, one cannot state what a concept is independently of what agents are able to do.

b) In order to be considered as a concept, a mental representation must work in a certain way, and a mere description of the representation as such hardly constitutes an exhaustive description of how it works. As Ned Block nicely puts it, “How or what a representation represents is a matter of more than the intrinsic properties of the representation [...]; in particular, it is a matter of a complex relational property: how the representation functions”.6
c) To account for concepts in terms of capacities seems to fit scientific practice better, in the following sense. The approaches underlying “metaphysical” theories of concepts, including informational atomism, tend to answer the question of what concepts are (what concepts must be) on the basis of purely philosophical, a priori requirements or constraints, and only then to find the relevant, psychological empirical evidence. By contrast, a naturalistic, capacity-oriented approach takes into consideration from the start the empirical evidence concerning behavior and the mental endowments postulated by cognitive scientists. Or, at any rate, this is the kind of approach we want to pursue. To establish a priori what a concept must be is to put the cart before the horse, so to speak.7
In light of the above considerations, our strategy will be the following. We shall look for some abilities with prima facie relevance to concept attribution, beginning with the most basic, and gradually increasing in their degree of sophistication. For each individuated ability, we shall discuss whether it is required for concept possession. Finally, we shall argue that concept possession could be identified with a certain collection of these abilities. The relevant abilities can easily be characterized in computational terms. This approach is naturalistic insofar as it proceeds by assimilating human mental processes to non-human (proto)mental processes, rather than assuming a clear-cut distinction.8 We call this a “bottom-up” approach insofar as it starts from what is simpler, more basic (perceptual), and climbs, so to speak, to what is more sophisticated (inferential, linguistic).9

This strategy does not beg the question of which theory of concepts is right. There are good reasons to think that each theory of concepts on the market gives an account of some aspects of the problem, being unable to account for others. However, we expect our analysis to point out the merits of some theories and the shortcomings of others.
2. CONCEPTS AND CATEGORIZATION ABILITIES

Several mental abilities are relevant to concept attribution or, in brief, to concepts. Of course, the notion of mental ability can be picked out at different levels of analysis. Consequently, different kinds of mental abilities will be considered relevant. For instance, following a rough and popular classification schema, there are three macroabilities relevant to concepts: categorization, inference and language. We take categorization to be the core ability relevant to concept possession, and we shall distinguish different kinds of properties instantiated at different levels of categorization. We shall argue that, in order to ascribe concepts, some sophisticated aspects related to categorization must be present. As we shall see, these aspects involve inferences. In fact, the categorial mechanisms that we propose are sophisticated enough to support at least some kinds of inference. As far as language is concerned, it is not among the topics of this chapter: since we shall argue that at least some concepts can be ascribed to non-linguistic creatures, language does not appear to be relevant in general, though it is clearly necessary to attribute certain concepts.

By “categorization” we mean, at a first approximation, the process by which a natural or artificial system subsumes a stimulus under a class. As Medin and Aguilar nicely put it, categorization is “the process by which distinct entities are treated as equivalent”.10 Thus, an agent can be said to be somehow able to categorize if there is evidence that he or she takes some particulars to be put together under some aspects.11 Whatever pre-theoretical, intuitive notion of concept is assumed, it has to account for categorizing abilities; in other words, being endowed to a certain extent with this ability is a necessary condition in order to be able to conceptualize. The ability to categorize is a minimal requirement for ascribing concepts, since concepts are in the first place tools for putting together particulars for a variety of goals: giving appropriate behavioral responses to stimuli of a given kind, forming inductive predictions about properties that have to be applied to new (never before encountered) particulars, and so on. Indeed, all theories of concepts on the market provide accounts of categorization. These theories can be distinguished essentially by the fact that they offer different explanations of how categorization works.

At first glance, there are two crucial aspects involved in the formation of categories. The first is abstraction: to “build” a class, one abstracts from some features and focuses on some other features. Trivially, whether an apple is green rather than red is not important for an apple to be categorized as an apple. Please note that we are not talking about sophisticated “abstract” concepts, such as DEMOCRACY or ELEGANCE, but only about the ability to pick out some aspects, and exclude others. Without this ability no particular could be considered on a par with another particular under some aspect. The second aspect, which is somehow symmetrical to the previous one, is directedness, that is, the ability of certain systems to carry information on, or represent, entities in the world. As Murphy puts it, carrying information on the world is the function of concepts.12 No matter how a category is built, a pattern is generated that is regularly associated with something in the world. Something mental mediates the stimulus and the behavioral response. This corresponds to a (very minimal) notion of representation. That is to say, concepts require representational abilities.

There is some analogy between this requirement and what in the philosophical literature is often presented as a desideratum on a theory of concepts, namely, that concepts must have an intentional content. So, for instance, Prinz claims that “concepts represent, stand in for, or refer to things other than themselves”.13 However, we prefer to dispense with this way of speaking, which is too committing, for two reasons. First, the intentional jargon could suggest the idea that there are intrinsically content-bearing mental states, and, as we explained in the previous section, this does not fit well with our epistemic, rather than metaphysical, approach. Second, the attribution of intentional contents could suggest that the kind of mental state relevant to concept possession is highly sophisticated, maybe language-involving, and this would be question-begging.
What we want to outline is rather a sort of bare, protoreferential ability, which is probably among the conditions enabling the instantiation of full-blooded referential relations. The idea is that categorization necessarily involves mechanisms to carry information on selected entities in the environment. Any kind of categorization involves the instantiation of some mechanism that has the function of indicating or standing for—representing—a certain collection of particulars. Yet this does not amount to a full notion of reference, which, at least according to some authors,14 requires a system of symbols each standing for something.

That said, we concede that the ascription of conceptual abilities entails the ascription of some kind of content. In fact, however rough a categorization is, it involves a way of discriminating such and such things from such and such other things, and it is hard to see how this situation can be described without introducing a distinction between two contents, say A and B. Note, however, that it may not be easy to determine exactly which contents A and B are. In a given situation there seem to be many different thoughts and maybe concepts that can be attributed to a non-linguistic creature. It is difficult to say to what extent this indeterminacy can be eliminated. As we shall see later, it may be argued that contents (in general) can be attributed to non-linguistic creatures as well as to humans, and the difference in conceptual mastery is a matter of degree: humans (thanks to language) cut things finer, that is, more contents and finer distinctions of content can be ascribed to them. Therefore, our notion of directedness does not suggest that language is necessarily involved in categorization. As we shall see below (subsection 2.1), many behavioral patterns, of different degrees of sophistication, show that a given stimulus has been subsumed under a certain category. For instance, in order to be credited with perceptual categorization, an animal need merely behave differently in response to different kinds of stimuli.
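Before turning to the animal evidence, the two features just discussed can be put schematically. The following sketch is our own illustration, not the authors’ formalism; the feature names and the toy category are invented for exposition:

    # Minimal sketch of categorization as abstraction plus directedness.
    # Feature names and the toy category are invented for illustration.

    def categorize(stimulus):
        """Subsume a stimulus under a class on the basis of some features,
        abstracting away from others (here, color)."""
        relevant = {k: v for k, v in stimulus.items() if k != "color"}  # abstraction
        if relevant.get("shape") == "round" and relevant.get("stem"):
            # The same output pattern is regularly produced by apple-like
            # stimuli: a very minimal informational link (directedness).
            return "APPLE"
        return "OTHER"

    print(categorize({"shape": "round", "stem": True, "color": "green"}))  # APPLE
    print(categorize({"shape": "round", "stem": True, "color": "red"}))    # APPLE

Green and red apples are treated as equivalent precisely because the color feature has been abstracted away.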
2.1 Evidence from animal categorization: Rats
As stated above, categorizing a stimulus amounts to subsuming it under a class. Many experimental data provide evidence that even animals phylogenetically distant from us are able to categorize stimuli. Take, for instance, an experiment aimed at establishing to what extent categorization in rats is accurate.15 If the experimenter feeds a rat on some coffee at t1 and causes it to have a stomach ache at t3 (by a lithium chloride injection), the next day the rat will be off coffee, i.e., the last thing it has eaten. If, however, the rat eats some sugar at t2 (t2 < t3), then it will avoid sugar rather than coffee. The results are shown in Table 1.
         t1       t2      t3             Behavior
Case 1   Coffee   —       Stomach ache   Coffee avoidance
Case 2   Coffee   Sugar   Stomach ache   Sugar (not coffee) avoidance

Table 1
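On a natural reading of these data, the rats’ rule keys on temporal order: avoid the food eaten most recently before the sickness. A minimal sketch of that rule (the event encoding is ours, not the experimenters’):

    # Sketch of the temporal-order rule suggested by Table 1: avoid whatever
    # food was eaten last before the stomach ache. The event encoding is ours.

    def food_to_avoid(events):
        """events: a time-ordered list of ('ate', item) or ('sick', cause) pairs."""
        eaten = []
        for kind, item in events:
            if kind == "ate":
                eaten.append(item)
            elif kind == "sick" and eaten:
                return eaten[-1]  # the most recently eaten food
        return None

    case_1 = [("ate", "coffee"), ("sick", "lithium chloride")]
    case_2 = [("ate", "coffee"), ("ate", "sugar"), ("sick", "lithium chloride")]
    print(food_to_avoid(case_1))  # coffee
    print(food_to_avoid(case_2))  # sugar, not coffee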
These data suggest that the rat is able to represent a category such as “stuff-to-avoid” in a rather accurate way, discriminating bad food on the grounds of non-trivial cues such as the temporal order of events. The two criteria regarded as necessary for categorization, i.e., abstraction and directedness, are both satisfied: rats abstract from differences among particular foods and pick out the common attribute of noxiousness; it is this feature that determines behavior, implicitly establishing an informational link with bad food.

A third feature of animal categorization has sometimes, and in our view correctly, been highlighted in the literature. According to Dretske, in order to ascribe a genuine representational ability to a system, the system must be endowed with multimodality, that is, the ability to pick out one and the same feature of a stimulus by multiple sensory channels.16 Multimodality allows us to establish what the content of a representation is. In fact, when there is only one sensory channel, we have no basis to claim that the content of a representation is the external (distal) stimulus rather than, e.g., the proximal stimulus. This systematic ambiguity in content determination is removed in the multiple-sense case, because there are two distinct paths connecting the endpoints, that is, the relevant representation and its object.17 In other words, multimodality allows us to establish whether an animal is representing or misrepresenting something. Therefore, Dretske’s argument is fundamentally that a genuine representational ability also requires the ability to misrepresent, which, in turn, requires multimodality. Multimodality seems to be present in many vertebrates, including rats.

Therefore, we argue that bare animal categorization is characterized by abstraction, directedness and multimodality. Is this “bare categorization” enough to attribute concepts? Let us discuss another example, involving more complex animals, such as monkeys. As we shall see, monkeys’ behavior is probably easier to interpret, in the sense that the properties we have individuated so far are more apparent.

2.2 Monkeys
Monkeys
The example we are going to discuss is taken from Cheney and Seyfarth (1990), who observed a group of vervet monkeys living free in a park. Vervet monkeys, which live in groups of 10-30 individuals, engage in complex vocal interactions. Each individual uses a set of 25-30 signals, which appear to be messages to conspecifics. The most frequent communication contexts are dangerous situations, search for food and sexual interaction. Each of these contexts is characterized by the use of highly specific vocal signals. For instance, three different alarm calls are produced in the face of different predators such as snakes, birds of prey (usually eagles) and big cats (typically leopards). The monkeys’ reaction upon hearing an alarm is also very specific: leopard alarms make vervets run into trees, whereas eagle alarms cause them look up or run into bushes; in response to a snake alarm, vervets keep still and peer at their surroundings.
CATEGORIZATION AND CONCEPTS
111
Note that the escape reaction is not triggered by a unique cause. In fact, even when it is not alerted by a conspecific alarm, a monkey runs away when it sees a leopard (over and above alarming its conspecifics). Therefore this is also a case of multimodal categorization, as recognition is performed through different sensory channels. 2.3
On-line and off-line processes
Although the behavioral pattern of vervet monkeys is quite sophisticated in comparison with the rats’ behavior discussed in subsection 2.1, we argue that this kind of categorization is still too coarse-grained. In fact, the behavior of vervet monkeys in the face of an alarm is a rigid reaction. Whenever a leopard is perceived, a monkey cries and escapes. On the other side of the communication channel, whenever a monkey perceives a leopard alarm, it escapes. That is to say, the stimulus “leopard alarm” only triggers escape reactions.

Let us work out this notion of rigidity. In the examples discussed so far, animal behavior is guided by perceptual representations that trigger the relevant action. However, two limits are intrinsic to these kinds of behavioral patterns. First, the relevant representations are only triggered by an external event. This is exactly what makes them (stricto sensu) perceptual. As a consequence, the action is also rigidly linked to the perceptual stimulus, although it is mediated by representations. Second, one and the same action is necessarily triggered in the presence of a given stimulus: when vervet monkeys perceive a leopard alarm, they cannot but react, rigidly, by crying and escaping. This second kind of rigidity has indeed a dual aspect. On the one hand, the stimulus-driven representation necessarily triggers an action. On the other, the triggered action is always the same, or, at most, one among a very limited range of actions. Taken together, these constraints shape a working modality that could be said to be on-line, in the sense that the representations involved are tied to environmental contingencies and to the animal’s current activity.

Admittedly, rigid, species-specific patterns of behavior are not useless. On the contrary, being the product of natural selection, this rigidity is typically part of a good strategy to avoid mortal dangers. To use a slogan, acting is better than standing still. Rigid patterns of behavior are typically very cautious, in the sense that they also happen to be triggered by false positives: sometimes a danger is seen in a harmless situation. Again, this is a good strategy for survival in simple systems. However, an animal belonging to a species characterized by rigid patterns of behavior gets a selective advantage when it becomes able to inhibit an action: its cognitive system is less deceived by false positives and can “decide” when to act. To acquire the power to inhibit behavior constitutes a significant cognitive improvement for a biological system.

The capacity to inhibit actions clearly weakens the rigidity of the categorization models discussed earlier, improving the degree of sophistication of the representation. A good way to account for the overcoming of rigidity is given by the notion of detachment, that is, the breaking of the connection between a stimulus—or a perceptual representation—and the action. This kind of detachment accounts for what we called earlier the second source of rigidity. In order to give up the first source of rigidity as well, it must be possible to instantiate a mental representation without the stimulus being there, that is, independently of perceptual contingencies. This is again an instance of detachment, insofar as the link between the perceptual stimulus and the representation (as well as the action) is broken. A system exhibiting a kind of information processing characterized by this double possibility of detachment can be said to work in off-line mode.18 Therefore, the off-line modality can be defined as the conjunction of the following two features, both sketched below:

1) The ability to activate representations in a top-down manner, without requiring the presence of a stimulus (this is the input-side detachment ability).

2) The ability not to trigger, in the face of a given stimulus, the (complex) action which is normally triggered by that stimulus (this is the output-side detachment ability, i.e., the capacity to inhibit an action).19
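Here is a minimal sketch of the two detachments (our illustration under invented names; the chapter itself offers no code):

    # Sketch of the two detachments that define off-line processing.
    # Class and method names are invented for illustration.

    class OnlineAgent:
        RESPONSES = {"leopard": "flee"}

        def perceive(self, stimulus):
            # Stimulus-driven representation that triggers its action:
            # no detachment on either side.
            return self.RESPONSES.get(stimulus, "ignore")

    class OfflineAgent(OnlineAgent):
        def imagine(self, content):
            # Input-side detachment: a representation is activated top-down,
            # with no stimulus present (e.g., from long-term memory).
            return "representation of " + content + " active"

        def perceive(self, stimulus, inhibit=False):
            # Output-side detachment: the normally triggered action can be
            # suppressed, e.g., when a false positive is suspected.
            action = super().perceive(stimulus)
            return "inhibited: " + action if inhibit else action

    agent = OfflineAgent()
    print(agent.imagine("leopard"))                 # no leopard in sight
    print(agent.perceive("leopard", inhibit=True))  # inhibited: flee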
Both cases of detachment, and paradigmatically the output-side one, are instances of sensorimotor loop breaks. We maintain that off-line processing, together with the features so far associated with categorization, are necessary and sufficient to attribute conceptual capacity. Patently, off-line processing is necessary for concept attribution, since there is a very strong intuition that rigidity (as assessed here) is incompatible with the possession of conceptual skills.20 On the one hand, it is hard to deny that the possession of concepts is related to intelligent behavior. This, in turn, is characterized by at least a certain degree of flexibility or freedom, i.e., by the possibility of triggering action in a non-rigid way. As we saw earlier, flexibility in this sense can be achieved by the double kind of detachment which constitutes our notion of off-line processing. On the other hand, the ability to act on the basis of unperceived goals seems to be central for conceptual endowment. Arguably, it is the possession of concepts that allows us, for instance, to figure out a hypothetical danger, in order to plan a preventive behavior. We may say that there are concepts where there are mental representations which can really stand in for the stimulus, so that the relevant behavior can be guided by the mechanism, instead of the stimulus (whereas, as we saw earlier, perceptual representations are still linked to the stimulus). From a slightly different point of view, the pre-theoretical notion of concept seems to require a kind of generality which is over and above the kind of generality involved in bare (non-human) categorization—the notion of generality captured by the feature of abstraction. The idea is that one and the same representation can be TP
exploited in several different mental operations; in particular, we can use it in reasoning. The independence from sensorimotor constraints is a special case of generality: among the different possible uses of a representation, there are situations which are not constrained to the "here and now". Following our definition of off-line processing, on the one hand representations can be activated in a top-down way (i.e., by imagery processes operating on long-term memory data) and, on the other hand, this activation does not necessarily yield the execution of any motor plan.21

More controversial is our claim that off-line processing is a sufficient condition for concept attribution. Some authors22 regard what we call the off-line processing ability as the mark of mere representational capacity, insofar as the break of a sensorimotor loop requires the "insertion" into the loop not of a merely stimulus-replicative representation—as would be the case with early vision representations—but of a rule-like representation. A representation is "rule-like" if it enables differentiated behaviors thanks to its conditional structure. This analysis is strictly tied to the possibility of representational error: both aspects of off-line processing are examples of misrepresentation. Of course, this proves nothing. It might be the case that the notion of concept and the notion of mental representation are one and the same. However, according to these authors, further constraints are required in order to ascribe concepts. Typically, the claim is that concepts require public language or at least a language-like vehicle, such as Fodor's language of thought.

This connection does not seem to us very convincing. Take, for example, Bermúdez (2003). He agrees that animals can think, but he dissociates animal thought from the possession of concepts, characterizing it as a sort of (sophisticated) perceptual thinking. His view rests upon Davidsonian-style arguments according to which there is a necessary link between concepts and metaconcepts. These arguments are far from conclusive.23 Other typical arguments aiming to impose further requirements on concept possession appeal to remarks on normativity. However, it is disputable that a psychological theory ought to account for the alleged normative nature of concepts. Moreover, a minimal normativity requirement, according to which there are correct and incorrect applications of concepts, can also be met by naturalist approaches, as is shown by several accounts of representational error, such as Dretske's and Millikan's.

In order to say something positive regarding the claim that off-line processing is also sufficient for conceptual competence, though in a minimal sense, let us discuss an example of off-line behavior. Imagine an animal that happens to move, for some reason, to a new environment. It has never been in this kind of environment, which is nonetheless compatible with its survival. For example, an animal usually sheltering in trees finds itself in a region where there are only short and thorny bushes. In order to find or build a shelter, the animal must consider alternative hypotheses, evaluating the consequences of each one. It could for instance evaluate whether it is better to find shelter near a rock or, rather, in a hole in the ground. The crucial point of the off-line modality is the ability to trigger mental processes in the absence of external stimuli.
These appear to be intelligent, rational and not species-specific behaviors, which are adopted in new, contingent situations. Is it
enough? Admittedly, it is largely a matter of stipulation. We are inclined to think that this is sufficient to ascribe conceptual competence, because we cannot see why one should deny that a kind of mental imagery able to select "intelligent" behavior is a form of reasoning. At least, the burden of proof seems to fall on the opposite camp. Therefore, the claim that the off-line modality is sufficient for concepts depends crucially on the feature that representations can be activated not only by objects in the perceptual field but also by top-down processes such as imagery or long-term memory. In fact, the involvement of this kind of process appears to be evidence that a kind of reasoning is available.

As we said earlier, this claim is to a large extent a matter of stipulation: the issue of establishing what counts as conceptual capacity is not purely empirical. Indeed, our proposal should not be taken as a dogmatic thesis about what a concept is (in terms of possession conditions). Rather, empirical data seem to show that the relevant abilities (bare categorization, that is, abstraction and directedness; multimodal categorization; off-line processing) are distributed along a continuum, such that the boundary between the non-conceptual (or pre-conceptual) and the conceptual is fuzzy. We might say that proto-concepts are included in the leftmost area of the continuum, whereas full-blooded concepts belong to the rightmost area. In the middle, there is a gray area in which conceptual capacities begin to emerge. This would provide further evidence for the widespread thesis according to which the concept of concept is vague, prototypical. In a way, it is only because of a sort of anthropological chauvinism that we are inclined to claim that concepts are located only in the rightmost part of the continuum.

To sum up, a system can be ascribed conceptual competence if and only if all of the following features are present:

a) abstraction
b) directedness
c) multimodality
d) off-line processing
The instantiation of features (a), (b) and (c) corresponds to the possession of mental representations. The fourth requirement, which is crucial in order to qualify some mental processes as conceptual to a certain extent, is better regarded as a property of the use of representations.

3. CONCLUSIONS

We have taken categorization as the core ability relevant to concept possession, and we have distinguished some different kinds of properties involved in different levels of categorization. Four constraints have been identified for the possession of concepts: abstraction, directedness, multimodality and off-line processing. The last
requirement, in particular, allows us to account for the popular (and plausible) thesis according to which inferences are necessary for concepts.24
NOTES

1 This is a thoroughly co-authored paper.
2 See, e.g., Prinz (2002).
3 Fodor (1998b, chapter 1).
4 See Fodor: "[… P]eople who start with 'What is concept possession?' generally have some sort of Pragmatism in mind as the answer. Having a concept is a matter of what you are able to do, it's some kind of epistemic 'know how' […]. The methodological doctrine that concept possession is logically prior to concept individuation frequently manifests a preference for an ontology of mental dispositions rather than an ontology of mental particulars" (1998b, pp. 3-4). Shortly afterwards Fodor suggests that skepticism is the skeleton in Dummett's closet (p. 5). Indeed Dummett defended a similar position in the philosophy of language, identifying the theory of meaning with the theory of understanding.
5 Please note that to say that concepts are mental entities endowed with a content could be misleading, since concepts, whatever they are exactly, are already contents—they do not require further interpretation. In any case, the point should be clear: the causal-nomological link with the world is what determines concept contentfulness. In other words, a concept is intrinsically a content-bearer or "self-interpreted", because the mental representation which carries the concept is causally linked with the world.
6 Block (1986, p. 668).
7 However, as we shall see, a certain extent of apriorism will also be unavoidable in our inquiry—as well as, we argue, in every inquiry aimed at outlining a theory of concepts.
8 See Brandom (2000) for the non-assimilative strategy.
9 See this volume, p. 12.
10 Medin and Aguilar (1999).
11 We shall always use "category" as synonymous with "class of objects". That is, our notion of category corresponds only to the extensional aspect of the notion of concept. Remember also that we are talking about subjective categories: we are not interested in the question of whether the categories formed by people are appropriate by certain public normative standards (naturalistic constraint).
12 Murphy (2002).
13 Prinz (2002, p. 3).
14 See, e.g., Deacon (1997).
15 McIntosh (1994).
16 Dretske (1986, p. 33). More precisely, Dretske claims that representations can also be ascribed to systems that are able to exploit in a single sense modality different external features of a stimulus. We shall restrict ourselves to multimodality.
17 However, as Dretske points out, multimodality as such still does not rule out the possibility that the content of a representation may be disjunctive, as could be the case when two different stimuli—say, an elm and a beech—have some sensorial properties in common. Dretske accounts for this problem by invoking the notion of indication function (see 1986, pp. 35-36).
18 The notion of off-line processing is interestingly worked out by Bickerton (1995) in the domain of language development.
19 To say that an action is "normally" triggered by a stimulus means that, given some boundary conditions (ceteris paribus clauses), that stimulus is followed by that action in a statistically relevant number of cases.
20 The neurophysiologist Vittorio Gallese (2003c) seems to deny this. He regards multimodal categorization as entailing a sufficient degree of generality, which, as we shall see later, is linked to the off-line modality. Thus, for him off-line processing—as we defined it—is not a necessary condition for concept possession, or, at least, for the possession of an intentional concept such as the concept of goal-oriented action.
21 The relevance of generalization abilities to concept possession is also highlighted by Sterelny (2003, chapter 4).
22 See, e.g., Bermúdez (2003, p. 9; 1998, p. 88) and Haugeland (1998, p. 172).
23 See, e.g., Jacob (1996).
24 We are grateful to Barbara Giolito, Françoise Longy, Diego Marconi, Daniele Sgaravatti, Alberto Voltolini and two anonymous referees of CogSci 05 for their helpful criticisms.
CHAPTER 9

ERRORS IN DEDUCTIVE REASONING

Pierdaniele Giaretta and Paolo Cherubini
Without aiming at a general and satisfying characterisation of reasoning, we will start with a short discussion of a feature which is usually taken as a necessary condition for qualifying something as reasoning. There is no reasoning in feeling pain, in seeing a red light, or in sensing roughness. The usual explanation is that in such cases we only acquire sensorial information, while reasoning implies the transformation of information already acquired. However, not every process of transformation of already acquired information is reasoning, as shown in the case of visual perception, where three-dimensional objects are constructed from two-dimensional information by processes which sometimes qualify as inferential. So there is a need for further specifying the transformational processes which constitute reasoning. Moreover, the fact, which is often taken for granted, that reasoning transforms information already acquired should be better specified. Reasoning could start from a freely chosen hypothesis which does not express any previously acquired information, and it could be used to get semantic information concerning linguistic expressions.

1. A COMPUTATIONAL APPROACH

Broadly speaking, reasoning is the transformation of mental representations according to certain rules. What kind of mental representations and what kind of rules? In general we can say that a mental representation occurs in a mental state and this occurrence constitutes a mental event. A piece of reasoning, produced by a certain individual at a given time, is a sequence of mental events of the appropriate kinds. It is a mental process, whose type is determined by the types of mental events which occur as its parts. In its turn, the type of a mental event is determined by the type of the mental state and by the type of the mental representation occurring in it. We also speak of a piece of reasoning as a type of such individual mental processes. As such, a piece of reasoning can be identified by the sequence of corresponding types of mental representations which are involved in the individual mental process. And, usually, a piece of reasoning may be referred to without assuming that it has, or has had, actual instances; one does not need to think of it as the type of a mental process which has actually occurred.

Not every sequence of mental representations is a piece of reasoning. Having successive images of two people or thinking of their names one after the other does not constitute a piece of reasoning. Reasoning occurs only when mental
representations are of a certain kind. In one view they have a linguistic or propositional nature; in another they are models, or diagrams, representing states of affairs. In the first view the rules describing the transformation of contents are rules of inference. In the latter they are rules for the integration and modification of models. Since these should be understood as codified diagrams, or as having some typical features of codified diagrams, even the rules for them cannot but have a syntactic nature. In both views the rules need not be universally valid in the sense of deductive logic. They could be valid only in certain interpretations or they could sometimes fail. From a psychological point of view they might be activated only when dealing with certain kinds of objects or actions.

In so far as they are governed by finite and syntactic rules, transformations of mental representations are computational and, as such, can be taken as descriptions of causal processes. The reason for this is very general. If a piece of reasoning is a computational process, it can be physically embodied and, like any physical realization, has a causal nature. This does not mean that the attribution of a certain chain of syntactic transformations is enough to causally explain the particular piece of reasoning produced by a particular individual. To get a causal explanation we should know under which conditions, given certain mental representations, a certain kind of transformational process is activated and whether, and how, its implementation is influenced by incoming data.

The psychology of reasoning mostly focuses on the kinds of representations and representational transformations suggested by the experimental results concerning performance in certain specific reasoning tasks. One of the main problems it deals with is to reconcile the fact that computational processes are responsive only to formal properties with the experimental evidence showing that actual reasoning is influenced by non-formal aspects and psychological biases. Most general theories are able to make predictions based on the assumption that in some individuals the same transformational process does not keep going on, or deviates in specific ways. To explain these shortcomings and deviations, the theories based on inferential rules appeal to the high number of rules or of their applications; those based on the construction of mental models appeal to the possible incomplete formation of such models, their number and the difficulty of their integration. However, it has been pointed out that sometimes difficulties and failures may depend not on the features of the hypothesised mechanisms, but rather on the understanding of the task, mainly the formulation of the problem or of the task requirements.1

As we said, the representations which are transformed in reasoning processes are supposed to be like sentences or diagrams. They do not need to be linguistically expressed through the sentences of a natural language; but, since they must be possible contents of beliefs, it must be possible to qualify them as true or false. For to believe "that p" presupposes that p is meaningful and implies that the truth of p is implicitly believed. However, even if a piece of reasoning is conceived as a process ending with the acquisition or the rejection of a belief, most of the cognitive explanations that have been attempted reduce it to computational transformations of mental representations.
Accordingly, it does not seem that truth or falsity have a role
from the point of view of the way in which information is computationally transformed. If the transformations have certain forms, it follows that truth is preserved, i.e. that the conclusion is true if the premises are true. Of course, that raises the philosophical question of why the transformations having truth-preserving forms are taken as correct and, in general, as preferable. From a philosophical point of view, attempts have been made to see preservation of truth as involved in, or suggested by, some special features of such transformations, which do not reduce to the functional role of the relevant logical operators.2

Two very different kinds of objections can be raised against the general project of a computational theory of reasoning. One is Fodor's thesis that (1) in order to explain reasoning conceived as a process of fixation and revision of beliefs it must be possible to determine where to look in order to solve a problem, and what to revise when a conflict among beliefs has come to awareness, but (2) a computational treatment of this ability would be blocked, respectively, by isotropy, according to which anything can in principle be relevant to the solution of a problem, and by holism, according to which nothing can be a priori excluded from revision.3 Arguably, Fodor's objection is internal to the perspective of a representational computational theory of mind. The other objection is an external one and was raised by Searle. He holds that mental representations are intentional and that, in general, mental phenomena cannot be explained independently of the intentionality of mental states, taken as a non-computational feature of the mind.4

We do not discuss these objections in this context. Our main aim is to focus on some of the problems dealt with within a cognitive approach to reasoning. They are interesting independently of the success of the explanations which have been given. Even the weaknesses or the failures of the attempted explanations are interesting. They raise both specific and general questions, some of which do not reduce to the general objections by Fodor and Searle and might help deepen the analyses of those very objections.
2. HOW TO IDENTIFY DEDUCTIVE REASONING

Let us start with a preliminary question which is usually answered in a rather simplistic way. Are there different kinds of reasoning? The usual answer is: "Yes, of course; reasoning can be deductive, inductive or abductive!" As a justification, it is usually pointed out that there are three corresponding different notions of validity. However, even granting that inductive and, especially, abductive validity are sufficiently specified, the distinction between the three notions of validity does not provide a partition of all the sequences of mental events which it is natural to qualify as pieces of reasoning. The difficulty can be seen by considering how deductive reasoning can be identified. The following are some possible ways of identifying a deductive piece of reasoning:
1. It is simply possible to evaluate it from the point of view of deductive validity, according to which a piece of reasoning is valid if, and only if, the conclusions, even in the intermediate steps, follow from the premises in the sense that it is impossible for the conclusions to be false and all their premises to be true.

2. It is the kind of reasoning for which the appropriate notion of validity is the deductive one.

3. It is deductive in the sense of being deductively valid.

4. It is deductive if, and only if, its producer is willing to accept that the conclusion should be validly deduced.
There are various reasons for rejecting the first three characterisations of deductive reasoning. With reference to 1, all pieces of reasoning can be judged from the point of view of deductive validity. Proposal 2 is too indeterminate. Who should evaluate the appropriateness of deductive validity as the notion of validity to be applied? The subject producing the piece of reasoning, a certain community to which she belongs, or the experimenter? From 3 it follows that no piece of reasoning is deductive unless it is deductively valid, independently of the notion of validity driving its producer. So this kind of identification of deductive reasoning turns out to be just the opposite of the fourth, which assumes that the producer of a piece of reasoning possesses a pre-theoretical notion of deductive validity and requires only that the producer be willing to accept that the conclusion should be validly deduced.

There are, indeed, some good reasons to prefer the fourth kind of identification. In many situations people are pretty sure that something follows necessarily from something else: that is, they behave as if they were trying to apply a correct concept of deductive validity. Nonetheless, in some situations some individuals do not manage to apply their concept of deductive validity correctly: they think that a conclusion is valid when it is not, or else they think that a conclusion is not valid when it is. Most often people who fall into these kinds of errors are willing, at a later moment, to acknowledge their own mistake: in so doing, they certify that they were indeed trying to pursue deductively valid conclusions, even though they did not manage to do so. In these situations the psychology of reasoning posits that people are indeed reasoning deductively, whether or not their conclusion is valid.

Some of these errors are quite frequent, and they provide insight into some possible constraints on human deductive reasoning. It will emerge that the initial, commonly agreed characterisation of reasoning as transformation of mental representations according to certain rules needs to be somewhat revised or integrated in order to account for the available data about human reasoning. That is, the study of reasoning errors shows that not every violation of a rule can be seen as the correct application of another rule. This view is supported by the following tentative classification of deductive errors, based on their cognitive explanations.
3. A TAXONOMY OF REASONING ERRORS

We classify errors according to their cause—so the classification is etiological—and we do not claim that all errors are included—so the classification is not complete. In particular, we do not consider the case of a correct outcome obtained by applying a wrong rule or procedure.5 In the following we will restrict ourselves to errors whose outcome is wrong, or does not fit the purpose of the reasoning. The classification is based on the partition of errors into performance errors and competence errors, without assuming any precise notions of performance and competence. We simply mean that competence errors concern the level of availability of the rules that directly concern the reasoning steps, and that performance errors concern the actual application of such rules. For each kind of error two classes are distinguished; in both cases they are presented by exploiting an analogy with the game of chess.

Performance errors can consist in:

A) applying badly a well-known and correct rule, as when a chess player knows which is the legal move for a knight, but, out of tiredness or distraction, she moves a knight in an illegal way;

B) applying correctly the known rules, but "playing badly", that is, failing to see that a well-known rule can be applied and is relevant in a given situation, as when a player fails to see that her adversary has an available legal move to checkmate her, and does not parry it with a counter-move.

A peculiarity of performance errors is that they are in contrast to the individual's own competence; hence, they are spontaneously recognized as errors by the individual, if she realizes the alternative "correct move". Another peculiarity of B-type performance errors is that they are objective-dependent: in the eyes of her adversary, a player who does not parry a checkmate is making an error, because the adversary assumes that the player's objective is to win (or not to lose). However, a player pursuing a different objective, e.g. to stop playing as soon as possible, is not making an error by not parrying a checkmate.

Competence errors can consist in:

A) applying correctly a wrong rule, like a novice player who thinks that pawns can move backward, and accordingly moves a pawn backward;

B) not knowing a correct rule, like a novice player who does not know the move to castle, and accordingly never castles the king.

Competence errors cannot be recognized as errors by the individual, as long as she does not change her current level of competence, either by learning or by maturation. An external observer sometimes lacks the clues to distinguish between B-type errors of the performance sort and B-type errors of the competence sort: that is, when someone fails to draw some important conclusion, is it because she failed to apply a known rule, or because she did not know the rule that was to be applied? In experimental settings, the question can be answered only by checking whether a rule that is not applied in a given context is usually applied, or not applied, in different contexts.

The four kinds of errors can be schematically described in terms of the following parameters and corresponding values, which are to be specified with reference to a specific situation:
Parameter                          Values
applied rule                       right/wrong (R/W)
set of the aim-satisfying rules    known/unknown (K/U)
being an aim-satisfying rule       yes/no (Y/N)
application of the rule            good/bad (G/B)
With reference to reasoning, rules are right if they always preserve truth and wrong if they are not right. So a rule is not considered right or wrong by mere stipulation, as the analogy with chess might misleadingly suggest. A rule is aim-satisfying in the specific, contextually determined situation if it could be applied, in that situation, in order to satisfy the appropriate aim. For each situation it is assumed that the set of such alternative rules is fully specified, and that the set is known if all the rules belonging to it are cognitively available, and unknown otherwise. Then, bearing in mind that we limit our analyses to errors having an outcome which is wrong or which does not fit what the reasoning is for, and using self-explanatory abbreviations, we can define a simple error as a passage of one of the following four kinds (a dash marks a parameter the classification leaves unspecified):
                      rul    set    a-s    appl
A-performance error    R      K      Y      B
B-performance error    R      K      N      G
A-competence error     W      -      N      G
B-competence error     -      U      N      -
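Read as a small decision table, the taxonomy can be rendered in code. The following sketch is our own reconstruction (the field and kind names are invented), treating unspecified cells as wildcards:

from typing import NamedTuple, Optional

class ErrorProfile(NamedTuple):
    rule_right: Optional[bool]      # applied rule: right (R) / wrong (W)
    set_known: Optional[bool]       # set of aim-satisfying rules: known (K) / unknown (U)
    aim_satisfying: Optional[bool]  # applied rule is aim-satisfying: yes (Y) / no (N)
    applied_well: Optional[bool]    # application of the rule: good (G) / bad (B)

# One profile per simple error; None marks a cell the table leaves unspecified.
TAXONOMY = {
    "A-performance error": ErrorProfile(True,  True,  True,  False),
    "B-performance error": ErrorProfile(True,  True,  False, True),
    "A-competence error":  ErrorProfile(False, None,  False, True),
    "B-competence error":  ErrorProfile(None,  False, False, None),
}

def classify(observed):
    """Return the kinds of simple error compatible with an observed profile."""
    fits = lambda spec: all(s is None or s == o for s, o in zip(spec, observed))
    return [kind for kind, spec in TAXONOMY.items() if fits(spec)]

# A right, known, aim-satisfying rule applied badly: an A-performance error.
print(classify(ErrorProfile(True, True, True, False)))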
Notice that in the A-performance errors the applied rule belongs to the set of rules whose application serves to achieve the aim which is pursued. That is not so in the B-performance errors and in the A-competence errors: in both cases the applied rule is not a rule which should be applied in the given situation. The difference—besides that concerning the correctness of the applied rules—is that in the B-performance errors a rule which should be applied is known, while it can happen that such a rule is not known in the A-competence errors. What is most specific to an A-competence error is that a wrong rule appears to belong to the individual's competence. In the B-competence errors, just the lack of a rule whose application is useful to get the solution of the problem is suggested by what is done or not done.

It could be asked whether every error having an outcome which is wrong or does not fit the purpose of the reasoning can be described as a combination of simple errors. The question acquires a more precise sense once all the concepts involved in this kind of classification, and what and which the rules are, have been exactly specified. Here we will briefly mention the way in which rules are conceived in the psychological study of reasoning.
The psychology of deductive reasoning usually proceeds by assuming that the rules of deductive reasoning are a fixed and well-defined set of transformational rules belonging to some logical formal system. The most developed theory of this kind has been provided by Lance J. Rips.6 Other theories are not so developed or, more vaguely, think of deductive reasoning as requiring logical resources which do not include only logical rules; however, these are not exactly specified and connected with people's actual performances.7

Against the so-called logicist approach it has been claimed by Philip N. Johnson-Laird that, in general, empirical data do not seem to be compatible with the availability of the logical rules of any logical formal system.8 He proposed that a set of rules for transforming diagram-based representations describes human deductive competence more appropriately than a set of rules for transforming proposition-based representations. His rules concern diagram-based representations and are such that their efficient application produces logically correct results. To cope with the conflicts with empirical data, Johnson-Laird integrates his rules with other principles suggested by the ways in which people seem to reason.

Other psychologists have made more explicit and extensive recourse to the extrapolation of the set of rules from empirical data. On the one hand they extrapolated rules from the observation of actual behaviour and, on the other hand, they assumed some set of normative rules, often of a non-deductive system, in order to get the highest possible match between the two sets of rules. In recent years this sort of theoretical transformation has occurred, for example, when some theories have proposed that human "deductive" reasoning is, at least partially, not deductive at all, in the sense that its normative standard is not deductive logic, but probability calculus, information theory, and expected-utility theory.9
4. EXAMPLES OF THE FOUR KINDS OF ERRORS

The concept of rule is related to the concept of error in many different ways. In the following paragraphs we show some experimental examples of typical deductive errors of the four sorts outlined above; it will become clear that a seemingly identical error may sometimes originate from different causes, and accordingly can be sorted into different classes. Furthermore, it will become clear that classification is theory-sensitive: different theories advocate different reasons for the same errors, and some errors that are performance errors for one theory can be considered competence errors by another theory. We start with the competence errors, which are, in a sense, the most difficult to diagnose.

A-type competence errors (applying wrong rules): Wrong transformations in solving categorical syllogisms

Arguably, the first deductive task ever to be studied by psychology was the solution of categorical syllogisms. For example:
1. All A are B
   All B are C
   Therefore, all A are C

Some categorical syllogisms, like syllogism 1, are trivial. Some others are fairly difficult. For example, assuming that at least one B exists:

2. All B are A
   No B are C
   Therefore, some A are not C

The conclusion of syllogism 2 is spontaneously produced or chosen, on average, by less than 10% of non-expert persons; the rate of acceptance is slightly higher when the conclusion is offered for evaluation. The conclusion accepted (or produced) most often is "No A are C". Some instances of acceptance or production of the "No A are C" conclusion seem to be A-type competence errors: people appear to apply some spontaneous heuristic rules that are not logically valid. One of these heuristic rules asserts that the conclusion has to satisfy two principles which in the literature are known as the atmosphere effect:10
a) If both premises are universally quantified, then the conclusion is universally quantified; otherwise, it is particularly quantified;

b) If both premises are affirmative, then the conclusion is affirmative; otherwise, it is negative.

The two principles correctly solve some, but not all, syllogisms. When applied to syllogism 1 they help generate the correct conclusion. When applied to syllogism 2 they generate the incorrect conclusion "No A are C". Evidence that other heuristic principles are applied by people was gathered by Wetherick and Gilhooly, and by Rips.11 Gilhooly and colleagues found that 77% of individuals in their sample spontaneously used wrong rules of this kind when solving syllogisms. Although their estimates may be inflated,12 their data strongly suggest that a good proportion of people sometimes make A-type competence errors in solving syllogisms: that is, they perform "illegal" transformations by correctly applying wrong rules. This kind of explanation raises the following general problem: does the wrong rule whose application produces an A-type competence error belong to the individual's competence? It appears to be so, but that conflicts with a normative notion of competence.
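Because the atmosphere heuristic is stated as a rule, it can be written down exactly. The sketch below is our own encoding (premises are represented as invented (quantity, quality) pairs); note how it yields the valid conclusion for syllogism 1 but the typical wrong answer for syllogism 2:

def atmosphere(premise1, premise2):
    """Principles (a) and (b): the conclusion is universal only if both
    premises are universal, and affirmative only if both are affirmative."""
    (q1, a1), (q2, a2) = premise1, premise2
    quantity = "universal" if q1 == q2 == "universal" else "particular"
    quality = "affirmative" if a1 == a2 == "affirmative" else "negative"
    return quantity, quality

# Syllogism 1: "All A are B", "All B are C" -> universal affirmative,
# which matches the valid conclusion "All A are C".
print(atmosphere(("universal", "affirmative"), ("universal", "affirmative")))

# Syllogism 2: "All B are A", "No B are C" -> universal negative,
# i.e. "No A are C": exactly the most frequent wrong answer.
print(atmosphere(("universal", "affirmative"), ("universal", "negative")))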
B-type competence errors (not applying a correct rule because it is not available): Modus tollens inferences and the selection task, abstract version

B-type competence errors are very common: when we cannot solve a problem that requires specialized knowledge that we lack, we are making a B-type competence error. For example, if a person who has never studied mathematics is asked to solve a differential equation, she will lack the competence to do it. However, mathematics can be learned, and the competence gap could then, sometimes, disappear. Psychologists seem to think that spontaneous reasoning, too, shows competence gaps. According to theories positing that deductive reasoning is grounded in a set of inferential rules hard-wired in the brain,13 one such gap is the lack of an inferential schema to carry out conditional inferences of the "modus tollens" sort:
3. If A, then B
   not B
   Therefore, not A
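As a side remark, the validity of the schema itself can be checked by brute force over the truth table. The sketch below is our own, using nothing more than the standard material conditional:

from itertools import product

def implies(a, b):
    # Material conditional: "if A then B" is false only when A and not B.
    return (not a) or b

# In every valuation where both premises hold, the conclusion "not A" holds.
assert all(not a
           for a, b in product([False, True], repeat=2)
           if implies(a, b) and not b)
print("modus tollens is deductively valid")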
It could be argued that modus tollens is not needed to derive its conclusion from its premises. One can draw not A by performing an "ad absurdum" proof: "by assuming A, B would follow; but, since B is negated, A cannot be true". One response has been that the ad absurdum proof involves rules that are, in general, available, but are not always seen as relevant: the lack of a rule to infer the modus tollens directly causes B-type performance errors because of the need to draw the inference by applying different rules. The consequence is the reduced rate of modus tollens with respect to modus ponens.

Although there are different explanations for the inability to draw modus tollens inferences, suggesting that it is either a performance problem14 or the result of considering conditionals as statements of a probabilistic association between antecedent and consequent,15 the competence-gap explanation has been classically applied to the abstract version of Wason's famous selection task. In its original version,16 participants were shown four cards like the ones in Figure 9.1. Peter Wason stated the conditional "If a card has the letter D on one side, then it has the number 3 on the other side", and pointed to each card one by one, asking the participant whether knowing the content of the other side of that card would be helpful in order to establish whether the statement was true or false.
Figure 9.1. Wason’s selection task.
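The informational structure of the task can be made explicit in a few lines. This sketch is ours, and it assumes the four visible faces are D, K, 3 and 7, as in the standard abstract version; a card is worth turning only if its hidden side could falsify the conditional:

def worth_turning(visible):
    """The rule "if D on one side, then 3 on the other" is falsified only by
    a card with D and a non-3. Hence only a visible D and a visible non-3
    number (here 7, via modus tollens) are informative."""
    if visible == "D":
        return True            # the hidden number might not be 3
    if visible.isdigit():
        return visible != "3"  # a non-3 number might hide a D
    return False               # other letters cannot falsify the rule

for face in ["D", "K", "3", "7"]:
    print(face, worth_turning(face))   # D True, K False, 3 False, 7 True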
Strikingly, when asked to evaluate the "7" card, most participants said that it was useless. Participants knew perfectly well that a card with a D on one side and the number 7 on the opposite side would make the conditional false: in a different task, when they were asked to evaluate the conditional with respect to different cards, they always realized that cards with a D and without a 3 falsified the statement. Accordingly, not selecting the card with the number 7 could be the result of the lack of a modus tollens schema: people do not easily see that, according to the conditional statement, the card without the 3 should not have a D on the other side. In this perspective, the error would be a B-type competence error caused by the lack of the modus tollens inferential schema.

There are many different interpretations of this error. If it is a B-type competence error, it essentially consists in not performing an inference as a consequence of the lack of the corresponding rule. From a more general point of view, not performing the inference simply depends on a gap in the inferential competence. This gap might be interpreted as the lack of the rule directly delivering the inference, or as a difficulty in retrieving and coordinating the basic rules that indirectly deliver the inference or, rather implausibly, as the lack of some of the relevant basic rules. Neither can it be excluded that the gap concerns the very understanding of conditional statements. In that case the level of semantic competence would also be involved.

A-type performance errors (incorrectly applying correct rules): Errors in applying correct strategies to solve categorical syllogisms

A-type performance errors consist in something that we usually do correctly but, out of pressure, lack of attention, lack of memory, or whatever, we sometimes do in the wrong way. For example, we can usually do simple mental additions; but, sometimes, we can go wrong and calculate "77+149=216". Ford described and found evidence for this kind of error in syllogistic reasoning.17 Two logically sound strategies appear to be spontaneously used to correctly produce syllogistic conclusions: a spatial strategy, similar to a naive version of the Euler circles strategy, and a verbal strategy, involving principles that allow the substitution of an instance of an extreme term for the middle term of a premise. The latter strategy, when correctly applied, is equivalent to some inferential rules of the predicate calculus, but it is rather complex, and sometimes it is misapplied. For example, misapplication of the substitution strategy in syllogism 2 may generate the conclusion "Therefore, no A are C", which is invalid. As we saw before, the same invalid conclusion may be generated by the application of wrong rules, e.g. the atmosphere heuristic: this is a good example of how the same error is sometimes taken as a competence error and sometimes as a performance error.

Other instances of A-type performance errors are shown by individuals using the spatial strategy.18 People adopting a spatial strategy strive to apply a correct set of rules: (1) build all the possible representations of the premises; (2) analyse all the possible relations between the extreme terms: the conclusions are the relations that hold in all the representations of the premises.
Sometimes they apply both rules correctly: for example, they build the following representation of syllogism 2, keeping well in mind that area 1 may be empty, or area 2 may be empty, or neither of them may be empty:
Figure 9.2.

In so doing, they successfully apply the first rule. They can then go on to check all the possible relations between A and C, and find out that only "some A are not C" holds in all the representations. However, they may fail in two respects. First, they can misapply rule 1 and miss some representations of the premises, for example by representing only:
Figure 9.3.

In doing so they would generate the wrong conclusion "No A are C". Or else, they can correctly apply rule 1, but they can then misapply rule 2 by failing to check the relation "some A are not C", and hence conclude "no conclusion follows". In the first case we have a bad application of a rule which can be considered as an instruction for forming a finite number of combinations. Its bad application might be considered a computational problem arising from a hardware defect, and so is perfectly compatible with the computational notion of reasoning, since, in general, any computational process can fail or deviate because of the malfunctioning of the computing agent. The case of bad application of the second rule is more complicated, since it is not completely clear what is involved in the match between a diagram and a sentence.
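The two rules of the spatial strategy can be mimicked exhaustively by a machine, which shows what a flawless application would deliver for syllogism 2. The sketch below is our own toy implementation (the encoding of individuals as membership triples is invented): it enumerates all small models of the premises and keeps only the conclusions true in every model:

from itertools import product

# Each individual is a triple (in_A, in_B, in_C); a "world" is a small set
# of such individuals, and a model is a world satisfying the premises.
def models(size, premises):
    for world in product(product([False, True], repeat=3), repeat=size):
        if all(p(world) for p in premises):
            yield world

all_B_are_A = lambda w: all(a for (a, b, c) in w if b)
no_B_is_C   = lambda w: not any(c for (a, b, c) in w if b)
some_B      = lambda w: any(b for (a, b, c) in w)       # existential assumption

no_A_is_C    = lambda w: not any(a and c for (a, b, c) in w)
some_A_not_C = lambda w: any(a and not c for (a, b, c) in w)

worlds = list(models(3, [all_B_are_A, no_B_is_C, some_B]))
# Rule 2: a conclusion is warranted only if it holds in all representations.
print(all(no_A_is_C(w) for w in worlds))      # False: some model refutes it
print(all(some_A_not_C(w) for w in worlds))   # True: the valid conclusion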
B-type performance errors (not applying available rules): Errors in the "thog" problem

In the chess metaphor, B-type performance errors amounted to "playing badly": a player correctly applies the rules, but does not see the moves that could make her win. In reasoning, they amount to "reasoning badly": circumstances in which we know all the legal transformations that are needed to solve a problem, but we nevertheless make other moves which are correct but do not allow us to solve the problem. Participants in an experiment devised by Wason and Brooks appear to fail in adopting or fully pursuing a strategy which eventually leads to the solution of the problem.19 Four geometrical shapes like the ones in Figure 9.4 are shown to the participants.
Figure 9.4. The thog problem.

The participants were told that the experimenter had secretly selected a shape (either square or circle) and a colour (either black or white), and that a figure was called a "thog" if and only if it had either the selected shape or the selected colour, but not both. They were also told that the black square was a thog. The participants had to decide whether or not there were one or more other thogs, and indicate them. Most people incorrectly conclude that the other thogs are "the black circle and the white square". Only a small minority spontaneously arrives at the correct answer, that is, "the white circle".

According to Vittorio Girotto and Paolo Legrenzi, this is because people incorrectly simplify the problem by assuming that the thogs must have one and only one of the features of the black square (that is, of the only thog that is known to them).20 This is like thinking that the experimenter selected the shape "square" and the colour "black". The attribution of this selection can be seen as the result of an invalid inference from the data of the problem and can very likely be imputed to bad understanding. It can also be seen as a hypothesis to be checked. While they entertain it, people correctly apply the definition of "thog" in selecting as thogs the white square and the black circle, because these are the figures which have one and only one of the two features. They do not go on and do not realise, probably because they do not take the selection of square and black as a hypothesis, that on the basis of the definition of "thog" it would follow that the black square is not a thog, contradicting one of the data of the problem. As a consequence they do not correct their initial wrong assumption, and so do not generate the correct response.
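The whole combinatorial structure of the problem fits in a few lines. The sketch below is our own encoding: it enumerates the selections consistent with the datum that the black square is a thog, and then the figures that are thogs under every surviving selection:

from itertools import product

SHAPES, COLOURS = ("square", "circle"), ("black", "white")
FIGURES = list(product(SHAPES, COLOURS))

def is_thog(figure, selection):
    # A thog matches the selected shape or the selected colour, but not both.
    return (figure[0] == selection[0]) != (figure[1] == selection[1])

# Selections consistent with "the black square is a thog".
candidates = [s for s in product(SHAPES, COLOURS)
              if is_thog(("square", "black"), s)]
print(candidates)   # [('square', 'white'), ('circle', 'black')]

# The other thogs: figures that count as thogs under every candidate selection.
others = [f for f in FIGURES
          if f != ("square", "black") and all(is_thog(f, s) for s in candidates)]
print(others)       # [('circle', 'white')]: the white circle

Note that the popular answer, the white square and the black circle, is exactly what this code returns if the candidate set is collapsed to the single selection (square, black), which is Girotto and Legrenzi's diagnosis of the error.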
The analysis of the thog problem is particularly difficult. According to Girotto and Legrenzi's perspective, the rules generating the correct answer are indeed available, and therefore people make a B-type performance error. However, it could be argued that the rules generating the correct answer are not always available. In that case people's errors, or at least some of them, might be taken as B-type competence errors.

5. CONCLUSIONS

It is generally agreed that reasoning is the transformation of mental representations and that the transformations it consists of can be described by rules. Therefore, reasoning has been explained as the application of rules, even though these are conceived in very different ways by different theories. The many typical and sometimes systematic errors made by people in reasoning tasks are sometimes accounted for as the application of incorrect rules, and at other times as the misapplication or non-application of correct rules; indeed, we saw that in some instances a pertinent rule is not applied because it is not available to the individual reasoner. We assumed that reasoning competence implies the availability of the pertinent rules. However, the concept of the availability of a rule, the very notion of competence in general, and that of competence concerning reasoning are not yet associated with fully satisfying theoretical analyses and efficient criteria of identification and attribution. The following are open questions:

1) Is there a minimal number of resources, a core of necessary rules that are to be included in competence? If so, in what sense can we speak of "acquired competence", i.e. of a competence including other additional or derived resources, and what are the implications of such a concept?

2) A possible explanation of the outcomes of some experiments, like the thog experiment, might be that the relevant rules are all available, but people are not able to coordinate their application on a quite complicated set of data. At a first level of analysis, this kind of error may be sorted as a performance error, because some known relevant rules are not applied or are misapplied. However, at a deeper level of analysis, this kind of explanation would hint at a notion of competence as an ability to manage available resources: that is, by considering competence as inclusive of a set of meta-rules describing how to coordinate basic reasoning rules, these errors would be competence errors. Apart from the ubiquitous references to the computational limits of working memory, and to the role of the "central executive system" in dealing with them, there are no detailed attempts in the cognitive literature to describe how the basic reasoning resources are coordinated, and how the quality of this coordination affects competence.

3) At least some experimental outcomes depend on the understanding of the task, showing a relation between semantic competence and reasoning performance;
this might suggest that semantic competence and inferential competence are not independent.21
4) There are many possible ways to conceptualise errors; yet most of them presuppose the notion of validity. Hence, the theoretical apparatus by which the outcomes of experiments are described and explained hinges on the notion of validity. Does this notion have a role in the way outcomes are produced, that is, in reasoning proper, and if so, what is it?22
NOTES

1 See, e.g., Sperber, Cara and Girotto (1995); Politzer and Noveck (1991).
2 Some philosophers appear to think that certain transformations are provided with a somewhat normative force. For example, C. Peacocke qualifies them as compelling and P. Boghossian speaks of "blind but blameless" reasoning. These features seem to be taken as primitive and as having some relation to the preservation of truth. See Peacocke (1992) and Boghossian (2003).
3 Fodor (1983, 2000).
4 Searle (1980, 1983).
5 See Shapiro (2001), who considers a case where a rule seems to be violated, but the apparent violation does not produce an inferential passage from true to false.
6 Rips (1986, 1994).
7 See Piaget (1956); Inhelder and Piaget (1958); MacNamara (1986).
8 Johnson-Laird (1983).
9 See, e.g., Oaksford and Chater (1998); Klauer (1999); Evans, Handley and Over (2003).
10 See Woodworth and Sells (1935); Beggs and Denny (1969).
11 Wetherick and Gilhooly (1990, 1995); Rips (1994).
12 See, e.g., Ford (1995), who found a higher rate of recourse to logically sound procedures.
13 Rips (1994); Braine and O'Brien (1998).
14 See Johnson-Laird and Byrne (1991).
15 See Oaksford and Chater (2003); Evans and Over (1996).
16 Wason (1968).
17 Ford (1995).
18 This strategy bears some resemblance to the model-based strategy suggested by Johnson-Laird and Bara (1984).
19 Wason and Brooks (1979).
20 Girotto and Legrenzi (1989, 1993).
21 In different ways such a thesis might be supported by views on meaning like those stated by Harman (1982), Block (1986) and Marconi (1997).
22 We are grateful to Massimiliano Carrara, María de la Concepción Martínez, Stephen McLeod, and Elisabetta Sacchi for having read and commented on a previous version of this chapter.
CHAPTER 10

LANGUAGE AND COMPREHENSION PROCESSES

Elisabetta Gola
When analyzed from psychological or philosophical perspectives, language may take on very different and, at times, seemingly contradictory forms. On careful examination, however, the aspects of language brought under scrutiny in these two disciplines may turn out to be almost complementary. In simple terms, psychological models of language focus above all on phonological and syntactic modules, on models of acquisition and memorization or "storage" of lexis, and on the biological foundations of language. Philosophical models, on the other hand, concentrate mainly on the notion of meaning and on the rhetorical-pragmatic aspects of verbal communication. This gap, which has deep-rooted historical origins,1 still persists today in theories of language and in the approaches and methods by which such theories are formulated.

It is not possible to deal with this issue in its entirety within the limits of this chapter and, indeed, an analysis exploiting all our current knowledge of the field would probably prove too complex. For this reason, attention is here focused on a specific problem, which will allow a more epistemological inquiry into the relationship in the language sciences between philosophy and psychology, theory and empirical data. In particular, reference will be made to a theme which arises at the crossroads between philosophy and psychology and which is central to cognitive semantics:2 the process of understanding.

Formulating a theoretical hypothesis about the process which leads to the understanding of an utterance in communication involves two aspects: firstly, the aspects of language linked to the recognition of the form of the utterance itself (phonology, morphology and syntax); secondly, questions about how the meaning of what is understood can be defined, which are linked to the semantics and pragmatics of the communication process. These two aspects cannot be separated, and the disciplines, in this case psychology and philosophy, that aim to analyze this problem are obliged to take both of them into consideration. Indeed, this is what has taken place in the field of cognitive science. Cognitive science is a multidisciplinary approach in which psychology and philosophy, along with information and communication technologies, AI, linguistics, anthropology and neuroscience, come into play in the redefinition of many questions which were previously the objects of either psychological or philosophical inquiry.

The object of this chapter is neither to claim that the combination of philosophical and psychological disciplines will necessarily be advantageous, nor that it is desirable to limit the respective disciplines. Nevertheless, by using the communication process as a particularly pertinent example, we shall attempt to
demonstrate the necessity of bridging the traditional gap between philosophy on the one hand and psychology on the other, and to argue that more attention should, therefore, be focused on the ways in which these two fields can be reconciled.

1. THREE WAYS TO BRIDGE THE GAP BETWEEN THEORY AND PRACTICE: LANGUAGE AND COGNITIVE SCIENCE

In cognitive science, the relationship between theory and practice is assumed to form an epistemological basis for the disciplines related to this field. Such a link is not, however, on only one level. In this section we shall refer to three—among many—ways in which theories in cognitive science are supported by observation, data collection and investigations carried out on the basis of empirical-theoretical frameworks.

The first approach is the observation of specific linguistic phenomena and is strongly associated with Chomsky: "Observations are generalizations that anyone with common sense can make by reflecting on what they have experienced; no-one needs knowledge of a theory—a language or anything else—to make them".3 Observations under experimental conditions are included in this approach. To make such observations, a sample of subjects is required to carry out a particular task with the aim of measuring the accuracy of the response, the reaction time in giving it, the priming which is necessary in order to facilitate it, and so on. However, observation is not a neutral operation with respect to the interpretation of what is observed. Data collected in this way are read and interpreted within a well-defined theoretical framework. Indeed, as with other approaches, initial observations, as well as those which serve as a follow-up to the formulated theory, are selected by the theory itself. As an example, it is sufficient to bring to mind the debate concerning the influence of external stimuli on language development. Chomsky and those who take this position consider environmental stimuli to be weak, while from a constructivist point of view such stimuli are considered sufficiently strong to explain the acquisition of language.4

A second type of interaction between empirical method and psychological theories concerns the application of simulations in the process of testing a hypothesis. It should be stressed that the simulation of a physical phenomenon, for example a storm, is not identical to a real storm. Nevertheless, with regard to cognitive processes, the metaphor of the computer has led to the idea that the computational modeling of a process may provide a partial explanation of how that process works. The fact that a simulation is able to provide a model which "works" also allows the possibility of "seeing it in action", confirming it and correcting it. Cognitive scientists place a great deal more trust in the advantages of the computational approach than philosophers, who "have not welcomed this new style of philosophy with much enthusiasm".5 It should, however, be noted that weak versions of AI, and in any case the possibility of using the computer as a tool for theory building, have allowed the analysis and understanding of phenomena which would otherwise have been impossible to describe and study. Gerald
Edelman, for instance, although opposed to functionalist and computational approaches and to the use of the computer metaphor in explaining mental processes, at the same time makes use of simulated robots in order to study the interaction between neuronal maps.6

The final way in which observation is thought to relate to cognitive theory, one which has become increasingly important within cognitive science, has to do with the observation of physiological reactions linked to specific cognitive processes. This approach has, above all, been a prerogative of neuropsychology, inasmuch as this type of study is made possible almost exclusively by the presence of pathologies linked to the phenomena under investigation. The cases in which something fails to proceed 'normally' allow the examination of the function of the physical mechanisms on which cognitive processes are based. Today, non-invasive techniques, such as PET and brain imaging, allow the observation of phenomena which occur at a physical level in the nervous system while specific tasks are carried out. Studies focusing on pathological conditions nonetheless remain one of the fields in which it is possible to study some of the links between mental processes and physiological mechanisms most accurately.

These three aspects are found in varying degrees within cognitive science and are equally representative of the interaction which takes place between philosophical theory and empirical-psychological investigation. Starting from these preliminary considerations, we shall now proceed to demonstrate how the application of these aspects may lead to the formulation of possible theories of understanding, albeit in ways which differ greatly according to the viewpoint taken. In particular, this chapter will demonstrate that an observational approach, combined with simulations, has brought about a focus on overly idealized models of communication. This focus on idealized models in turn produces hypotheses which contrast with, and at times contradict, data provided by scientific approaches in which philosophy and psychology interact in a manner which is epistemologically more productive.
PT
2. LANGUAGE AND COMPREHENSION PROCESSES
Semantics considered from a purely philosophical viewpoint does not take into consideration the problem of the understanding of meaning. On this view, analyzing meaning means considering the logical relations among utterances in a natural language, that is to say, their truth conditions. In the wake of the tradition initiated by Frege, and of the dream of a perfect language present in the thinking of the early Wittgenstein and of Russell, semantics is not concerned with what happens in the mind when something, such as an utterance, is understood. Indeed, it constitutes more a derivation of mathematics than a branch of psychology. From this viewpoint, then, semantics can be defined as the study of "objective" entities and abstract structures, which may be investigated with the same mathematical tools used in the theory of formal languages, and not through the entities and mental processes which come into play in the understanding of a language.
In cognitive science, however, semantics may, at least in part, also become an object of psychological study. Here it is claimed that the theory of meaning should be a theory of understanding, which means that it should provide an answer to the question: "At the level of construction and mental process, what happens when a phrase is understood?" In reality, however, the philosophical assumptions underlying cognitive science are based on conceptions rooted in a post-Fregean perspective. This is the very position from which cognitive science aimed to distance itself, and its underlying assumptions condition ideas on the mind and language: "(i) […] minds were disembodied symbol processors […]; (ii) […] language, as modeled on symbolic logic, was first a matter of literal reference and only secondary a matter of figurative construal; (iii) metaphor and other figurative language were isolated acts of the creative imagination".7 This has led to a paradoxical situation in which, despite the undoubted merit of establishing the need to deal with comprehension processes, syntax predominates in the study of semantics. It is not by chance that the linguistic counterpart to this hypothesis, which also partly constitutes the foundation of theories of cognitive semantics,8 is a formal theory of grammars, namely the notion of Universal Grammar (UG) developed by Noam Chomsky in the 1960s. According to the Chomskyan approach, language is considered to be a cognitive device whose working principles are independent of environmental stimuli, the limited nature of which cannot account for the richness of the expressive power of languages.9 According to this largely dominant paradigm, the production of the logical form of utterances, along with the functional aspects of meaning, provides the conditions which give rise to the understanding of utterances. Thus, a living or artificial being with the capacity to produce symbolic representations computationally may in principle understand phrases and texts formulated in a language like English or Italian. This perfect dovetailing of mutually supporting theories has brought theoretical tendencies (both within linguistics and philosophy) closer to empirical studies in cognitive psychology and AI. Since machines are essentially formal and syntactic devices, as long as language is described on the basis of its symbolic-formal properties there are, in principle, no objections to the possibility of simulating linguistic behaviors on a computer. In cognitive science, therefore, research into comprehension processes has been drawn towards a vision wholly detached from the actual workings of the mind. Reason, intelligence and mind have been considered "substrate neutral, that is, independent of any specific embodiment", and the brain has consequently been seen "as the computational device which ran the algorithms collectively known as mind".10
From this perspective, let us now briefly examine the main elements and functions which come into play in the process of understanding. According to the most minimalist versions of Chomskyan generative linguistics, the different levels of linguistic competence involved in understanding work on the basis of compositional or generative principles: more complex structures depend on the application of rules to more elementary structures, which are, in principle, atomic. Moreover, the same principles apply across the board: to the representation of the sounds of a language, to the rules for
word formation, to pure syntax, and to the knowledge relative to meanings and concepts. These factors all contribute to systems structured around certain elementary and recurrent notions together with generative principles. Chomsky has the undoubted merit of highlighting the link between the language faculty and the properties of the mind/brain (competence) with the aim of describing the processes of understanding meaning. However, the use of language (performance), including such factors as pragmatic knowledge, textual knowledge and world knowledge, is not considered relevant from a Chomskyan point of view.11 In a certain sense, a part of this knowledge is included in the language faculty. To what extent, then, is syntax part of deciding what a phrase means? Let us consider, for example, the following utterance:
1) John is a good cook.
In order to understand this utterance, a minimal objective is the ability to recognize the "correct" sense of the word "cook". To this end, the first step consists of ascertaining that the word "cook" cannot here be a verb, since the resulting sequence of parts of speech would not be one of the sequences allowed by English grammar: (noun, auxiliary, determiner, adjective, *verb). The second step is to construct a semantic representation of the phrase. The heart of such a process is representing those aspects of meaning which are independent of the context, that is, representing the logical form of the phrase. In other words, the second step is to identify the possible word senses and the possible semantic relations among the words and phrases. The different senses of the words are the simplest units of meaning; they are, therefore, the atoms, or constants, of semantic representation. These constants are of different types according to the "things" to which they refer: they are terms if they describe objects, and predicates if they describe relationships and properties. A proposition, in terms of logical form, comprises a predicate followed by one or more appropriate terms. The logical form thus specifies the relationships among all the elements in play. For example, the logical form of Phrase 1 must specify that John has the property of being a good cook and not, for example, that every good cook's name is John. This intuitively obvious and banal process in fact requires the application of knowledge about the meaning of the words, knowledge which, computationally, resides only in the relationships among their respective symbolic representations. The way in which this is achieved is, in turn, a form of syntax. The idea, then, is that the logical form is an intermediate representational language: it is more general than the surface language itself, in which multiple interpretative options are condensed. The comprehension process selects these interpretative options and makes them explicit, drawing on mechanisms within the message and within the language competence devices of the speaker. Once again, somewhat paradoxically, a reaction to the syntacticism of the semantics proposed by Chomskyan cognitivism comes from AI, and in particular from a field of research called Natural Language Understanding (NLU).
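The two steps just described can be made concrete in a few lines of code. What follows is a minimal sketch, not drawn from the literature discussed here; the toy lexicon, the set of licensed part-of-speech sequences and the helper functions are all hypothetical, invented for illustration. It shows how grammatical filtering rules out the verb reading of "cook", and how a simple predicate-argument logical form might then be assembled.

# Step 1: part-of-speech filtering; Step 2: a toy logical form.
from itertools import product

# Hypothetical toy lexicon: each word mapped to its possible parts of speech.
LEXICON = {
    "John": {"noun"},
    "is": {"auxiliary"},
    "a": {"determiner"},
    "good": {"adjective"},
    "cook": {"noun", "verb"},  # ambiguous: the grammar must rule out "verb"
}

# Hypothetical set of POS sequences licensed by the toy grammar.
ALLOWED_SEQUENCES = {
    ("noun", "auxiliary", "determiner", "adjective", "noun"),
}

def disambiguate(words):
    """Keep only the POS assignments that form an allowed sequence."""
    candidates = product(*(LEXICON[w] for w in words))
    return [seq for seq in candidates if seq in ALLOWED_SEQUENCES]

def logical_form(words):
    """Build a predicate-argument structure: good_cook(John), not the converse."""
    subject, predicate = words[0], "_".join(words[3:])
    return f"{predicate}({subject})"

sentence = "John is a good cook".split()
print(disambiguate(sentence))  # [('noun', 'auxiliary', 'determiner', 'adjective', 'noun')]
print(logical_form(sentence))  # good_cook(John)

The point of the sketch is only to display the division of labour described above: the grammatical filter and the logical form builder operate purely on symbols, without any appeal to what "cook" means in the world.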
AI, in the sense intended here, is viewed not as a mere technology but as one of the fundamental, indeed historically foundational, disciplines of cognitive science, in which the contributions of psychology, linguistics and philosophy converge in the development of computational models of language understanding. NLU has among its objects the examination of how natural language comprehension research fits into the study of language in general, of what it means for a system to understand language, and of how natural language systems are generally organized. This requires not only the formulation of a theory of understanding, but also that AI contribute to the development of such a theory through the construction of computational models. According to Allen, who refers to theories largely accepted by cognitive scientists, "there are two motivations for developing computational models. The scientific motivation is to obtain a better understanding of how language works".12 Indeed, it is the computational model which makes possible the formulation of a complex theory of how language works. This is something which traditional methods are not able to realize in the form of a scientific theory: "But we may be able to realize such complex theories as computer programs and then test them by observing how well they perform. By seeing where they fail, we can incrementally improve them".13 In substance, simulation through computational models becomes for the mind sciences what the experimental method is for the physical sciences: "Computational models may provide very specific predictions about human behavior that can be explored by the psycholinguist. By continuing in this process, we may eventually acquire a deep understanding of how human language processing occurs".14 This aim lies at the basis of cognitive science and builds the bridge between analytical theories and empirical data. It is within this area of research that the need to analyze the cognitive content of words has emerged, in attempts to create artificial systems able to understand natural language; such attempts have required the detailed examination and definition of the question: "What is intended by the representation of meaning?" Answering this question has meant examining an area of semantics usually neglected by formal approaches, namely the lexicon. AI has put forward various well-known structures in order to give substance to lexical knowledge. Essentially, these represent meaning in two ways: the first is predicative, embracing such notions as the semantic network, the frame and the script, and attempts above all to identify the system of connections which exist among the words of a language;15 the second is procedural, and attempts, within certain limits, to capture the capacity of languages to form links with objects and events in the world.16
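The difference between the two styles of representation can be illustrated with a minimal sketch. Everything below is hypothetical and invented for illustration, not taken from the works cited: a frame-like predicative entry for "cook" recording its connections to other words, and a procedural entry whose "meaning" is a routine evaluated against a toy world model.

# Predicative style: a frame/script-like structure encoding lexical connections.
COOK_FRAME = {
    "isa": "person",
    "role": "prepares_food",
    "typical_location": "kitchen",
    "related_words": ["chef", "recipe", "meal", "kitchen"],
}

# Procedural style: the meaning of "good cook" as a test against a world model.
WORLD = {"John": {"profession": "cook", "skill": "good"}}

def is_good_cook(entity: str) -> bool:
    """Procedural 'meaning': check the toy world model for the relevant facts."""
    facts = WORLD.get(entity, {})
    return facts.get("profession") == "cook" and facts.get("skill") == "good"

print(COOK_FRAME["related_words"])  # connections among words (predicative)
print(is_good_cook("John"))         # True: a link to the (toy) world (procedural)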
3. MEANING "DOESN'T WORK THAT WAY"
Although cognitive science and cognitive semantics aim at the formulation of an empirical theory of natural language understanding, the particular choice of method and role of experimentation, i.e. the observation of psychological data and their simulation, leads us to doubt whether the alternative constructs to model-based semantics put forward up to now by cognitivists have been much more empirically solid.
And the assumption on which such models of language understanding are based—that providing a theory of comprehension processes means concentrating on "how the brain's representations and the world literally matched up: the world outside the brain was thought to be represented inside the brain by a series of discrete state symbols"17—proves to be false, as we shall see below. The question with which we once again take up the "philosophy and psychology" of language therefore becomes: what is necessary, in real terms, for cognitive semantics to become genuinely cognitive, and thus a hypothesis in all respects about the mind, about language and about the corresponding comprehension procedures?
To sum up the perspective examined so far: the understanding of language consists of a symbolic production process which coincides with a decoding process. Assuming that a sender has constructed a message starting from a particular thought (some kind of mental representation or intentional state, expressed in a language of thought), the coding process takes place through the translation of the communicative intent into something which can be transmitted and communicated, through the application of generative devices that associate a semantic representation with a phonological form. What is transmitted is a physical object: an acoustic signal carried by a physical medium, namely the air. In this process certain factors, themselves physical, can disrupt the communication (noise, for instance), but such problems aside, the understanding of the message is taken to occur symmetrically with the stages described for its production. In other words, a decoding process takes place: the perceived sounds correspond to the phonological forms of the phrases; these forms correspond to a semantic representation of the phrase (i.e., its logical form), which in turn is assumed to correspond to a thought communicated by the utterance. In the words of Dan Sperber, however: "[…] the common view of verbal communication is false and […] coding and decoding are just ancillary components of what is essentially a creative inferential process".18
This view of meaning, comprehension processes and communication has been, and indeed still is, the object of criticism even within the disciplinary paradigm it comes from. Such criticisms touch many areas which, if taken seriously, are problematic for the idea of comprehension as the decoding of a message. Here, to illustrate the point, the specific case of the understanding of metaphor will be used to show one of the possible ways in which the interaction of theoretical construction and experimentation may provide a suitable path towards a theory of meaning which is cognitive in a real sense. Quantitative, qualitative and neurophysiological experimental data highlight the fact that metaphor is problematic for theories based on computational processes: "Of the many factors which contributed to the paucity of research on figurative language comprehension, the instantiation hypothesis is perhaps the most onerous".19 More specifically:
as the mind was considered to be that kind of software program running on that kind of hardware, the lack of attention given to figurative language comprehension followed from an obvious source. Figurative language comprehension was considered a mere afterthought to solving the problem of literal language comprehension because Turing had shown that all such computational processes must ultimately decompose into those of literal comprehension. The mantra of this dogma was clear: Solve first how language and mind symbolically represent the world as a series of discrete states and the problems posed by metaphoric and figurative language comprehension will inevitably solve themselves.20
But this explanation conflicts with numerous recent studies on the comprehension of metaphor: with statistical data, with findings from priming and comprehension-task experiments and, last but not least, with the workings of the brain mechanisms involved in carrying out such tasks. The possibility of approaching the study of mind and language through the embodiment hypothesis is perhaps the novelty that will change cognitive science most promisingly, and certainly most dramatically, in the course of the twenty-first century: "Whereas previously cognition was considered to be primarily symbolic manipulation taking place only in the head, we are asking about the roles played by the body and the environmental condition".21 The first step towards this change requires the adoption of approaches based on real communication situations, of which theories can offer partial but plausible models of the cognitive mechanisms at work.
4. DATA INFORMS THEORY: FIGURATIVE LANGUAGE
Figurative language provides an enlightening example of the methodological problems encountered by cognitive science in the study of language. In the specific case of metaphor, we can first note that studies of metaphor share with the semantic analyses carried out in the cognitive field the problem of being largely conducted from a top-down perspective, or at least founded on limited observations and on examples based on the individual intuitions of the linguist. Examples are made up, and presumably reflect fundamental aspects of the "idealized" speaker-hearer rather than how people ordinarily speak or write in everyday discourse. Such examples, on the one hand, "do not reflect the true range and distribution of the phenomenon of figurative language use";22 on the other, they give rise to theories of meaning that are theories of literal meaning, in which "metaphor is simply a roundabout way to express literal semantics".23 The entry of empirical investigation into the formulation of theories has had, as a first positive effect, that of challenging certain dogmatic principles regarding the relationship between literal and figurative language. One of the first to be challenged was the proposal by Grice24 and, later, Searle25 that the comprehension of metaphor is a three-phase process.
According to this proposal (the standard pragmatic model), the literal meaning is computed in the first phase and rejected in the second; in the final phase, the non-literal version of the meaning of the phrase is constructed. However, this view appears to be compatible neither with quantitative analyses,26 which show the centrality of figurative uses in real texts, nor with the numerous experimental data comparing the comprehension of figurative language with that of literal language, in particular studies which examine reaction times. Some tests have revealed that non-literal expressions, such as idioms, required less time to be understood than corresponding literal expressions.27 Such cases of non-literality were, however, peculiar in that they were markedly conventional and familiar. Indeed, other studies have shown longer reaction times for the comprehension of referential and predicative metaphors as compared to literal phrases. In particular, in one of the most widely cited studies, Gibbs (1990) found that in general "readers can easily make sense of figurative referential descriptions of people, given appropriate contextual information",28 but that nonetheless "[r]eaders take more time to comprehend the figurative meanings of metaphoric and metonymic referential description than they do to process literal referring sentences".29 Gibbs tested speed of response in tasks involving the comprehension of figurative reference, i.e. phrases used to identify some entity by giving it a description which can be demonstrative (e.g., "that"), literal (a proper noun) or figurative (e.g., "that butcher", referring to a surgeon). Another study, on predicative phrases, obtained similar results: Gildea and Glucksberg showed that "sentences of the form Some/All X are Y take longer to judge as literally false when they have readily interpretable non-literal meaning [e.g., 'Some jobs are jails'] than when they do not [e.g., 'Some roads are jails']".30 More recent studies, too, have repeatedly falsified the prediction that "literal analysis must be completed before metaphoricity can even be entertained" and hence that "a metaphorical use of an utterance should inevitably take longer to understand than a literal use of the same sentence".31 Indeed, the longer reaction times obtained in a number of experiments were simply not statistically significant;32 in other cases they depended on factors external to the literal/figurative distinction, such as the type of test or the context in which the stimulus was provided, with respect to which there was nothing specific to the figurative meaning that made its comprehension more effort-demanding. Thus, the literal-first (three-step) account of comprehension is supported by reaction-time studies only if one assumes that longer reaction times are due to the time it takes to recognize violations of the literal strategy. If the perspective is moved away from compositional theories in semantics and coding theories of communication, we arrive at completely different conclusions:
When a speaker or writer uses any expression, literal or metaphoric, that manifestly has more content than necessary to efficiently pick the referent, there is a hint of extra effect. Take, for example, the choice between "the author of Hamlet" and "William Shakespeare". The extra effort in processing a metaphorical referring expression is just a consequence of a more general pragmatic property of referring expressions, and is not specific to metaphor.33
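The statistical point at issue here, namely whether a numerically longer mean reaction time counts as reliable evidence, is easy to illustrate. The sketch below uses simulated, purely hypothetical data (not the data of the studies cited) and a standard Welch two-sample test.

# Simulated reaction-time comparison: metaphorical vs. literal sentences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt_literal = rng.normal(loc=1180, scale=150, size=30)   # ms, hypothetical
rt_metaphor = rng.normal(loc=1240, scale=150, size=30)  # ms, hypothetical

# Welch's t-test: a longer mean RT for metaphors supports the three-stage
# model only if the difference is statistically reliable.
t, p = stats.ttest_ind(rt_metaphor, rt_literal, equal_var=False)
print(f"mean difference = {rt_metaphor.mean() - rt_literal.mean():.0f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")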
A step forward in exploring the relationship between literal and figurative language requires, at this point, an examination of the cognitive processes involved in the task of understanding. Empirical evidence of extra effort (which appears to the researcher as a longer reaction time) should not be interpreted as proof of the veracity of the three-stage process, in that the reaction times observed are too fast to be compatible with the idea accompanying the standard pragmatic model. On this model, literal language comprehension would be accomplished automatically, while the comprehension of metaphoric language would require attended processing. Moreover, recent studies on the activation of brain processes show that the lexical and cognitive resources activated in tasks involving the comprehension of literal and metaphorical phrases neither proceed separately and in parallel, nor are they organized in series. Such a view would require the identification of two areas of the brain, activated independently and simultaneously, with one of the two prevailing in a possible final phase through the deactivation of the other. In particular, such a view has been associated with the hypothesis that metaphorical processing is lateralized in the right hemisphere of the cerebral cortex. However, the very first experiments34 showed the importance of the interaction between the two hemispheres in the process of understanding forms of figurative language, while highlighting the positive contribution of the right cerebral hemisphere. This emerges both from studies of right hemisphere-damaged (RHD) patient populations and from experiments carried out non-invasively through the recording of event-related brain potentials (ERPs). The ERP technique in cognitive neuroscience measures the brain's electrical activity in response to an internal or external stimulus, such as pictures, words or sounds. This method allows scientists to observe human brain activity that is assumed to reflect specific cognitive processes. The ERP appears as a waveform containing a series of deflections visualized as positive and negative peaks. Such peaks are often referred to as components, and they are characterized by their polarity (positive or negative), their latency (the time point at which the component reaches its largest amplitude) and their scalp distribution. The N400 component, for instance, is assumed to reflect the degree of ease of processing in different language tasks: priming, for example, usually facilitates the response in semantic integration tasks, and the effect is thus a decrease in N400 amplitude. Returning to our topic, let us examine what happens in the ERP when the subject has to respond to literal and metaphorical sentences. According to the three-phase model, the ERP should show an initial effect, peaking at the N400, reflecting the evaluation of the initial incongruity, followed by a further peak when the metaphorical information is accessed. This possibility has been tested by comparing the response to metaphorical phrases ("Those fighters are lions") with that to corresponding literal control phrases ("Those animals are lions"). The results showed that there was indeed a greater initial N400 peak, in line with the predictions of the standard model, but, contrary to the expected outcome, there was no successive peak.
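As background to these findings, the following minimal sketch shows, with synthetic data and hypothetical parameter choices (sampling rate, epoch length, measurement window), how an ERP component such as the N400 is typically extracted: many stimulus-locked epochs are averaged, and the component is quantified as the mean amplitude in a window around 400 ms.

# Averaging stimulus-locked epochs to extract an N400-like component.
import numpy as np

fs = 250                                  # sampling rate (Hz), hypothetical
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch: -200 ms to 800 ms
n_trials = 60

rng = np.random.default_rng(1)
# Synthetic single trials: noise plus a negative deflection peaking near 400 ms.
n400 = -4e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))   # volts
epochs = rng.normal(0.0, 5e-6, (n_trials, t.size)) + n400

erp = epochs.mean(axis=0)                 # averaging cancels random noise

# Quantify the component as mean amplitude in a 300-500 ms window.
window = (t >= 0.3) & (t <= 0.5)
print(f"N400 mean amplitude: {erp[window].mean() * 1e6:.2f} microvolts")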
The data obtained seemed more congruent with the model according to which the processes allowing access to literal and metaphorical meanings run in parallel. Thus: "if N400 amplitude reflects the difficulty of comprehending literal meanings, it should also reflect the difficulty of comprehending metaphorical meanings".35 On this model, the N400 amplitude is instead linked to the difference in comprehensibility between familiar metaphors ("Those fighters are lions") and unfamiliar ones ("Those apprentices are lions"). This is, indeed, what happened, but not when the phrases were presented out of context. Both the studies on reaction times and the data on the activation of brain processes lead to the conclusion that, while the standard pragmatic model of meaning and metaphor is not compatible with the empirical data obtained, the experimental findings do not offer a clear and uncontroversial theoretical alternative. The task of theoretical interpretation does not, therefore, simply consist of formulating a descriptive model which reflects the data; it should, on the one hand, give a meaning to the empirical findings and, on the other, provide a basis for future research. And the empirical investigations outlined here lead us to think of metaphor, and of meaning in general, in terms of positions in which the cognitive activity of understanding is strongly guided by contextual factors. In order to test the effects of contextual factors (the context-dependent hypothesis), Pynte et al. measured ERPs while subjects read familiar and non-familiar metaphors accompanied by contexts which were relevant (e.g., for the lion example, "They are not cowardly") or irrelevant ("They are not idiotic").36 It was found that: "metaphor familiarity did not affect the ERPs, the relevance of context did".37 To conclude this section, therefore, both the empirical data and the discussions arising from them lead us to hypothesize that comprehension processes, usually defined on the basis of a compositional process used for literal interpretations, should in reality be reconsidered in the light of the fact that literal comprehension is neither cognitively primary nor prior in interpretation. Indeed, metaphorical meanings "are available quite early in processing" and "there does not seem to be much evidence to support the view of metaphor comprehension as involving a qualitatively distinct processing mode";38 on the contrary: "Figurative and literal language are processed simultaneously and share much substructure".39 From a theoretical point of view, constructing models of meaning closer to the reality of communication and to the constraints of the embodiment hypothesis means, in our opinion, modifying attitudes in at least the following ways. Firstly, with respect to theories of figurative language, it means adopting positions which do not treat "metaphoric language as a single monolithic category". Secondly, with respect to meaning in general, it means applying approaches which go beyond the traditional view that words reflect what they represent, in favor of the idea that links are established among associated experiences and that, therefore, "Language comprehension involves simulating the situation being described".40 Thirdly, with respect to communication, it means making reference to processes more extensive than the analysis of single words, an analysis which does not allow the full evaluation of the contribution made by context.
Fourthly, with respect to neurophysiological theories, we are in agreement with Coulson, according to whom:
as we progress through the twenty-first century, it will be important to move beyond the traditional question of the right hemisphere's role in metaphor comprehension to address the particular cognitive and neural underpinning of this complex process. By combining information from the study of brain-injured patients with behavioral, electrophysiological, and neuroimaging data from healthy participants, it is possible to learn a great deal about the neural substrate of particular cognitive processes.41
The analysis, from different perspectives, of situations where effective communication takes place suggests that context plays a crucial role in comprehension processes, a role greater even than that of the difference between literal and non-literal meaning examined here. This is reflected in the differences in reaction times and the variations in effort which are evident in interpretation processes. Consequently, there is an increasing need for the integration of syntactic, semantic and pragmatic factors within theoretical models of the comprehension process.
NOTES
1. See Engel (1996).
2. Cognitive semantics has, in fact, revolutionized the "post-Fregean" approach to the study of meaning, passing from the study of the truth conditions of utterances to the description of their cognitive contents.
3. McGilvray (2001, p. 5).
4. See, e.g., two recent studies which "measure" in opposite terms the amount of information contained in the linguistic stimuli available to children in the developmental phase of language acquisition: Pullum and Scholz (2002); Reali and Christiansen (2003).
5. Dennett (1998, p. 262).
6. See Edelman (1987).
7. Rohrer (2001, p. 38).
8. Here cognitive semantics is understood as the set of approaches oriented towards the study of meaning as a comprehension process, and not only the approach to the study of meaning put forward by George Lakoff and the group of researchers at Berkeley who initiated, in open opposition to generative linguistics, studies in cognitive linguistics, within which the area defined as cognitive semantics occupies one of the widest sectors.
9. See Chomsky (1975).
10. Rohrer (2001, p. 28).
11. Although AI foresees a role for these levels of analysis, they are considered separate from syntax and semantics. In any case, in these models pragmatics, similarly to phonology, syntax and semantics, is considered a mental device devoted to a distinct level of linguistic competence (see Sperber and Wilson 1986).
12. Allen (1995, p. 2).
13. Ibid.
14. Ibid.
15. See Rich (1983).
16. See Winograd (1973).
17. Rohrer (2001, p. 28).
18. Sperber (1994b, p. 180).
19. Rohrer (2001, p. 27).
20. Ibid.
21. Ibid., p. 38. See also this volume, pp. 14 ff.
22. Peters and Wilks (2003, p. 161).
23. Rohrer (2001, p. 5).
24. See Grice (1975). Grice suggests that people implicitly agree to cooperate when talking with one another: in communicating, people implicitly assume that statements are both true and informative.
25. Searle (1993).
26. See Gola (2005).
27. Gibbs (1980).
28. Gibbs (1990, p. 64).
29. Ibid.
30. Glucksberg, Gildea and Bookin (1982, p. 94).
31. Gerrig (1989, p. 236); see also Gerrig and Haley (1983).
32. Gerrig (1989).
33. D. Sperber, contribution to the discussion on Metaphor and Effort, available at: http://www.phon.ucl.ac.uk/home/robyn/relevance/relevance_archives/0163.html.
34. Winner and Gardner (1977).
35. Coulson (2005, p. 22).
36. Pynte et al. (1996).
37. Coulson (2005, p. 23).
38. Ibid., p. 28.
39. Rohrer (2001, p. 27).
40. Coulson (2005, p. 29).
41. Ibid., p. 35.
PART III DIMENSIONS OF AGENCY
A. Self-Knowledge
CHAPTER 11
THE UNCONSCIOUS
Giovanni Jervis
1.
The concept of the unconscious has long been associated with the range of theories in psychoanalysis that have emerged over the past century, theories fundamentally derived from the ideas first propounded by Sigmund Freud. The term "unconscious" (commonly used as both an adjective and a noun, "the unconscious") covers a wide semantic area encompassing notions that are now widespread in popular culture. Freud's status as an icon of wisdom remains evident today in terms such as psychic repression and—of course—the Freudian slip, which directly bears his name. However, the past few decades have witnessed changes and developments in methodological approaches, which have led to a reappraisal of the subject in question. The science of psychology has deconstructed the traditional meaning of "conscious" and has indirectly focussed attention on what can be termed "non-conscious", or "unconscious". In particular, the concept of the cognitive unconscious, first introduced in the 1980s, has put the idea of awareness into a novel context that has challenged the traditional confines of the notion.1
The source and current of ideas that gave birth to the notion of the unconscious were quite different from those that have influenced modern-day psychology. The first experimental research, undertaken around 1880, aimed to investigate the inner workings of the human mind, whose secrets could, it was believed, be uncovered through introspection, leading to an understanding of the attributes thought to constitute human consciousness. Psychology was thus seen as a kind of phenomenological investigation of subjective self-awareness. It was only in the second decade of the twentieth century that the behaviourist revolution defined psychology as the science of (objective) behaviour, a definition that remained common currency right up to the first systematic studies of cognitive processes during the 1950s and 60s.
Throughout his working life, Freud was of course influenced by the knowledge and ideas current in the last years of the nineteenth century. Yet rather than viewing psychology as an introspective investigation of human consciousness, Freud's novel contribution to the field of empirical, clinical and intuitive psychology was to introduce the idea of the unconscious in relation to his theory of sexuality. He considered the unconscious to be an area of the mind that existed beneath the "visible" layer of consciousness and which was manifestly responsible for certain traits of human behaviour.
The term he suggested for his theory of the mind, "meta-psychology", derives from his conviction that psychology ought to go beyond the traditional precept of the conscious as it had been defined in earlier research.
The avant-garde philosophical and literary movements that emerged in the last decades of the nineteenth century were highly critical of the classical conception of the person (see Nietzsche in particular), but could not dislodge the deep-rooted conceptions of the mind and human nature prevalent at the time. Such beliefs were derived from Cartesian philosophy, which drew a clear distinction between the body, considered to be a machine, and the mind, which was identified with the soul. Since Descartes was heavily influenced by Christian beliefs, he identified the mind with its most noble faculties, faculties found only in human beings. To nineteenth-century philosophers, consciousness (synonymous with moral awareness) was inseparable from human language and free will. Standing in stark opposition to these conceptions, Freud's affirmation that thought, dreams and daily routine actions were influenced and controlled by sexual instinct was to most people innovative and even scandalous.
In the fields of philosophy and empirical psychology, the terms subconscious and unconscious were already widely used in the last decades of the nineteenth century.2 Although avant-garde, the novelty of Freudian thinking must therefore be seen in the context of the ongoing cultural debates of his time. In the first decades of the twentieth century, psychoanalysis was associated by the general public with an anti-conformist movement claiming the right to express sexual freedom. In fact, although Freud was much more of a moral traditionalist than his followers believed, many observers perceived the rift with tradition in the fact that he linked the idea of the mind to a naturalistic conception of human nature. What particularly distinguished Freud, however, was his thesis that, by its very nature and owing to the (unconscious) activation of our defensive psychological mechanisms, a significant part of the mind has been actively excluded from awareness as such. The idea inherent in the Freudian term "unconscious", then, was that some of our desires, fears, contents of memory and fantasies have been repressed from consciousness, but are still present in the hidden frame of many of our thoughts and actions. In this sense, Freud's idea is more innovative than its association with Pierre Janet's term "subconscious" might suggest. Janet merely took up an idea already explored by philosophers such as Leibniz, namely that non-conscious psychological automatisms exist in all of us.3
There was, however, a certain ambiguity in the theoretical underpinnings of Freudian thinking. However much a materialist (and fundamentally an anti-dualist) he may have been, Freud ultimately considered the unconscious to be an autonomous entity in which instinctive forces were at work. The combination of such forces, together with their associated manifestations and the structure of the mind as a whole (initially described in terms of the Conscious, the Preconscious and the Unconscious, and later of the id, the ego and the super-ego), was described as a separate world with its own independent structure, whose relationship with the workings of the brain and the biological system in general was initially elaborated in purely hypothetical terms that were subsequently discarded.
Far from being seen as mere metaphors, the unconscious and the interior censor, the id, the ego and the super-ego were dealt with as real entities. In spite of its materialist origins, in the eyes of the inexpert public psychoanalysis still had to justify its apparently dualistic conception of human nature. For the most naive adherents of Freudian doctrine, the dynamics of the human psyche were viewed as autonomous and detached from the physical body. This misinterpretation was heightened by the register Freud used in his writings: the mind, the conscious and the unconscious were described in a lively but non-scientific style, which deliberately favoured the everyday vocabulary of daily life. In so doing, Freud made his ideas easily accessible to anybody with an average level of education. The most widespread interpretations of his sexual theories generated popular practical approaches in pedagogy and in matters of mental health. For almost a century, these interpretations formed the basis of a highly appealing form of intuitive psychology that was comprehensible even to the profane and uninitiated.
2.
For a number of years now, Freud's ideas have been the object of largely justified criticism, which has, however, tended to obscure the impact they had on twentieth-century culture. Today, there is little doubt that his ideas were not founded on any solid methodology.4 Indeed, not only did the birth and development of psychoanalysis as a discipline have no real links with the experimental research being undertaken at the time, but the discipline was also excessively dependent on empirical observations made in the psychoanalyst's office. Like most medical studies in the nineteenth century, it was effectively limited by its reliance on subjective and piecemeal observations randomly gathered by doctors treating their own individual patients. Even in its theorisations, Freudian psychoanalysis was built more on analyses of single clinical cases, on impressions and hunches, than on systematic ideas. Only in recent years has it fully emerged how much psychoanalysis was in practice fashioned on a fragile theoretical base. Above all, the concept of the mind's energy-driven instinct (one of the principal constructs of psychoanalytical doctrine) was shown to have little concrete scientific foundation. Other aspects of Freudian teaching were subsequently shown to be inconsistent, most notably his ideas on the emotional development of infants, which were consistently challenged by research undertaken from the 1940s and 50s onwards.
Fundamentally, it can be said that the Freudian concept of the unconscious incorporates both a strong and a weak element. The strength of Freud's idea lies in the hypothesis that every one of us has a hidden, or even undeclared, interest in outwardly manifesting potentially spurious characteristics of ourselves, our thoughts and our actions. What counts here is not so much the pure and simple presence of structures and contents of the mind that are inaccessible through introspection, but the existence of a self-apologetic defensiveness, or rather a systematic tendency towards self-deception, within our everyday thought processes.
Thus the concept of the unconscious introduces a critical theme explicitly present in Freud: the tendency of the mind to create illusions.
The weak element in Freud's concept of the unconscious, by contrast, is the suggestion that the inner mind contains some important secret to be revealed, a kind of significant personal or supra-personal truth, or even a trove of wisdom. Unlike Jung, who believed people collectively shared secret aspects of the soul, Freud considered the unconscious to be a set of essentially personal psychological dynamics, regarding a part of the mind that dealt with one particular individual's problems and concerns. Yet there was a certain ambiguity in his theory on this very point: the Freudian "unconscious" apparently contained messages and emotions more keenly alive and real than anything an individual's consciousness might be capable of producing. What is interesting to note, though, is that this way of conceiving the unconscious has always proved a highly appealing form of popular psychology. It is somehow gratifying to believe that in the depths of our minds lies a secret waiting to be uncovered.
The fate of another hypothesis is quite significant with regard to this latter point. Historically, the concept of the unconscious is rooted in the understanding of trauma as something akin to a pathogenic secret. When Freud first began to elaborate his ideas, he believed he had discovered that traumatic events in infancy, namely sexual abuse by adults, were wilfully removed from the conscious mind, only to exert a negative influence later in life on an individual's psychological stability. There was, however, one aspect of hypnosis techniques and free association in psychoanalysis to which he paid little attention, although he was aware of it early on: by working on uncontrolled mechanisms of self-suggestion, these techniques risked generating false memories, imaginary descriptions of events that never occurred, rather than evoking real ones. This uncertainty seemed compatible with the hypothesis that in many other cases there might exist hidden episodes of real violence (sexual or otherwise) against pre-pubescent minors. All this would not be worth mentioning had there not been, in the course of the last two decades of the twentieth century, a widespread concern in some countries about the effects of psychological traumas in infancy. Such concerns were generated by socio-cultural factors such as the feminist movement, new worries regarding child abuse, and the popularisation of psychoanalysis among social classes who had no prior knowledge of the subject. Although in part certainly justified, this alarm was further increased by cases of false recollections induced by psychotherapy and by barely competent hypnotists. Even as recently as the 1980s, few people realised how easily children can be influenced by the prompting of teachers and psychologists to report unfounded episodes of sexual molestation. Likewise, there was initially a lack of awareness of how easy it is to make even adults under therapy "remember" distant episodes with great precision and sincerity—events that not only never happened but would have been highly improbable.
This phenomenon has its importance in the field of cultural psychology because it shows the continuing influence of Freudian thinking, albeit in simplified and somewhat distorted form. Also, through the study of false recollections, it raises the question of the connection between unconscious processes and memory.
Indeed, false recollections are one of numerous examples suggesting that the workings of the unconscious are more pervasive than Freud himself believed.5 We must also take account of modern research on traumatic events in general. Exceptionally traumatic states resulting from extreme conditions (torture, war shock, etc.) can in some cases produce long-lasting negative effects on an individual's mental stability. This is because the traumatic event is not repressed from memory but instead leaves permanent traces of anxiety and can be relived on an intense emotional level. Moreover, it is often the case that people suffering from various forms of neurosis will defensively attribute the cause of their anxieties to a single episode that was undoubtedly negative but not an intensely emotional one (e.g., a bereavement or an abandonment). Such episodes are then singled out as the sole reason for subsequent mental suffering and disturbances, which in reality originate from more complex and not easily identifiable causes. It has frequently been observed that when people try to come to terms with their unresolved past problems, and look for ways to interpret and explain them, they tend to attribute these problems to one particular event (e.g., the death of a parent, an attempt at seduction, a frightening experience) even when there is no evidence of that episode playing a crucial role in the pathogenesis of their disturbances. There is a strong tendency in many to rationalise and interpret problems from the past by projecting responsibility onto others, and being vindictive or persecutory towards them, as the supposed cause of their mental suffering. It should also be noted that, contrary to what Freud firmly believed, there is no proof that removing or cancelling the memory of an unpleasant event from our conscious minds is in itself sufficient to give rise to suffering and long-term mental pathologies of any kind. Evidence of this exists in particular with regard to attempts at seduction and instances of sexual molestation in childhood.6 Although the decline of the myth surrounding "repressed traumas" led to a re-dimensioning of the most naively mechanistic aspects of the Freudian concept of the "unconscious", and did undermine one of the main principles of Freudian theory, it has apparently not diminished the widespread interest in psychoanalysis and its attempt to establish a theory of the unconscious. The subject of psychic trauma is thus a valid example of a two-sided theme: on one side stand the naive aspects of Freud's theory of repressed memory, today no longer acceptable; on the other, the continued interest in further exploring the question of distorted recollections, an issue that has prompted investigation into a more general theme of Freudian origin: self-deception. From within the psychoanalytic movement itself, various writers active in the last decades of the twentieth century noted that the processes excluding certain contents of the mind from the area of consciousness are vaster and more complex than Freud surmised with his theory of repression. The person credited with being the most influential in introducing this idea into psychoanalysis was John Bowlby, who coined the term "selective exclusion of information".
It is not the author's intention here to investigate how the Freudian concept of the unconscious has influenced research and writing, the ethics of daily life, the concept of infancy, pedagogy, the evolution of sexual morals, sociology, or the various branches of psychology as a science, in particular social psychology.
However, it is legitimate to hypothesise that the success of Freudian theory has been instrumental in changing the concept of human nature across the entire field of twentieth-century culture. It could also be claimed that, among the intellectual classes in particular, Freud's idea of the unconscious indirectly contributed to opening the way for the development of psychology as a science. European and American culture eventually had to accept a new conception of how thought processes worked—one which stressed that human consciousness cannot be considered a primary entity because it is ultimately conditioned by biological factors.
3.
The methods and approaches used in modern psychology today are in stark contrast to those used by Freud. The basic procedure in Freudian psychoanalysis is to start by focusing on ordinary human consciousness and then to investigate how truthful individuals' statements and declarations really are, reminding them that there may be factors beyond their control that affect the actions of their daily lives. Freud's starting point is thus to examine the complexities within the banalities of everyday life—especially in personal relationships—and to try to establish how other, simpler factors can influence them. His erudite literary description of the unconscious is fundamentally introspective and appeals to the reader's good sense and natural spirit of criticism, using a form of rhetoric that aims more to convince than to explain.7 Contrariwise, modern psychology takes the scientific approach of wanting to demonstrate rather than convince, and does so not by invoking introspective consciousness but by applying techniques of objective scientific analysis. Taking a methodical "bottom-up" approach, it examines how our most basic psychological mechanisms (akin to the learning processes of relatively simple organisms) can be gradually revealed and provide the information we need to understand and identify ourselves as thinking, conscious beings. Unlike in Freudian analysis, where consciousness is treated as given and as the basis on which to question individuals about the hidden presence and influence of unconscious factors in their lives, a basic premise of modern psychology is that the essential workings of nervous systems in organisms in general conceal nothing of what has habitually been termed consciousness. Only subsequently may the question be raised of how, and with what degree of appropriateness, the term consciousness might be applied to more complex beings, and naturally to man. What comes to light from the above premise is how much naive psychology there was in Freud's phenomenology. His understanding of the human mind was anthropocentric and adult-centred (even in his approach to the minds of infants8), and he was ultimately unable to break away from his view of consciousness as a primary quality of the mind. Although Freud did attempt to describe and locate several important dynamics of consciousness/unconsciousness and significant mechanisms of self-deception, he only partially revised his a priori vision of what constitutes consciousness.
Conversely, the modern scientific concept of consciousness starts with the hypothesis that there is a fundamental characteristic common to both human beings (newborn babies and, to a large extent, adults as well) and animals: our mechanisms of learning, the functioning of memory, the acquisition of skills, and our cognitive processes in general are practically impervious to self-monitoring. Therefore, excluding isolated cases of partial analysis, they cannot be objects of introspection. Historically, before the full development of the cognitive sciences, this course of enquiry did not produce any noteworthy results with regard to gaining a better insight into consciousness and the unconscious mind. Cognitive studies have since helped bring to light the way in which animals interact with their environment and are conscious (i.e. vigilant, when not asleep) inasmuch as they continuously process a flow of sensorial information to elaborate and constantly modify their mapping of the immediate environment and their daily operational plans of action. They avail themselves of a form of consciousness that can be defined as primary. Of course, this concept is by no means new: "simple consciousness", as a direct awareness of the outside world, was the term first used by Bertrand Russell in 1912 to characterise animal consciousness as quite distinct from the intrinsically human quality of being self-aware and "knowing that we know".9 Moreover, with the exception of man, only very few species, such as chimpanzees, orang-utans and dolphins, can be said to reach a state of partial self-awareness, one in which they are able to make a clear distinction between their own physical bodies and the surrounding environment. This occurs, for example, when a chimpanzee recognises itself in the mirror, or uses a mirror to examine parts of the body not normally visible, like the inside of the mouth, or recognises itself recorded on film.10 This phenomenon can also be observed in small children over the age of about 15-18 months11 and demonstrates the beginning of physical self-monitoring: the focussing of attention on the material agent as the (physical) executor of actions. Physical self-monitoring takes place when actions are carried out in full objective awareness of the separate existence of the body itself, and is indicative of a fundamental awareness of self. Clearly, this is quite different from the basic interactive monitoring of the environment that characterises the conscious awareness of all species of animal. Yet it is evident that this is still not enough to constitute true self-awareness. It is only in human beings after the age of about three that the object on which we consciously focus attention is not only the external environment or the physical body, but our very consciousness itself. Thoughts, memories and dreams are all objects that we reflect on, or rather monitor, and perceive as being open to definition and description because they form part of our interior, virtual "theatre of the mind". This is human consciousness in the full and traditional sense, as Freud also understood it; in other words, the complete self-awareness that is the first key to describing the workings of the inner mind.
When this concept of "complete self-awareness" is put under scrutiny using the instruments of modern cognitive psychology, it is found to be a more precarious and disconnected series of functions than Freud had surmised. Achieving full possession of the self enables us to engage in conscious thought, to remember and dream, and also to describe and convey emotional states such as euphoria and fear; but it requires a good command of verbal skills as well as an aptitude for developing higher cognitive abilities, which is of course only possible in children over the age of three or four. Introspection is further conditioned by cultural factors, such as education. People with a low level of education are often unable to conceptualise anxiety, and tend to experience it as an objective physical complaint, unrelated to the mind. Likewise, studies have shown that children aged three to five, and people in primitive cultures, do not always perceive dreams as products of their own minds, but rather see them as visions originating from outside.12 The idea that mental phenomena can be perceived because they are virtual objects of our vigilant attention (inasmuch as they are susceptible to inward-looking self-monitoring) is associated with the hypothesis that all mental phenomena, like dreams, are products of our own individual minds.
Studies on memory have been important in revealing the precarious nature of introspective consciousness, since memory is a central component of consciousness itself.13 If we reflect on how we view both recent and remote past events, it becomes clear that memory does not function in a uniform manner. The function of memory is parallel to that of "knowing how", which is infused with implicit knowledge.14 Yet this is not akin to the working of explicit memory, the ability to express, describe and declare meaning. In other words, knowing "how" to do something is not the same as describing "what" you are doing, and so may not be transmittable knowledge.15 For example, I might be aware that I am adept at using a computer keyboard even though I would never be able to say (or draw on paper) whether the Z or the P is on the right- or left-hand side of the keyboard. Moreover, if we look closely at the part of our memory that deals with describing, referring to and communicating details of events, it becomes clear that, rather than being tantamount to an objective consciousness, its function is more often one of enabling us to engage in cultural mediation and the narrative storing of daily experiences. The narrative function belies the conventional aspects of our referential memory. Similar considerations can be made with regard to emotions. We can question whether or not we are able, by means of conscious introspection, to readily access intense, short-lived emotions such as euphoria or fear. We can, for example, experience fear without realising it. We commonly tend to describe and "reconstruct" emotions rather than directly perceive them through introspection. Indeed, in order for us to knowingly monitor our emotions, we probably have to perceive them indirectly, by taking a third-person position.
In practice, then, being able to observe “from the outside” that we are showing fear or acting nervously seems to be a necessary condition for us to realise that we do in fact “feel” nervous or afraid.16 Just as it is not feasible to draw a clear boundary between “knowing” and “not knowing” (for example, a computer keyboard), we can equally well affirm that, in general, what we call “awareness of self” comprises elements of self-monitoring which are not only affected by fluctuations of attention but are also subject to modes of behaviour that are determined by culture and language. In short, what we might call the “non conscious”, or “the unconscious”, incorporates a wide series of disjointed phenomena. When used to mean “unable to access the self” (fundamentally, “lack of self-awareness”), the term “non conscious” covers a far wider range of meaning than Freud believed. Events experienced through the unconscious include those controlled by our nervous systems, which have little or nothing to do with the way we formulate knowledge as such (for example developing automatic patterns of behaviour, such as the physical movement of walking or—to a large extent—acquiring skills like riding a bicycle). But they also include both the building of cognitive mindscapes inaccessible to consciousness (see the example above of the computer keyboard, which we might use equally well with one hand or two, or with our feet or nose, but which we cannot verbally describe or draw) and forms of alertness and interaction with our surrounding environment which do not require introspection. An example of the latter might be a cat that pounces on a mouse in a manner we deem intelligent, even though we have no indication that the cat can make a clear distinction between its own body and the surrounding environment, or even that it has any conception of its own existence.

It is not easy to summarize how forms of exchange and mediation come about between the “non-accessible” (or rather objectively monitor-impervious) areas of the mind and the “accessible regions”, which are open to description and as such can be mapped within the sphere of self-consciousness. What sometimes occurs is that rigid barriers spontaneously form to block conscious access to memories of, for example, an embarrassing episode that is subsequently erased from the mind (Freud’s “repressed memory”). More often, though, our implicit recollection of a particular moment transforms into the explicit recollection of another event. Essentially, it is these fluctuations that prevail. In practice, distractions, forgetfulness, partial awareness or temporary forgetting of learned knowledge constitute the vital fabric of our minds. Introspection is a random, partial and unstable phenomenon. Only rarely do we have occasion to describe the functioning of our personal thought processes. To some extent, one such instance is when people are required to describe the stages followed in problem-solving exercises such as tests in logical reasoning. But even here, close investigation shows that self-controlled reasoning is often lacking, and implicit operational modules take over. As research by Amos Tversky and Daniel Kahneman has shown,17 even in the apparently most rational decision-making processes, there are irrational procedural modules at work whose existence we are unaware of, but which will influence outcomes.

It is therefore worthwhile to take a fresh look at the Freudian concept of the conscious/unconscious mind. Full consciousness, even in cases where it can be said to exist, is more socially conventional than it seems18 and is often of a purely self-defensive nature. For instance, when individuals perform a simple action, they may be perfectly able to explain why (in their sincere opinion) they have done it, but the explanation given does not necessarily concord with the actual sequence of events.
Following on from the first experiments in post-hypnotic suggestion, there have been a number of further experiments showing how our behaviour can be determined by motivational factors inaccessible to the conscious mind. The best-known example used to illustrate this is when persons are made to listen to two recordings (dichotic listening) through separate earpieces; after a brief moment of confusion, they will spontaneously direct their attention towards one of the voices, and repress all awareness of the other recording. Yet it is also fairly easy to demonstrate that when explicitly asked, for the purposes of an experiment, to follow the instructions provided by the recordings, people spontaneously take heed of instructions not only from the attended ear but also from the repressed one.19 This is one of several instances (observed in experimental conditions or otherwise) that demonstrate how it is a combination of both inaccessible motivating factors (e.g., subliminal cognitive inputs) and factors that are accessible through introspection which ultimately determine our behaviour and everyday choices. Further well-known clinical observations that have challenged the traditional notion of the boundary between “knowing” and “not knowing” include Weiskrantz’s blindsight, a phenomenon noticed in patients who show discrete visual perception even though a damaged occipital lobe means that, on a normal conscious level, they are totally blind.20

It ought to be stressed that there is a wide discrepancy between the motives people offer to explain their actions and the real motivations (or more precisely, the multiple causes) behind the actions themselves. In the experiment on dichotic listening, it is easy to manipulate the motivation of the experimental subject in order to guide him/her towards making specific choices. Yet what appears most surprising is that people have no hesitation in claiming their behaviour is motivated by rational thinking, which is almost always demonstrably false. Likewise, in group experiments it is fairly easy to direct all the participants bar one to persuade and bring pressure on the unknowing participant by means of suggestive induction. The unsuspecting victim will always attempt to justify his/her choices with sincere but imaginary motives, intent on demonstrating that those choices are dictated by his/her own absolute and rational autonomy, unaffected by any possible external influences.

Social and group psychology has done the most to further experimental investigation into the discrepancy between the root causes that shape our routine behaviour and the explanations commonly given for such behaviour by the people subject to these experiments. The most typical cause of this discrepancy is the (so-called) mechanism of rationalisation, the best-known illustration of which is the classic fable of the fox and the grapes, which nicely depicts the rationalisation of disengagement. Experimental research in various analogous situations has highlighted how the phenomenon of cognitive dissonance demonstrates that people need to find artificial motivation and sense if they are to come to terms with long, frustrating tasks that often seem pointless to undertake. These are all clear cases of rationalisation of engagement.21 Following this line of thinking, it would seem legitimate to claim that “explaining oneself” (“knowing why”) is more a justification than a description of one’s actions.
Take, for example, the most basic of questions: “Why are you here?” If any individuals find themselves in a specific place at a certain time, it is unlikely they can pinpoint the interplay of factors or complex series of motives that have led them to be in that exact place at that precise time. But they will certainly have no hesitation in providing convincing explanations to justify their actions. In short, people can seldom say why they are there, but can always assert that it is right for them to be there.

In brief, the heavy criticism levelled at the inherent conceptual naivety in certain ways of describing consciousness appears to want to “revenge the unconscious” and challenge the very notion of consciousness itself. Of course, the concept of consciousness cannot be summarily discarded, if only because valid and apposite reference can be made to the (partial) phenomenon of verbally accessing the self in certain mental processes. Typically, this occurs when we give formal, “step-by-step” syllogistic accounts of our reasoning. We cannot deny that these phenomena exist even if we can legitimately question their accuracy and consistency. Still, we can advance the more radical hypothesis that human consciousness is not merely a continuous cognitive state but is chiefly characterised by the capacity to explain one’s actions ex post. According to this latter hypothesis, all mental processes are fundamentally unconscious or, as it were, automatic, but we tend subsequently to (partially) describe and continuously account for them with the aid of conventional mental constructs, such as “thought”, “choice”, “inspiration” and “will”.

This gives rise to an even more radical, sceptical hypothesis, which claims that the elaboration of choices is by no means the linear process it was traditionally taken to be, a view that rested on the supposition that the human mind is endowed with informed rationality. Such a sceptical perspective calls into question the very meaning of the expression “free will”. Although we are convinced that we are fully aware of all our decisional processes, they may be the probabilistic result of a series of diverse motivational factors, in large part inaccessible to introspection. In this respect, it could be claimed that on no occasion do we make a fully conscious choice between two or more options.22 This doubt was first raised by David Hume.23 According to the most concise formulation of this idea (Wittgenstein 1953, §§ 611-660), we usually (and naively) suppose that the “voluntary” actions we perform stem from rational and self-aware procedures; but this is perhaps not so. On closer examination, says Wittgenstein, what we call “voluntary” is any action which we are not surprised to have performed. In such cases we believe it to have been determined by our own free will. In this sense, it can be asserted that the unconscious reigns over our whole mental existence.
NOTES

1. Kihlstrom (1987).
2. Whyte (1960); Ellenberger (1970).
3. Ellenberger (1970).
4. Grünbaum (1984); Holt (1989); Macmillan (1991).
5. Loftus and Ketcham (1994); Ofshe and Watters (1994). See also this volume, p. 88.
6. McNally (2003).
7. Spence (1994).
8. Peterfreund (1978).
9. Russell ([1912] 1967).
10. Gallup (1977).
11. Lewis and Brooks-Gunn (1979).
12. Dodds (1951); Lurjia (1976).
13. Edelman (1989).
14. Polanyi (1966).
15. James ([1890] 1984); Ryle (1949).
16. Bem (1962); Nisbett and Wilson (1977).
17. Kahneman et al. (1982).
18. Nisbett and Wilson (1987).
19. Lackner (1973).
20. Weiskrantz (1986).
21. Festinger (1967).
22. Wegner (2002).
23. Hume ([1739-40] 2000).
CHAPTER 12

SELF-DECEPTION AND HYPOTHESIS TESTING

Alfred R. Mele
According to a traditional view, self-deception is an intrapersonal analogue of stereotypical interpersonal deception.1 In the latter case, deceivers intentionally deceive others into believing something, p, and there is a time at which the deceivers believe that p is false while their victims falsely believe that p is true. If self-deception is properly understood on this model, self-deceivers intentionally deceive themselves into believing something, p, and there is a time at which they believe that p is false while also believing that p is true. In Mele (2001), I criticize the traditional view and defend an alternative, deflationary view, according to which self-deception does not entail any of the following: intentionally deceiving oneself; intending (or trying) to deceive oneself; intending (or trying) to make it easier for oneself to believe something; concurrently believing each of two explicitly contradictory propositions. I also argue that, in fact, ordinary instances of self-deception do not include any of these things. Of course, simply falsely believing that p in the absence of deception by anyone else is not sufficient for self-deception. If it were, we would be self-deceived whenever we make unmotivated arithmetical mistakes. That is why motivation figures prominently in the literature on self-deception. This chapter provides a synopsis of my view of self-deception and of some of the support that it finds in empirical work on lay hypothesis testing.
1. KINDS AND MEANS OF SELF-DECEPTION

Elsewhere, I have distinguished between what I call straight and twisted cases of self-deception.2 In straight cases, which have dominated the literature, people are self-deceived in believing something that they want to be true—for example, that their children are not using illegal drugs. In twisted cases, people are self-deceived in believing something that they want to be false (and do not also want to be true). For example, an insecure, jealous husband may believe that his wife is having an affair despite having only thin evidence of infidelity and despite his not wanting it to be the case that she is so engaged.

Some illustrations of ways in which our desiring that p can contribute to our believing that p in instances of straight self-deception will be useful.3 Often, two or more of the phenomena I describe are involved in an instance of self-deception.
1. Negative Misinterpretation. Our desiring that p may lead us to misinterpret as not counting (or not counting strongly) against p data that we would easily recognize to count (or count strongly) against p in the desire’s absence. For example, Rex just received a rejection notice on a journal submission. He hopes that the rejection was unwarranted, and he reads through the referees’ comments. Rex decides that the referees misunderstood two important but complex points and that their objections consequently do not justify the rejection. However, the referees’ criticisms were correct, and a few days later, when Rex rereads his paper and the comments in a more impartial frame of mind, it is clear to him that this is so.

2. Positive Misinterpretation. Our desiring that p may lead us to interpret as supporting p data that we would easily recognize to count against p in the desire’s absence. For example, Sid is very fond of Roz, a college classmate with whom he often studies. Because he wants it to be true that Roz loves him, he may interpret her declining his invitations to various social events and reminding him that she has a steady boyfriend as an effort on her part to “play hard to get” in order to encourage Sid to continue to pursue her and prove that his love for her approximates hers for him. As Sid interprets Roz’s behavior, not only does it fail to count against the hypothesis that she loves him, it is evidence that she does love him. This contributes to his believing, falsely, that Roz loves him.

3. Selective Focusing/Attending. Our desiring that p may lead us to fail to focus attention on evidence that counts against p and to focus instead on evidence suggestive of p. Beth is a twelve-year-old whose father died recently. Owing partly to her desire that she was her father’s favorite, she finds it comforting to attend to memories and photographs that place her in the spotlight of her father’s affection and unpleasant to attend to memories and photographs that place a sibling in that spotlight. Accordingly, she focuses her attention on the former and is inattentive to the latter. This contributes to Beth’s coming to believe—falsely—that she was her father’s favorite child. In fact, Beth’s father much preferred the company of her brothers, a fact that the family photo albums amply substantiate.

4. Selective Evidence-Gathering. Our desiring that p may lead us both to overlook easily obtainable evidence for ~p and to find evidence for p that is much less accessible. For example, Betty, a political campaign staffer who thinks the world of her candidate, has heard rumors from the opposition that he is sexist, but she hopes he is not. That hope motivates her to scour his past voting record for evidence of his political correctness on gender issues and to consult people in her own campaign office about his personal behavior. Betty may miss some obvious, weighty evidence that her boss is sexist—which he in fact is—even though she succeeds in finding less obvious and less weighty evidence for her favored view. As a result, she may come to believe that her boss is not sexist. Selective evidence-gathering may be analyzed as a combination of hyper-sensitivity to evidence (and sources of evidence) for the desired state of affairs and blindness—of which there are, of course, degrees—to contrary evidence (and sources thereof).
In none of these examples does the person hold the true belief that ~p and then intentionally bring it about that he or she believes that p. Yet, assuming that these people acquire relevant false, unwarranted beliefs in the ways described, these are garden-variety instances of self-deception.4 Rex is self-deceived in believing that his article was wrongly rejected, Sid is self-deceived in believing certain things about Roz, and so on.

We can understand why, owing to her desire that her father loved her most, Beth finds it pleasant to attend to photographs and memories featuring her as the object of her father’s affection and painful to attend to photographs and memories that put others in the place she prizes. But how do desires that p trigger and sustain the two kinds of misinterpretation and selective evidence-gathering? It is not as though these activities are intrinsically pleasant, as attending to pleasant memories, for example, is intrinsically pleasant. Attention to some sources of unmotivated biased belief sheds light on this issue. Several such sources have been identified,5 including the following two:

1) Vividness of information. A datum’s vividness for us often is a function of such things as its concreteness and its sensory, temporal, or spatial proximity. Vivid data are more likely to be recognized, attended to, and recalled than pallid data. Consequently, vivid data tend to have a disproportional influence on the formation and retention of beliefs.6
2) The confirmation bias. People testing a hypothesis tend to search (in memory and the world) more often for confirming than for disconfirming instances and to recognize the former more readily.7 This is true even when the hypothesis is only a tentative one (and not a belief one has). People also tend to interpret relatively neutral data as supporting a hypothesis they are testing.8
Although sources of biased belief apparently can function independently of motivation, they also may be triggered and sustained by desires in the production of motivationally biased beliefs.9 For example, desires can enhance the vividness or salience of data. Data that count in favor of the truth of a proposition that one hopes is true may be rendered more vivid or salient by one’s recognition that they so count. Similarly, desires can influence which hypotheses occur to one and affect the salience of available hypotheses, thereby setting the stage for the confirmation bias.10 Owing to a desire that p, one may test the hypothesis that p is true rather than the contrary hypothesis. In these ways and others, a desire that p may help produce an unwarranted belief that p.
2. A THEORY OF LAY HYPOTHESIS TESTING

An interesting recent theory of lay hypothesis testing is designed, in part, to accommodate self-deception. I explore it in Mele (2001), where I offer grounds for caution and moderation and argue that a qualified version is plausible.11 I call it the FTL theory, after the authors of the two articles on which I primarily drew, Friedrich (1993) and Trope and Liberman (1996). Here, I offer a thumbnail sketch.

The basic idea of the FTL theory is that a concern to minimize costly errors drives lay hypothesis testing. The errors on which the theory focuses are false beliefs. The cost of a false belief is the cost, including missed opportunities for gains, that it would be reasonable for the person to expect the belief—if false—to have, given his desires and beliefs, if he were to have expectations about such things.

A central element of the FTL theory is a “confidence threshold”—or a “threshold”, for short. The lower the threshold, the thinner the evidence sufficient for reaching it. Two thresholds are relevant to each hypothesis: “The acceptance threshold is the minimum confidence in the truth of a hypothesis,” p, sufficient for acquiring a belief that p “rather than continuing to test [the hypothesis], and the rejection threshold is the minimum confidence in the untruth of a hypothesis,” p, sufficient for acquiring a belief that ~p “and discontinuing the test”.12 The two thresholds often are not equally demanding, and acceptance and rejection thresholds respectively depend “primarily” on “the cost of false acceptance relative to the cost of information” and “the cost of false rejection relative to the cost of information”. The “cost of information” is simply the “resources and effort” required for gathering and processing “hypothesis-relevant information”.13 Confidence thresholds are determined by the strength of aversions to specific costly errors together with information costs. Setting aside the latter, the stronger one’s aversion to falsely believing that p, the higher one’s threshold for belief that p (see the schematic sketch below). These aversions influence belief in a pair of related ways. First, because, other things being equal, lower thresholds are easier to reach than higher ones, belief that ~p is a more likely outcome than belief that p, other things being equal, in a hypothesis tester who has a higher acceptance threshold for p than for ~p. Second, the aversions influence how we test hypotheses—for example, whether we exhibit the confirmation bias—and when we stop testing them (owing to our having reached a relevant threshold).14

Friedrich claims that desires to avoid specific errors can trigger and sustain “automatic test strategies”,15 which supposedly happens in roughly the nonintentional way in which a desire that p results in the enhanced vividness of evidence for p. In Mele (2001, pp. 41-49, 61-67), I argue that a person’s being more strongly averse to falsely believing that ~p than to falsely believing that p may have the effect that he primarily seeks evidence for p, is more attentive to such evidence than to evidence for ~p, and interprets relatively neutral data as supporting p, without this effect’s being mediated by a belief that such behavior is conducive to avoiding the former error. The stronger aversion may simply frame the topic in a way that triggers and sustains these manifestations of the confirmation bias without the assistance of a belief that behavior of this kind is a means of avoiding particular errors.
Similarly, having a stronger aversion that runs in the opposite direction may result in a skeptical approach to hypothesis testing that in no way depends on a belief to the effect that an approach of this kind will increase the probability of avoiding the costlier error. Given the aversion, skeptical testing is predictable independently of the agent’s believing that a particular testing style will decrease the probability of making a certain error.
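To fix ideas, the threshold mechanism can be set out schematically. The display below is a minimal illustrative rendering, not a formalization given by Friedrich or by Trope and Liberman; the symbols, and the assumption that the thresholds are increasing in the relative costs of the two errors, are supplied for the example only.

\[
\text{believe } p \text{ if } c(p) \geq A_p; \qquad
\text{believe } \neg p \text{ if } c(\neg p) \geq R_p; \qquad
\text{otherwise, continue testing,}
\]

where $c(\cdot)$ is the tester’s current confidence, and

\[
A_p = f\!\left(\frac{\text{cost of falsely believing } p}{\text{cost of information}}\right),
\qquad
R_p = f\!\left(\frac{\text{cost of falsely believing } \neg p}{\text{cost of information}}\right)
\]

for some increasing function $f$. On this rendering, a strong aversion to falsely believing that ~p (say, that one’s child uses drugs) raises $R_p$ while leaving $A_p$ comparatively low, so the desired belief that p is the easier terminus to reach; in twisted cases the asymmetry simply runs the other way.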
The FTL theory applies straightforwardly to both straight and twisted self-deception. Friedrich writes:

a prime candidate for primary error of concern is believing as true something that leads [one] to mistakenly criticize [oneself] or lower [one’s] self-esteem. Such costs are generally highly salient and are paid for immediately in terms of psychological discomfort. When there are few costs associated with errors of self-deception (incorrectly preserving or enhancing one’s self-image), mistakenly revising one’s self-image downward or failing to boost it appropriately should be the focal error.16
Here, he plainly has straight self-deception in mind, but he should not stop there. Whereas for many people it may be more important to avoid acquiring the false belief that their spouses are having affairs than to avoid acquiring the false belief that they are not so engaged, the converse may well be true of some insecure, jealous people. The belief that one’s spouse is unfaithful tends to cause significant psychological discomfort. Even so, avoiding falsely believing that their spouses are faithful may be so important to some people that they test relevant hypotheses in ways that, other things being equal, are less likely to lead to a false belief in their spouses’ fidelity than to a false belief in their spouses’ infidelity. Furthermore, data suggestive of infidelity may be especially salient for these people and contrary data quite pallid by comparison. Don Sharpsteen and Lee Kirkpatrick observe that “the jealousy complex”—that is, “the thoughts, feelings, and behavior typically associated with jealousy episodes”—is interpretable as a mechanism “for maintaining close relationships” and appears to be “triggered by separation, or the threat of separation, from attachment figures”.17 It certainly is conceivable that, given a certain psychological profile, a strong desire to maintain one’s relationship with one’s spouse plays a role in rendering the potential error of falsely believing one’s spouse to be innocent of infidelity a “costly” error, in the FTL sense, and more costly than the error of falsely believing one’s spouse to be guilty. After all, the former error may reduce the probability that one takes steps to protect the relationship against an intruder. The FTL theory provides a basis for an account of both straight and twisted self-deception.18

It is often held that emotions have desires as constituents. Even if that is so, might emotions contribute to some instances of self-deception in ways that do not involve a constituent desire’s making a contribution? Suppose that Art is angry at Bob for a recent slight. His anger may prime the confirmation bias by suggesting an emotion-congruent hypothesis about Bob’s current behavior—for example, that Bob is behaving badly again—and it may increase the salience of data that seem to support that hypothesis.19 There is evidence that anger tends to focus attention selectively on explanations in terms of “agency”, as opposed to situational factors.20 Perhaps Art’s anger leads him to view Bob’s behavior as more purposeful and more indicative of a hostile intention than he otherwise would. If anger has a desire as a constituent, it is, roughly, a desire to lash out against the target of one’s anger. Possibly, anger can play the biasing roles just mentioned without any constituent desire’s playing them. If an emotion can do this, perhaps an emotion may contribute to an instance of self-deception that involves no desires at all as significant biasing causes.21 It is conceivable, perhaps, that Art enters self-deception in acquiring the belief that Bob is behaving badly now, that the process that results in this belief features his anger’s playing the biasing roles just described, and that no desires of his have a biasing effect in this case. If it is assumed that Art believes that Bob is behaving badly despite having stronger evidence for the falsity of that hypothesis than for its truth, an FTL theorist will find it plausible that Art had a lower threshold for acceptance of that hypothesis than for rejection of it, that the difference in thresholds is explained at least partly in terms of relevant desires, and that this difference helps to explain Art’s acquiring the belief he does. But this position on Art’s case is debatable, and I leave the matter open.
3. A PROTO-ANALYSIS OF SELF-DECEPTION AND THE IMPARTIAL OBSERVER TEST

Although I have never offered a conceptual analysis of self-deception, I have suggested the following proto-analysis: people enter self-deception in acquiring a belief that p if and only if p is false and they acquire the belief in a suitably biased way.22 The suitability at issue is a matter of kind of bias, degree of bias, and the nondeviance of causal connections between biasing processes (or events) and the acquisition of the belief that p.23

I suggest that, as self-deception is commonly conceived, something along the following lines is a test for a level of motivational or emotional bias appropriate to a person’s being self-deceived in acquiring a belief that p: Given that S acquires a belief that p and D is the collection of relevant data readily available to S during the process of belief-acquisition, if D were made readily available to S’s impartial cognitive peers and they were to engage in at least as much reflection on the issue as S does and at least a moderate amount of reflection, those who conclude that p is false would significantly outnumber those who conclude that p is true. Call this the impartial observer test.24 It is a test for a person’s satisfying the suitable bias condition on self-deception. A person’s passing the test is evidence of bias suitable for self-deception.

By “cognitive peers,” I mean people who are very similar to the person being tested in such things as education and intelligence. Cognitive peers who share certain relevant desires with the subject—as one’s spouse may share one’s desire that one’s child is not using illegal drugs—may often acquire the same unwarranted belief that the subject does, given the same data. But the relevant cognitive peers, for present purposes, are impartial observers. At least a minimal requirement for impartiality in the present context is that one neither share the subject’s desire that p nor have a desire that ~p. Another plausible requirement is that one not prefer avoidance of either of the following errors over the other: falsely believing that p and falsely believing that ~p. A third is that one not have an emotional stake in p’s truth or falsity. The test is a test for a level of motivational or emotional bias appropriate to self-deception. I take the suitability of the impartial observer test—or something similar, at least—to be implicit in the conceptual framework that informs common-sense judgments about what is and is not plausibly counted as an instance of self-deception.

4. HARD CASES?

Some readers may be thinking that even if many instances of self-deception may be accounted for along the lines sketched here, the really interesting instances feature agents who intentionally deceive themselves or who unconsciously know or believe the truth. Space constraints preclude a detailed assessment of this thought, but something should be said about it here.25

Amelie Rorty has offered a putative example of self-deception that may seem to speak strongly in favor of the presence of unconscious true beliefs in some cases of self-deception.26 Dr. Androvna, a cancer specialist, “has begun to misdescribe and ignore symptoms [of hers] that the most junior premedical student would recognize as the unmistakable symptoms of the late stages of a currently incurable form of cancer.” She had been neither a particularly private person nor a financial planner, but now she “deflects [her friends’] attempts to discuss her condition [and] though young and by no means affluent, she is drawing up a detailed will.” What is more, “never a serious correspondent, reticent about matters of affection, she has taken to writing effusive letters to distant friends and relatives, intimating farewells, and urging them to visit her soon.”

If I had read Rorty’s vignette out of context, I would have been confident that Androvna knew—consciously—that she had cancer but did not want to reveal that to others. That hypothesis certainly makes good sense of the details offered. Even so, it is conceivable that Androvna is self-deceived. If she is, what explains the detailed will and the effusive letters? Some will suggest that, “deep down,” Androvna knows that she is dying and that this knowledge accounts for these activities. Assuming that it is conceivable that Androvna does not consciously believe that she has cancer in the circumstances that Rorty describes, is it also conceivable that she does not unconsciously believe this either? Yes, it is. Androvna’s not believing, unconsciously or otherwise, that she has the disease is consistent with her consciously believing that there is a significant chance that she has it, and that belief, in conjunction with relevant desires, can lead her to make out a will, write the letters, and deflect questions. (Notice that she may be self-deceived in believing that there is only a significant chance that she has cancer.)

Given Rorty’s description of the case and the assumption that Androvna lacks the conscious belief that she has cancer, is it more likely (1) that she believes “deep down” that she has the disease (has a “type 1” cancer-belief) or (2) that she consciously believes that there is a significant chance that she has cancer without also believing, deep down or otherwise, that she has it (has a “type 2” cancer-belief)? Base rate information is relevant here. My students know that there are a great many more blue collar workers than lawyers. Yet, when I ask them whether a man wearing a nice suit and a tie is more likely to be a lawyer or a blue collar worker, most of them answer, “a lawyer”—at least until the relevance of base rates is made salient (a worked example is given at the end of this section). What are the relative frequencies of type 1 and type 2 beliefs (i.e., “deep down,” unconscious beliefs and beliefs that there is a significant chance that p that fall short of being beliefs that p)?27 Until one has at least a partial basis for an answer to this question that would help underwrite the judgment that Androvna believes deep down that she has cancer, one is not entitled to be confident that she has such a belief.

Plainly, we have and act on a great many type 2 beliefs. For many of us, such beliefs help to explain why we purchase home insurance, for example, or take an umbrella to work when we read in the morning paper that there is a thirty percent chance of rain. If there is anything approaching comparably weighty evidence of frequent type 1 beliefs, I am not aware of it. One may ask why, if Androvna believes that there is a significant chance that she is stricken with cancer, she does not seek medical attention. Recall that she knows the type of cancer at issue to be incurable; she may see little point in consulting fellow cancer specialists. Setting that detail aside, procrastination about seeking medical attention is, unfortunately, an all too familiar phenomenon, and it does not require type 1 beliefs. People often wait too long to act on their type 2 beliefs in this sphere. Even a story like Androvna’s—one designed to make it very plausible that a crucial unconscious true belief is at work—can be accommodated by the view of self-deception that I have sketched.28
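The base-rate point in the lawyer example can be made concrete with Bayes’ theorem. The following is a worked illustration; the numbers are assumptions chosen for the example, not figures given in the chapter.

\[
P(L \mid S) = \frac{P(S \mid L)\,P(L)}{P(S \mid L)\,P(L) + P(S \mid B)\,P(B)},
\]

where $L$ is “the man is a lawyer”, $B$ is “the man is a blue collar worker”, and $S$ is “the man wears a nice suit and tie”. Suppose, say, that $P(L) = 0.02$, $P(B) = 0.98$, $P(S \mid L) = 0.9$, and $P(S \mid B) = 0.05$. Then

\[
P(L \mid S) = \frac{0.9 \times 0.02}{0.9 \times 0.02 + 0.05 \times 0.98} = \frac{0.018}{0.067} \approx 0.27.
\]

Even with evidence this diagnostic, the low base rate keeps the probability that the man is a lawyer well below one half. Analogously, absent some estimate of the relative frequencies of type 1 and type 2 beliefs, one has no basis for confidence that Androvna’s case involves a “deep down” belief rather than a conscious significant-chance belief.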
NOTES

1. For citations of this tradition in philosophy, psychology, psychiatry, and biology, see Mele (2001, p. 125, n. 1). Stereotypical interpersonal deception does not exhaust interpersonal deception.
2. Mele (1999, 2001).
3. See Mele (2001, pp. 26-27).
4. If, in the way I described, Betty acquires or retains the false belief that her boss is not sexist, it is natural to count her as self-deceived. This is so even if, owing to her motivationally biased evidence-gathering, the evidence that she actually has does not weigh more heavily in support of the proposition that her boss is sexist than against it.
5. See Mele (2001, pp. 28-31).
6. See Nisbett and Ross (1980, p. 45).
7. See Baron (1988, pp. 259-265).
8. See Trope et al. (1997, p. 115).
9. I develop this idea in Mele (1987, chapter 10; 2001). Kunda (1990) develops the same theme, concentrating on evidence that motivation sometimes primes the confirmation bias. Also see Kunda (1999, chapter 6).
10. For motivational interpretations of the confirmation bias, see Friedrich (1993) and Trope and Liberman (1996, pp. 252-265).
11. See Mele (2001, pp. 31-49, 63-70, 90-91, 96-98, 112-118).
12. Trope and Liberman (1996, p. 253).
13. Ibid., p. 252.
14. Whether and to what extent subjects display the confirmation bias depends on such factors as whether they are given a neutral perspective on a hypothesis or, instead, the perspective of someone whose job it is to detect cheaters. See Gigerenzer and Hug (1992).
15. Friedrich (1993, p. 313).
16. Ibid., p. 314.
17. Sharpsteen and Kirkpatrick (1997, p. 627).
18. See Mele (2001, chapter 5).
19. There is evidence that “emotional states facilitate the processing of congruent stimuli” and that “attentional processes are involved in [this] effect” (Derryberry 1988, pp. 36, 38). Gordon Bower and Joseph Forgas review evidence that emotions make “emotionally congruent interpretations of ambiguous stimuli more available” (2000, p. 106).
20. See Keltner et al. (1993).
21. See Mele (2003).
22. Mele (2001, p. 120). The requirement that p be false is purely semantic. By definition, one is deceived in believing that p only if p is false; the same is true of being self-deceived in believing that p. The requirement does not imply that p’s being false has special importance for the dynamics of self-deception. Biased treatment of data may sometimes result in someone’s believing an improbable proposition, p, that happens to be true. There may be self-deception in such a case, but the person is not self-deceived in believing that p, nor in acquiring the belief that p. On a relevant difference between being deceived in believing that p and being deceived into believing that p, see Mele (1987, pp. 127-128).
23. On deviant and nondeviant causation in this connection, see Mele (2001, pp. 121-123).
24. This is a modified version of the test suggested in Mele (2003, p. 164). Discussion with Charles Hermes and Brian McLaughlin motivated the modifications. On a problem that some delusional beliefs may raise for this test, see Mele (forthcoming).
25. See also Mele (2001).
26. Rorty (1988, p. 11).
27. Those who prefer to think in terms of degree of belief should read such expressions of mine as “S believes that p” as shorthand for “S believes that p to a degree greater than 0.5 (on a scale from 0 to 1)”.
28. This article derives from earlier work of mine, primarily Mele (2001 and forthcoming).
CHAPTER 13

AUTONOMOUS AGENCY AND SOCIAL PSYCHOLOGY

Eddy Nahmias
Autonomous agents, like autonomous nations, are able to govern themselves. They are not controlled by external forces or manipulated by outside agents. They set goals for themselves, establishing principles for their choices and actions, and they are able to act in accord with those principles. Just as deliberative democracies legislate so as to balance competing interests, autonomous agents deliberate to reach some consistency among their competing desires and values. And just as good governments create their laws in the open without undue influence by covert factions, autonomous agents form their principles for action through conscious deliberation without undue influence by unconscious forces. Autonomous agents are self-controlled, not weak-willed; self-aware, not self-deceptive.

Given this description, it would be nice to be an autonomous agent. Indeed, we believe we are, for the most part, autonomous agents.1 However, there are threats to this commonsense belief. Some philosophers argue that if causal determinism is true then we lack free will and hence are not fully autonomous or responsible for our actions.2 One might also worry that if certain explanations of the mind-body relationship are true, then our conscious deliberations are epiphenomenal in such a way that we are not really autonomous. Philosophers also analyze political freedom and various socio-political threats to people’s autonomy. But other threats to autonomy are less often discussed, threats that are not metaphysical or political but psychological. These are threats based on putative facts about human psychology that suggest we do not govern our behavior according to principles we have consciously chosen. For instance, if our behavior were governed primarily by unconscious Freudian desires rather than by our reflectively considered desires, we would be much less autonomous than we presume. Or if our behaviors were the result of a history of Skinnerian reinforcement rather than conscious consideration, our actions would be shaped by our environment more than by our principles. Since the influence of Freud and Skinner has waned, we might feel we have escaped such threats to our autonomy from psychology. But, as I will explain below, more recent and viable theories and evidence from social psychology pose significant threats to autonomous agency.
1. AUTONOMOUS AGENCY

In this section I explain more fully what I mean by autonomous agency. In the following section, I outline the relevant research in social psychology, the ways it threatens our autonomy, and some responses to these threats.

On most conceptions, autonomous agency clearly requires freedom of action, which is the ability to act on one’s desires without external constraint. But such freedom is not sufficient for autonomy, since agents may also be internally constrained or influenced by external factors in ways more subtle than constraint or coercion. For instance, agents may act on addictions, phobias, or even strong passions that they would prefer not to move them. Or agents may be influenced by subtle manipulations (e.g., advertising) or by unrecognized situational factors (e.g., peer pressure) that, had the agents known about them, they would not want to influence them. These cases suggest that autonomy requires more than simply being free to act on one’s desires; it also requires some measure of internal consistency among one’s desires and values and some capacity to understand oneself and one’s situation. I will discuss these requirements in terms of principles and knowledge.3
1.1 Autonomy requires reflectively chosen principles

We adult humans are autonomous in ways that young children and animals are not. This is in part because we are often moved not simply by our strongest urge in a given situation; we are also able to consider our immediate desires in terms of our long-term goals, including our moral and social obligations.4 However, it would be inefficient, if not impossible, to reflect on our goals and obligations every time we act. Rather, we tend to deliberate about such matters calmly and reflectively to establish the ways we hope to respond without much reflection when faced with the relevant situations.5 This deliberation may take the form of conditional reasoning: when I am in situations of type X, I should do Y. For instance, if I find myself confronted with a person in need, I should respond by helping him; as I consider job applicants I should ignore irrelevant information and focus on the information I deem important. The details may be left somewhat vague, but the goal is to establish principles for action, reasons that will guide you to act in particular ways in certain types of situations so that you act consistently with your reflectively considered preferences, even when you do not or cannot consciously deliberate at the moment of action.

Hence, autonomous agency requires the ability to form and act on principles. The formation of these principles should occur through conscious deliberation without the influence of any unconscious motivations the agent would reject if she knew about them. And the principles should be as internally consistent as possible so that the agent does not betray some of her own principles by acting on others. Given this conception of autonomy, we can see that an agent’s autonomy is threatened to the extent that she is ignorant of factors that lead her to act against her principles—i.e., were she to recognize these factors, she would reject them. Similarly, an agent’s autonomy is threatened by rationalization: cases in which the agent finds herself acting against her consciously chosen principles but retrospectively comes up with reasons to justify her action. Here, actions inconsistent with prior principles are explained (to herself and others) in terms of post hoc principles adjusted to fit the actions.6 Of course, we are all subject to cases of ignorance and rationalization. But are we subject to these challenges to our autonomy to a greater extent than we realize?
1.2 Autonomy requires knowledge

These considerations further suggest that autonomy depends on the agent’s ability to know her principles and to know how to act on them. Ideally, an autonomous agent can articulate to herself and others her principles for action. At a minimum, she can recognize whether she is acting on reasons she would accept were she to consider them. An agent’s knowledge of her principles is unhelpful if she is unaware of the motivational states or external factors that lead her to act against them. Ignorance of such factors hinders the agent’s ability to counteract their influence in order to act consistently with her principles. Finally, the agent has to know how to get herself to act according to her principles, even when she feels more motivated to act against them. Hence, autonomy requires some capacity to introspect accurately on one’s motivations for action and to know why one acts as one does. To the extent that we do not know ourselves and our situations, we lack autonomy.

1.3 Autonomy comes in degrees

Notice that the above conditions are not meant to provide an analysis of autonomy; I do not know what the complete set of necessary and sufficient conditions for autonomy would be. Rather, I have described these conditions in terms of degrees of satisfaction. Intuitively, different agents seem to possess more or less autonomy, and an agent seems to exercise more or less autonomy in different decisions or actions. The degree to which agents are autonomous seems to align nicely with the degree to which they know their own principles for action and know how to act on those principles (though there are likely other conditions I have neglected). Accordingly, autonomy is compromised to the degree these conditions are compromised.7

Seeing autonomy in this way accords with our practices of attributing moral responsibility (e.g., praise and blame): we attribute responsibility to varying degrees depending in part on (1) the degree to which agents possess the relevant capacities to form and act on principles and (2) the degree to which agents have the opportunity to act on their principles. For instance, we hold children responsible to the degree that they have matured to understand their reasons for acting, and we hold adults responsible to the degree that they are in a position to know their obligations and to control their actions accordingly.8 Finally, this view allows us to see that empirical challenges to autonomy come in varying degrees. For instance, information about human psychology may suggest that we possess less autonomy than we think, without thereby suggesting we are entirely subject to forces beyond our control. Hence, one way to read the rest of my discussion is this: To the degree that social psychology’s theories and experimental results suggest limitations to the capacities required for autonomy, to that degree our autonomy is compromised. My main goal is to bring attention to these largely unnoticed empirical threats to autonomy and to examine their depth and scope. These points will reinforce an underlying theme of this chapter: that autonomy should be investigated empirically as well as conceptually and that exploring empirical challenges to autonomy and responsibility is at least as illuminating as debating would-be global threats such as determinism.9
2. THE THREAT OF SOCIAL PSYCHOLOGY

We have seen, then, that an agent lacks autonomy to the extent that she is unable to know her own motivations or reasons for action or to know what situational factors are influencing her to act against her principles. Furthermore, an agent lacks autonomy to the extent that she acts in conflict with principles she has adopted or acts on reasons she would reject if she were to consider them. Research in social psychology over the past few decades suggests significant limitations to these conditions of autonomy.10 Specifically, some social psychologists have interpreted their research as demonstrating three interrelated theses:
(1) The principle of situationism: Our behavior is influenced to a significant and surprising extent by external situational factors that we do not recognize and over which we have little control. These factors are often ones we would not want to have such influence on us if we knew about them.

(2) The disappearance of character traits: Internal dispositional states are not robust or stable across various situations; traditional character traits are not good predictors of behavior. Hence, consistent principles we endorse or aspire to develop tend to be ineffective given the power of certain situational factors.

(3) The errors of folk psychology and introspection: We generally do not know about the first two theses, and hence our explanations of our own and others’ behaviors are based on mistaken folk theories or inaccurate introspection. Our introspection does not give us privileged access to what motivates us to act.

2.1 Experiments and implications

In order to clarify these theses, I will summarize some experimental results.11 The most common experimental paradigm is simple. The psychologists manipulate certain factors that we would not expect to influence our behavior. But the experimental group, which is exposed to the manipulated factor, behaves significantly differently than the control group, indicating the influence of that factor on behavior. Meanwhile, behavior within each group is consistent enough to suggest that other factors—including personality traits—play no significant role in determining behavior. Often, subjects are then asked to explain why they behaved as they did. They do not mention the manipulated factor as having played any role; rather, they mention other, more “principled” reasons for their choices and actions. Some of these experiments involve relatively trivial behavior, in which case subjects may feel obliged to come up with rationalizations for behaviors that are not generally guided by reflectively chosen principles.12 However, other experiments involve situations in which most people presumably seek to act in accord with their reflectively considered principles. I will focus on experiments involving morally relevant behavior.
1) In 1964, when Kitty Genovese screamed for help for half an hour while being stabbed and raped in Queens, the forty people who witnessed the event did not help or even call the police. Social psychologists began testing whether this lack of intervention may have been due to a situational factor—the number of bystanders present—rather than, as the media explained it, the inherent apathy and callousness of New Yorkers. Numerous experiments showed that increasing the number of people who witness an emergency or a person in distress decreases the chances that anyone will intervene. For instance, when subjects heard a woman take a bad fall, 70% of solitary subjects went to help, but if subjects sat next to an impassive confederate, only 7% intervened.13 A plausible explanation is that when we are around others, our perception of the situation alters; perceived responsibility to act is diffused by the possibility that someone else will (or might) take action.14 Confounding the problem, if no one does take action, we construe the situation as less serious—if no one is reacting, it must not be so bad after all.15 But people do not recognize these group effects: “We asked this question every way we knew how: subtly, directly, tactfully, bluntly. Always we got the same answer. Subjects persistently claimed that their behavior was not influenced by the other people present. This denial occurred in the face of evidence showing that the presence of others did inhibit helping”.16 Rather, when asked, subjects, like the media, refer to dispositional traits (e.g., apathy or altruism) to explain their own and others’ behavior—traits that do not significantly correspond with people’s behavior—or they refer to their perception of the situation, which is skewed by a situational factor they do not recognize as influencing them. Presumably, people’s refusal to admit the influence of group effects is an indication that they do not accept it as a legitimate influence. Rather, the principles they articulate refer to dispositions to respond to those in need based primarily on how great the need is. Hence, it appears that people can fail to act on their principles because of situational factors they don’t recognize or accept as reasons to act.
2) In another experiment Princeton seminary students were asked to prepare a lecture either on the parable of the Good Samaritan or on their job prospects. Some subjects were told they were late getting to the lecture hall while others were not. En route, they came upon a man slumped in a doorway, coughing and groaning (as in the Biblical story). While 63% of the “early” subjects offered help, only 10% of the “late” subjects assisted the man in need. No significant correlations were found between the subjects’ helping behavior and their self-reported personality traits or the subject matter of their lecture.17 The “hurry” factor influenced some subjects by changing their perception of the situation: “because of time pressures, they did not perceive the scene in the alley as an occasion for ethical decision”.18 Again, people are not aware of the influence of this situational factor on their perception or their behavior.19 Even if people consider themselves altruistic, even if they prefer not to be affected by factors (like being in a hurry) that they view as irrelevant to helping those in need, it is difficult to see how they can consciously override the influence of factors they do not believe influence them.
3) A study by Isen and Levin showed that subjects who found a dime in a payphone, and hence got a “mood boost”, were then fourteen times more likely to help a passerby pick up dropped papers than subjects who did not find a dime.20 Again, no one predicts or desires that their helping behavior is influenced by such seemingly irrelevant factors.
4) Finally, the well-known Stanley Milgram obedience studies consistently found that about two-thirds of subjects will shock a man into unconsciousness during a learning experiment.21 Situationists suggest that the small incremental increases of the shocks make it difficult for subjects to find a justifiable point at which to question the authority of the experimenter.22 People certainly do not predict of themselves that they would continue well past the point that the learner appears to go unconscious to the 450-volt switch marked “Danger: XXX”. We assume our principles would preclude us from performing such actions. And no one predicts that so many others would do it either.23
In each of the above experiments people’s explanations for their own and others’ behavior refer to character traits or principled reasons while ignoring the situational factors that in fact make the significant difference. That is, people are ignorant of significant causes of their behavior and, if asked to explain their behavior, they tend to offer rationalizations, confabulating reasons to try to make sense of their behavior.24 Psychologist Roger Schank summarizes the general idea: “We do not know how we decide things […]. Decisions are made for us by our unconscious; the conscious [sic] is in charge of making up reasons for those decisions which sound rational”.25

To the extent that situational factors play a large role in determining our behavior, differences in people’s internal dispositions appear to play a relatively small role. That is, if a particular situational factor (such as finding a dime) can elicit similar behavior from most subjects, then, it is claimed, differences between individuals’ characters are correspondingly insignificant in producing their behavior. The question of whether this work in social psychology implies an elimination of character traits is controversial both within social psychology and in philosophical reactions to it.26 I will not try to adjudicate that debate. I will simply suggest that even if elimination is not warranted, to the extent that character traits play less of a role in our behavior than we ordinarily suppose, this would threaten our autonomy to the extent we do not then act on consistent principles we endorse.27

In addition to experiments like those described above, the evidence for the “disappearance of character traits” comes from experiments that test for correlations in subjects’ behavior across various situations designed to elicit trait-relevant responses (e.g., honesty, extroversion, impulsivity). For instance, Hartshorne and May tested students for honesty by examining their willingness to steal money, lie, and cheat on a test.28 While the subjects behaved similarly in repeated cases of any one of these situations, they did not behave consistently across these situations; for instance, knowing that a particular student cheated on a test did not reliably indicate whether he would steal or lie. In general, people predict a very high correlation between character trait descriptions and behaviors in relevant “trait-eliciting” situations, but such correlations in fact hit a very low “predictability ceiling”.29 In other words, if we want to understand why an agent does what he does in situation X, we are better off either looking at his past behavior in situations just like X or at the way most people behave in X than we are considering what we take to be the agent’s relevant character traits.

Such conclusions raise a problem for autonomy because the principles we adopt for our own behavior are usually not so situation-specific; rather they look more like character traits that will dispose us to feel and respond to a wide range of situations in an appropriate way. To act on a principle of honesty requires overcoming inclinations to lie or cheat across a range of situations where such behavior is inappropriate. We do not usually form situation-specific principles, such as the desire to help people specifically when we are not in a hurry.30 Rather, we aim to develop consistent tendencies to respond to the aspects of situations that call for traits such as altruism, generosity, courage, or diligence. The more these tendencies can be disrupted by unrecognized variations in situational factors that we don’t want to influence us, the less consistent we will be. And the less we recognize how easily these tendencies can be disrupted, the less we can make conscious efforts to shore them up in order to act consistently.31

To the extent that the dispositional traits we identify as our principles do not in fact correspond with consistent behavior, we are identifying ourselves with constructed concepts rather than actual motivational states. This limits the influence of our principles on our actions. If future behavior cannot be accurately predicted based on the possession of a character trait, then adopting a principle to act on that trait appears fruitless. I may reflectively endorse being generous but doing so appears to be ineffective to the extent that (1) I cannot predict how I will be influenced by situational factors that vary across different “giving” opportunities, and (2) there is no such trait as generosity to cultivate. Research in social psychology suggesting the disappearance of character traits thus threatens our autonomy to the extent that it suggests limitations on our capacities to form consistent principles for action and to know how to act on our principles. Such research also suggests a more systematic limitation to our knowledge.
If we tend to explain human behavior in terms of character traits while remaining ignorant of unnoticed situational influences, then this suggests that our explanations and predictions of our own and others’ behavior will often be inaccurate. Furthermore, on this view, our introspective reports about why we feel and act as we do are based on our (largely inaccurate) folk theories rather than any direct access to our own mental processes. Social psychologists argue that our folk psychology suffers from the fundamental attribution error, “people’s inflated belief in the importance of personality traits and dispositions, together with their failure to recognize the importance of situational factors in affecting behavior”.32 Then, they suggest, when we introspect on why we feel or act as we do, we grasp for the most plausible folk psychological explanation, rather than accurately introspecting on our own mental states: “The accuracy of subjective reports is so poor as to suggest that any [introspective] access that may exist is not sufficient to produce generally correct and reliable reports”.33

There are numerous social psychology experiments suggesting that our folk psychology and our introspection are inaccurate.34 I’ll describe just one to illustrate the threat to autonomy. Nisbett and Bellows asked college students to assess a candidate, Jill, for a counselling job.35 Subjects judged Jill according to four criteria (“likeability”, “intelligence”, “sympathy”, and “flexibility”) after they read a three-page application file with information about her life, her qualifications, and a prior interview with her. Five factors in Jill’s file were manipulated across different sets of subjects: (1) whether Jill was described as attractive, (2) whether she had superior academic credentials, (3) whether she had spilled coffee during her interview, (4) whether she had recently been in a car accident, and (5) whether the subject was told he or she would later meet Jill. After judging Jill on the four criteria, subjects were then asked to introspect about how much each of these factors had influenced their judgments on each of the criteria.36 The results show that the “actor” subjects’ introspective reports rarely correlated with the actual effects the various factors had, as determined by between-subject comparisons (with one exception: academic credentials did influence judgments of intelligence just as subjects reported) (see Figure 13.1). For instance, subjects reported that whether Jill had been in an accident had the greatest effect on their judgments of her sympathy and that her academic credentials had the greatest effect on how much they liked her; but when the manipulated factors were compared across subjects’ judgments, it turned out that these introspective reports were inaccurate. Instead, believing they would meet Jill had, by far, the strongest effect on subjects’ judgments of her sympathy (as well as her flexibility), and reading that Jill spilled coffee in her interview had the strongest effect on how much they liked her!
Figure 13.1. Nisbett and Bellows’ (1977) experiment. Bold line represents the actual effects each of the five manipulated factors (on X-axis) had on judgments (Y-axis) as measured across “actor” subjects; circle line represents introspective reports by “actor” subjects about the effects each of the factors had on their judgments; and triangle line represents predictions by “observer” subjects about the effects the factors would have on their judgments. Reproduced from Nisbett and Bellows (1977, 619).
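To make the between-subjects logic of such studies concrete, here is a minimal sketch in Python (with invented ratings; it illustrates the method, not the authors' actual analysis) of two of the quantities plotted in Figure 13.1: the "actual effect" of a manipulated factor, estimated by comparing subjects who saw the factor with subjects who did not, and the "reported effect", the average of subjects' own introspective estimates.

```python
# Illustrative sketch only: invented data, not Nisbett and Bellows' analysis.

def actual_effect(ratings_with, ratings_without):
    """Between-subjects estimate of a factor's effect on a judgment:
    mean rating from subjects whose file contained the factor (e.g., the
    car accident) minus mean rating from subjects whose file did not."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(ratings_with) - mean(ratings_without)

def reported_effect(self_reports):
    """Introspective estimate: average of subjects' own ratings of how
    much the factor influenced their judgment."""
    return sum(self_reports) / len(self_reports)

# Hypothetical "sympathy" ratings (1-7 scale) for the accident factor:
saw_accident = [5, 6, 5, 4, 6]   # subjects told of the accident
no_accident = [5, 5, 6, 4, 5]    # subjects not told of it
print(actual_effect(saw_accident, no_accident))     # small actual effect (~0.2)
print(reported_effect([3.0, 2.5, 3.5, 3.0, 2.8]))   # large reported effect (~3.0)
```

The two estimates can diverge sharply, which is exactly the pattern of divergence Figure 13.1 displays.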
A second group of “observer” subjects was asked simply to imagine they were judging some unspecified candidate for an unspecified job and, without actually viewing any information, they rated how important the five factors (appearance, academics, etc.) would be to their judgments on the four criteria (likeability, sympathy, etc.). These subjects’ predictions were statistically identical to—and hence as inaccurate as—the reports by the subjects who actually considered the information about Jill (see Figure 13.1). The authors interpret these results as showing that the introspecting subjects did not actually make their judgments based on the reasons they reported. Rather, they were unaware of how various factors influenced most of their judgments, and they retrospectively theorized about what influenced them in the same way the uninformed subjects theorized about what factors would influence them. And in both cases, the theories were generally mistaken. In the one case where the theories were accurate, it is because the connection between academic credentials and intelligence fits plausible cultural norms about such judgments: the authors explain, “[v]erbal reports should be correct wherever there exists a correct causal theory, correctly applied in the particular instance”,37 but “introspection played little part in subject reports”.38

Social psychologists hence suggest that our introspection about why we do what we do looks more like theoretical reasoning about what someone might do in our circumstances. Nisbett and Wilson conclude from such experiments: “It is frightening to believe that one has no more certain knowledge of the workings of one’s own mind than would an outsider with intimate knowledge of one’s history and of the stimuli present at the time the cognitive processes occurred”.39 Indeed it is frightening, because our autonomy is compromised if it turns out that what we think we are doing when we introspect on the reasons we act does not in fact involve reliable access to our principles, but instead involves “just so” stories derived from our inaccurate folk psychology. The threat appears graver still if psychologists develop theories to predict our own behavior better than we ourselves can, since prediction goes hand in hand with control.

Let me be more precise about the threats to autonomy posed by such research. In these experiments, when subjects offer reasons for their attitudes and actions, they are usually doing two things: (1) claiming that those factors have causally influenced them, and (2) explaining the factors they think justify their attitudes and actions. For example, when subjects report that knowing Jill has been in an accident made them more likely to see her as sympathetic, they are presumably explaining not only the influence of that factor but also their view that its influence is legitimate (e.g., she’ll understand suffering better). Conversely, when subjects report that Jill’s spilling coffee was not a factor in their rating of how much they liked her, they are reporting that they think it had no influence on them and also that they think it should have no influence on them—clumsiness is not a good reason to like (or dislike) Jill.40 In such cases, the reasons subjects report as the basis of their judgments often accord with principles they accept (or would accept). However, these experiments indicate that factors the subjects see as irrelevant are in fact influential while most factors they see as important are not. To the extent such results can be generalized, it seems that the reasons we offer as explanations for our behavior look more like retrospective rationalizations.

Furthermore, such experiments suggest limitations on our knowledge of how to influence our actions to accord with our principles. For each unrecognized effect on our motivations and actions, our deliberations about how to influence our future actions are correspondingly restricted. For instance, if you’re hiring someone for a job, you’d likely deliberate about which criteria are important to you and how you will determine if candidates meet those criteria. But it seems your judgments may often be affected by unrecognized factors that you would not want to be influential, such as the candidate’s appearance or whether they spill coffee at the interview.41 The less accurate your knowledge of which factors might influence you, the less you can secure the influence of the factors you do want to make a difference—that is, the less you can act on your principles.
2.2 Responses to the threat of social psychology

I have presented these experimental results and interpretations from the perspective of the social psychologists in order to highlight the challenges they pose to autonomous agency. But I will now suggest some avenues for responding to their interpretations so that we might defend our autonomy against these threats.42 My main goal has been to show why the degree to which we are autonomous agents is, in large part, an empirical question. But I will now suggest that it remains, in large part, an open question.

First of all, the extent of human actions to which the social psychology evidence applies remains unknown. For instance, these experiments usually involve complex experimental set-ups, designed precisely to “trick” subjects, who are doing things without being asked to attend to what they are doing. In some cases there is little reason to think subjects care about the activities being studied, so they may not really have what they see as reasons for what they do—they only come up with reasons when asked to.43 Perhaps some of our more considered actions are less subject to situational effects. And some of the situational effects will surely involve the salient aspects of the situations, the very ones we get ourselves into by consciously choosing to do so. For instance, when someone deliberatively settles on becoming an ER doctor, they are knowingly putting themselves in situations that will lead them to help people in distress. A reflective decision to become a professional philosopher includes a decision to be in many fewer such situations.

Furthermore, in the experiments subjects are asked to explain their judgments or responses only after they complete them. When the subjects report their experiences, they are not introspecting but retrospecting on processes they performed earlier during the task. Perhaps subjects’ poor memory of the thoughts they had accounts for some of the problems ascribed to their poor introspection.44 None of these experiments ask subjects to consider what principles they want to influence them before they act, in order to determine whether such deliberation can counteract the disconnect between subjects’ reported reasons and their actions. It would be helpful to test what would happen if subjects were asked to consider their principles of action before they engage in the relevant behavior.45 Experiments that test for the effect of prior introspection are needed to determine the influence of conscious deliberation about one’s principles.

In fact, some experiments have tried to determine the effect of one type of prior introspection on one’s behavior. These experiments, however, suggest that introspecting on the reasons for our attitudes can be disadvantageous. Specifically, they imply that when subjects introspect about why they feel the way they do about something, they may not access the actual reasons for their feelings but instead come up with what they think are plausible reasons; this disrupts the consistency between the feelings they then report and their subsequent actions, and they may even make choices they later regret. The upshot, according to Timothy Wilson, is that “self-reflection may not always be a beneficial activity”; indeed, “at least at times, the unexamined choice is worth making”.46

For example, one experiment asked dating couples to report their feelings about their relationship, including how long they thought it would last. But one set of subjects was first asked to introspect on the reasons for their feelings about their relationship, while control subjects just reported their attitudes without introspecting on them. The correlation between subjects’ reported attitudes (e.g., how long they thought the relationship would last) and their behavior (i.e., whether they were still dating several months later) was significantly lower (.10) for those subjects who introspected than for the controls (.62).47 As Wilson interprets it, the introspecting subjects came up with what they saw as plausible reasons for their feelings but did not have direct access to the actual reasons they felt as they did. These subjects then adjusted their reported attitudes to match the reasons they had thought up, so that their behavior, motivated by their “real” attitudes, did not then match their introspective reports. The problem raised by this experiment (and others like it48) is not only that it provides more evidence that we are often mistaken in our explanations for our actions and attitudes, but also that it suggests we often act on motivations we have not considered and which, in fact, we might not accept as principles for action.

Luckily, in this case, there is actually some experimental evidence that limits the scope of such research. In the dating study and others like it, when the experimenters controlled for subjects’ knowledge (e.g., whether the partners knew each other well), they found that knowledgeable subjects did not face the problem of inconsistency after introspecting on their attitudes. That is, the behavioral measure aligned much more closely with the attitudes subjects reported based on their introspected reasons.49 Such studies suggest that when subjects know and care about the relevant issue, they seem to have already considered the principles they want to motivate their behavior. So, unlike unknowledgeable subjects, they need not adjust their attitudes to match reasons they come up with on the spot. They already have good reasons, and their attitudes and behaviors reflect these reasons. In such cases, it seems we have deliberated about our reasons for feeling and acting as we do, and these deliberations have “sunk in” so that our actions align with our principles. Our introspection on our reasons does not disrupt our attitudes—rather, we revisit a pattern of reasoning we have already made our own. This is consistent with my account of autonomy, because it suggests that increased knowledge of the world and ourselves increases our ability to act in accord with our principles.50 And it suggests that reflective consideration of one’s principles is the first step in getting oneself to act on them.
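For readers who want the statistic behind these numbers made explicit, the following minimal sketch (with invented data; the studies' real datasets are not reproduced here) computes the attitude-behavior consistency measure: Pearson's r between each subject's reported attitude and a later behavioral outcome.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: attitude = predicted relationship longevity (1-9 scale);
# behavior = 1 if the couple was still dating at follow-up, 0 otherwise.
attitudes = [8, 3, 7, 5, 9, 2, 6, 4]
still_dating = [1, 0, 1, 0, 1, 0, 1, 1]
print(round(pearson_r(attitudes, still_dating), 2))  # about 0.73 for this sample
```

A low r, as in the introspecting group, means the reported attitudes carried little information about what subjects later did.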
When unknowledgeable subjects introspect on their reasons, they are trying to locate their reasons for the first time, trying to justify their attitudes; it is not surprising that their behavior does not immediately match these reasons. But such introspection may initiate the process of disrupting and overcoming habitual behavior and unconsidered motivations that don’t accord with the principles the agent would accept.51

My reason for examining the social psychology research has not been to conclude that it shows we are not autonomous agents. Rather, my aim has been to bring attention to some implications of social psychology that have not been fully examined and to illustrate that the scope of our autonomy can and should be examined empirically. While the research I have examined does suggest important limitations to our autonomy, more experiments are required to learn when and how conscious consideration of our principles makes a difference in our behavior. Too few experiments have dealt with attitudes and actions we care about, whose outcomes are of direct relevance to our significant interests and goals. And too few have examined how prior deliberation about how we want to be influenced affects what we in fact do. But regardless of these shortcomings, social psychology offers useful paradigms and starting points for the empirical investigation of what has for too long been designated a merely conceptual issue: the nature and scope of our autonomy.52
NOTES

1. Below I explain why autonomy is best understood as a property agents may possess and exercise to varying degrees.
2. The connections between the concepts of autonomy, free will, and moral responsibility are complicated, in part by the variety of ways philosophers use each of them. Though I will not defend it here, I believe an adequate account of autonomous agency will be sufficient as an account of free and morally responsible agency, and my discussion may be read in this way. See Taylor (2005).
3. For accounts of autonomy that suggest some of the features I outline, see Frankfurt (1988; 1999), Dworkin (1988), Taylor ([1977] 1982), Wolf (1990), Watson (1975), Mele (1995), Christman (1991), Bratman (1987) and Fischer and Ravizza (1998). Some of these accounts, however, are described in terms of conditions required for free agency or morally responsible agency rather than autonomous agency (see n. 2).
4. That we often take on such obligations without coercion suggests, somewhat paradoxically, that autonomy can include being governed by a form of external control so long as the agent autonomously accedes to such control.
5. Regret facilitates such deliberation about how to act differently next time around. Watson (1975) discusses the importance of acting on one’s values, “those principles and ends which [the agent]—in a cool and non-self-deceptive moment—articulates as definitive of the good, fulfilling, and defensible life” (p. 105).
6. Such rationalization may be difficult to distinguish from cases where the agent modifies her principles in light of her actions or their outcomes. One way to test whether an agent is rationalizing her actions with principles she does not really hold is to see whether or not she accepts those principles at other times and in relevantly similar situations.
7. Cases become complicated when the agent knowingly chooses to do something that will compromise his opportunity to satisfy these conditions in his later actions. For instance, he may autonomously take drugs knowing it will compromise his ability to act autonomously. Hence, we sometimes attribute responsibility to an agent for actions that were not autonomously performed (e.g., when he lacks knowledge or control), because those actions are “traceable” to actions or choices that he did perform autonomously (e.g., many cases of drunk driving).
8. This explains why we mitigate an agent’s responsibility when he is ignorant of his obligations or the consequences of his actions (where such ignorance is not itself culpable—see previous note), or when he is under extreme emotional duress or cognitive load.
9. Elsewhere I examine similar empirical challenges to autonomy (or free will) from other sciences that explore human nature, such as evolutionary psychology or neurobiology, each of which suggests certain limitations on our knowledge of and control over our motivations and behaviors. See, e.g., Libet (1983), Wegner (2002), and Nahmias (2002).
10. I should note that social psychology is a diverse field; the work I discuss represents selected elements within it, notably the “situationist” camp led by researchers such as Lee Ross, Richard Nisbett, Timothy Wilson and their collaborators. For objections to the situationist paradigm, see, e.g., Sabini, Siepmann, and Stein (2001), Krueger and Funder (2004), and Cotton (1980). For more detailed discussions of the implications of this research for free will and responsibility, see Nahmias (in prep.), Doris (2002) and Nelkin (in press).
11. For extensive reviews of such experiments, see Nisbett and Wilson (1977), Ross and Nisbett (1991), and Wilson (2002).
12. For example, the position effect: presented with identical products, consumers tend to select the ones to the right, but they reject the role of position in their decision, instead coming up with reasons why they think the selected product is better than the other (identical) ones. See Ross and Nisbett (1991, pp. 30-32). See Spinner (1981) for discussion of the distorting effects of demand conditions (i.e., subjects’ being asked to explain their actions).
13. See Latane and Darley (1968, 1970) and Ross and Nisbett (1991, p. 42). In another experiment, when solitary subjects heard an experimenter feign an epileptic seizure, 85% intervened; when subjects believed there was one other subject listening, 62% intervened; when they believed there were four other subjects, 31% intervened. And in all these experiments, interventions occurred faster when there were fewer subjects.
14. A woman who heard Genovese screaming explained: “I didn’t let my husband call the police; I told him there must have been 30 calls already”.
15. Post-experiment interviews suggest this interpretation: subjects in groups describe the emergencies in different terms (e.g., the fall victim’s “cries” become “complaints”) and notice them more slowly than subjects who are alone. Subjects also may want to avoid embarrassing themselves by taking action when no one else seems to think something should be done, though in some experiments subjects could not even see how others were reacting.
16. Latane and Darley (1970, p. 124).
17. Darley and Batson (1973).
18. Ibid., p. 108.
19. See Pietromonaco and Nisbett (1982); they describe to subjects the Good Samaritan study and then ask them to predict the outcomes. Subjects predict that the majority of seminary students would stop to help in all conditions, but that 20% more would help if their religious calling was based on a desire to help others. Subjects thought that being in a hurry would make no difference to whether the seminary students helped.
20. Isen and Levin (1972). See Miller (2004, appendix) for some confounding factors regarding these experiments.
21. See Milgram (1969).
22. See Ross and Nisbett (1991, pp. 56-58). This interpretation is strengthened by the fact that of the subjects who do stop the experiment, most do so when the “learner” appears to go unconscious (at 300 volts), a point where they can offer a justification for stopping. Many other factors have been shown not to correlate with subjects’ behavior in the experiment, including gender, age, socioeconomic status, national origin, and various personality measures.
23. After having the experiment described to them, psychologists predicted that only 2% of the subjects would continue to the end.
24. However, in some cases subjects act so inconsistently with their principles that they disassociate themselves from their behavior, expressing surprise and dismay at their actions. This happened with some of Milgram’s subjects, and in another famous situationist study, the Zimbardo prison experiments, where subjects given the role of prison guard became so cruel and aggressive that the experiment had to be terminated. Though some guards offered justifications for their behavior, one self-described pacifist said of his force-feeding a prisoner, “I don’t believe it is me doing it,” and another said, “I was surprised at myself. I was a real crumb” (Doris 2001, pp. 51-53). Comparisons with the behavior of American soldiers at Abu Ghraib seem apt (see Nelkin, in press). In fact, Zimbardo recently testified in their defense and wrote that the prison guards “had surrendered their free will and personal responsibility during these episodes of mayhem […]. [They] were trapped in a unique situation in which the behavioral context came to dominate individual dispositions, values and morality to such an extent that they were transformed into mindless actors alienated from their normal sense of personal accountability” (www.edge.org).
25. Quoted at www.edge.org.
26. See, e.g., Ross and Nisbett (1991, chapter 4); Doris (1998 and 2001); Harman (1999); Flanagan (1991, chapter 13); Merritt (2000); Miller (2003); Kamtekar (2004).
27. Despite some similarities between the situationists and the more radical behaviorist tradition—notably the shared claim that environmental conditions play a significant role in human behavior—the situationists do recognize the importance of internal cognitive states, namely people’s perceptual and motivational construal of their situations. However, these dispositions to perceive situations in certain ways are relatively specific and do not support the attribution of recognized character traits.
28. Hartshorne and May (1928).
29. Ross and Nisbett describe this “maximum statistical correlation of .30 between measured individual differences on a given trait dimension and behavior in a novel situation that plausibly tests that dimension [as] an upper limit. For most novel behaviors in most domains, psychologists cannot come close to that” (1991, p. 3).
30. This is not to say that we don’t aim to have principles that are open-ended and flexible, but we aim for them to be responsive to factors whose influence we accept or would accept, not to ones whose influence we don’t recognize and would not accept if we did recognize it.
31. It is this challenge to character traits that leads some philosophers—notably, Doris (1998 and 2001) and Harman (1999)—to suggest that social psychology challenges virtue ethical theories since they appear to require the possibility of robust character traits.
32. Ross and Nisbett (1991, p. 4). That we tend to refer to character traits in explaining behaviors—especially others’ behaviors—is a claim informed by experimental research, not just intuition (see ibid., chapter 5). The basic idea is that we attend to other people—dynamic and interesting as they are—and not to their environments—seemingly static and boring as they are—so that we attribute causal power to agents (the word “agent” itself suggests this) rather than to the situational factors that influence agents.
33. Nisbett and Wilson (1977, p. 233). Their explanation for our successful cases in predicting others’ behavior is that we usually interact with people in similar situations over time. And our explanations of our own behavior are accurate “due to the incidentally correct employment of [folk] theories” (ibid.).
34. See Nisbett and Wilson (1977); Ross and Nisbett (1991); Wilson (2002).
35. Nisbett and Bellows (1977).
36. Since each subject was exposed to four of the five factors, each offered 16 self-reports (one for each of the four factors’ effect on each of the four judgments about Jill—i.e., likeability, intelligence, sympathy, and flexibility).
37. Nisbett and Bellows (1977, p. 614).
38. Ibid., p. 623.
39. Nisbett and Wilson (1977, p. 257).
40. Reported causal influences will not always be offered as justifications; for instance, some people may recognize that attractiveness influences their judgments of “likeability” but not accept that influence as a good reason.
41. Ross and Nisbett (1991, pp. 136-138) discuss the “interview illusion”: whereas subjects believe interviews will correlate with performance at a rate of 0.6, the actual correlation between judgments based on interviews and later job performance is usually below 0.1.
42. One response I will not explore here is that this research is empirically or conceptually flawed. For some suggestions that it is, see references in n. 10.
43. Many studies deal with consumer choices, or puzzles, or quickly made choices. In fact, I have found only one experiment, a dating study discussed below, in which subjects are reasoning about something that will have any direct impact on their own lives.
44. Ericsson and Simon’s Protocol Analysis (1984) shows that concurrent introspection is much more accurate than retrospection.
45. For instance, in the job interview study, presumably, many subjects had at some point thought about the influence academic credentials should have on judgments of intelligence, and perhaps—just as they report—those thoughts did play a role in those judgments.
46. Wilson and Schooler (1991, p. 192). See also Wilson (2002).
47. Wilson et al. (1989). One might suggest that we do not expect or care to be able to offer accurate reasons for our romantic feelings (that the idea of justifying one’s love seems wrong). However, we do offer reasons (to ourselves, friends, and family) for why we are in the relationships we are in, and presumably, we want those reasons to bear some relation to reality, and we do not want the act of coming up with such reasons to disturb our actual feelings.
48. For instance, a study on voting behavior showed that subjects who introspected on why they liked or disliked candidates before reporting their attitudes thereby disrupted their reported attitudes so that their behavior correlated with their reported attitudes significantly less than it did in controls who did not introspect: -0.43 vs. 0.46 (Wilson et al. 1989).
49. In the dating study, for introspecting subjects who knew their partner well, behavior correlated with reported attitudes at 0.56, compared with -0.19 for subjects who had been dating a short time (Wilson et al. 1984). In the voting study cited in the previous note, for subjects who knew a good deal about politics, the correlation between reported attitudes and behavior was 0.53 (Wilson et al. 1989).
50. In fact, knowledge of the social psychology evidence itself might increase our autonomy (1) by improving our ability to attend to situational influences we tend to overlook (see Beaman, Barnes, and McQuirk 1978; see Pietromonaco and Nisbett 1982), (2) by informing us about how to manipulate our environment to increase the likelihood that we act on our principles (e.g., “channeling factors” are subtle changes in environment that can significantly increase our ability to carry out planned actions), and (3) by decreasing our confidence in the strength of our character traits so that we do not rely on them to guide us successfully through tempting situations (see Doris 1998).
51. See Holt (1989; 1993).
52. This chapter is drawn from ideas developed over a number of years and with feedback from many audiences and friends to whom I am grateful. I would like to thank, in particular, Owen Flanagan, Al Mele, Dana Nelkin, John Doris, and Manuel Vargas.
B. Consciousness
CHAPTER 14

THE COGNITIVE ROLE OF PHENOMENAL CONSCIOUSNESS

Tiziana Zalla
The term “consciousness” covers a number of distinct phenomena, which may be related to different cognitive functions and may call for different accounts. The notion of “phenomenal consciousness” (p-consciousness) denotes those qualitative properties that enable us to distinguish between different sensory experiences, such as, for example, the experience as of red and the experience as of green. The vividness of pain, pleasure, redness, the taste of red wine are examples of qualitative experiences. Although p-consciousness is a widespread phenomenon, it is often considered a mere epiphenomenon, with no causal role in cognition.1 Many non-human species may experience smell, colour or pleasure, even if these experiences are qualitatively different from ours.2 We know from neuropsychological dissociations between implicit and explicit knowledge that a large part of information processing can be accomplished unconsciously.3 Why, then, does some information acquire such a robustly phenomenal character? Does it play some role in cognitive processing? What kinds of higher cognitive functions might it be used for? In the present chapter, I will suggest answers to these questions and depict a possible biologically adaptive function for p-consciousness in cognition.

In order to naturalise the concept of consciousness and to understand the causal role of phenomenal properties, one has to show that it is a natural kind, i.e. a phenomenon which can be isolated as an explanandum and integrated in a scientific theory of mind. Here, I will first argue that two received ideas have to be abandoned: the idea of the pervasiveness and that of the apparent unity of phenomenal consciousness. I will claim that p-consciousness differs from other forms of consciousness: it is intimately associated with our perceptual non-conceptual experiences and therefore shares some modular properties with our perceptual systems.4 I will thus defend the thesis that p-consciousness should be considered as a way of labelling our experiences and internal mental states.5 Labelling is an essential function of cognitive processing, since it enables us to discriminate and to identify our own mental representations. The putative role of p-consciousness is to allow a metacognitive mechanism of Source Monitoring to detect the perceptual origin of our beliefs and mental states. As the neuropsychological literature shows, defective source monitoring might explain hallucinations, confabulations and erroneous self-attribution of mental states. The qualitative correlate of this mechanism will thus be presented as an evolutionary product which emerged to cope with environmental and behavioural complexity.
1. THE NOTION OF P-CONSCIOUSNESS

In this section, I will address the issue of whether all conscious states are intrinsically endowed with phenomenal properties. Although in everyday life we are often in a hybrid cognitive state, phenomenal qualities are distinct from propositional attitudes. Phenomenally conscious representations are non-conceptual and intimately associated with the output of modular perceptual systems.

Ned Block distinguished two main notions of consciousness: “phenomenal consciousness” and “access consciousness” (a-consciousness).6 While p-consciousness corresponds to the qualitative or experiential properties of mental states, the notion of “a-consciousness” refers to the availability of information for use in reasoning and rational guidance of speech and action, or to the awareness of a representational content. A person can have a propositional attitude, e.g., believing that snow is white, and yet not know what it is like to have the experience that snow is white.7 Conversely, the same belief might cause someone else to form a nonverbal image of a white expanse of snow or an auditory image of the words “snow is white”. However, although p-consciousness and subjectivity (e.g., sense of ownership, perspective taking) are experientially related, they need to be conceptually distinguished. I will therefore provide a more restricted notion of p-consciousness and distinguish p-consciousness from “qualia” which, in Nagel’s terms, are related to the notion of subjective experience.8 Nagel’s notion is based on the idea that subjectivity is in principle an irreducible phenomenon, and on the related view that p-consciousness is a pervasive feature of the mind. Contrary to this view, in accordance with representational theories of mind,9 I will assume that phenomenal properties are associated with non-conceptual contents of representations which are used as cues for those inferences leading to subjective experience or to higher-order thoughts. From this perspective, Nagel’s “what it is like” experience corresponds to those particular epistemic states that are inferentially promiscuous and could therefore be completely accounted for by one’s being in a second-order state relative to one of these kinds of first-order states. According to Tye:
[…] no belief could have phenomenal content. A content is classified as phenomenal only if it is non-conceptual and poised. Beliefs are not non-conceptual, and they are not appropriately poised. They lie within the cognitive system, rather than providing inputs to it.10

Similarly, Dretske wrote:

[Experiences] make one conscious of whatever properties the representation is a representation of and, if there is such, whatever objects [bearing C to the representation] these properties are properties of. That, if you will, is the representational theory of consciousness.11
According to this view, the experiential content corresponds to the intentional content of our non-conceptual representations, which are determined by external factors. The trouble with such an approach is that it has to explain how one is able to discern one’s own mental contents on the basis of internal clues, without calling for a correspondence with the external environment.

2. MODULARITY AND P-CONSCIOUSNESS

Most of the physiological processes going on in our body are achieved without involving any conscious experience or monitoring function. We are not conscious of our immune system, we do not feel each contraction of our stomach, we are not aware of maintaining our equilibrium at each moment. If we think of complex unconscious tasks like shape and object recognition, we may even wonder why cognition involves consciousness at all. Why are we sentient beings; why are we not unconscious, like robots? In this section, I will defend the thesis that the phenomenal properties of our experience mirror certain modular properties of neural computational processes.

Modularity is an overall conception of the cognitive system which, in Fodor’s version (1983), postulates two main distinct components: input analysis systems, responsible for perceptual representation functions, and central processes, responsible for comprehension, reasoning and belief fixation. According to Fodor, unconscious processes are achieved by domain-specific modules, which are characterised by encapsulation, relative inaccessibility, domain-specificity, mandatoriness, and automaticity. Moreover, they are biologically hardwired, associated with a fixed neural architecture, and follow well-defined sequences of development and specific patterns of neuropsychological breakdown. Given the modular nature of the perceptual systems, p-consciousness is thus mediated by encapsulated, automatic, mandatory processes which are insensitive to considerations of utility and deeply differ from the voluntary or deliberate character of higher-order conscious cognitive processes. Only the outputs of the modular systems are experienced as phenomenologically salient; these properties emerge at the interface between modular and central systems.

However, although phenomenal properties are the product of informationally segregated perceptual systems, the issue of the cognitive role of p-consciousness is related to the problem of representational unity, known as the binding problem.12 In the brain, different kinds of processing occur in different physical locations. For instance, colour analysis, shape recognition, movement and several other characteristics of visual scenes are detected in separate parts of the visual cortex. Nevertheless, our brain constructs a single and global view of the scene. The integration of these kinds of information requires a binding mechanism so that, for example, we are able to simultaneously assign colour, direction, movement and form to a single object, which can thus be conceptually identified. It has been experimentally observed that neurones located in different cortical areas may function synchronously13 and evidence from neurophysiology14 suggested that frequency locking between neurone groups could account for the integration of different features of a given perceived situation. This temporary synchronous neural activity has often been associated with consciousness. As Damasio puts it:

It is not enough for the brain to analyse the world into its component parts: the brain must bind together those parts that make whole entities and events, both for recognition and recall. Consciousness must necessarily be based on the mechanisms that perform the binding.15
Similarly, Crick and Koch claimed that the mechanisms of binding are the basic neurophysiological mechanisms underlying different forms of consciousness:

Our basic hypothesis at the neural level is that it is useful to think of consciousness as being correlated with a special type of activity of perhaps a subset of neurons in the cortical system. Consciousness can undoubtedly take different forms, depending on which parts of the cortex are involved, but we hypothesise that there is one basic mechanism (or a few) underlying them all.16
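As a purely illustrative toy (not a model drawn from the chapter, and with all numbers invented), the following sketch shows the synchrony idea in miniature: two simulated neurone groups whose spikes coincide far above chance would, on the binding-by-synchrony hypothesis, be coding features of one and the same object.

```python
import random

def spike_train(n_steps, rate, sync_with=None, coupling=0.0):
    """Generate a binary spike train; optionally biased to fire whenever
    a reference train fires (a crude stand-in for frequency locking)."""
    train = []
    for t in range(n_steps):
        if sync_with is not None and sync_with[t] and random.random() < coupling:
            train.append(1)
        else:
            train.append(1 if random.random() < rate else 0)
    return train

def synchrony(a, b):
    """Fraction of the first group's spikes that coincide with the second's."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    return both / max(1, sum(a))

random.seed(0)
colour_group = spike_train(1000, rate=0.1)
shape_group = spike_train(1000, rate=0.1, sync_with=colour_group, coupling=0.9)
motion_group = spike_train(1000, rate=0.1)  # independent: another "object"

print(synchrony(colour_group, shape_group))   # high: features bound together
print(synchrony(colour_group, motion_group))  # near the 0.1 chance level
```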
Contrary to this view, I claim that the binding mechanism essentially integrates intentional contents through a synchronous neural activity so as to create a representational unity. In this integrated form, different modality-specific non-conceptual representations are available for higher-order mechanisms, such as executive and monitoring functions, reasoning, planning, and decision-making, as well as for several metacognitive processes.17 This is in accordance with the notion that at least a portion of the human brain must house a “global workspace”,18 where representational contents are a-conscious.

Empirical evidence in favour of the modular nature of the phenomenal mind comes mainly from neurophysiological evidence and from several highly selective neuropsychological deficits. Cognitive disorders occurring after brain injuries sometimes reveal that specific aspects of consciousness may be selectively impaired. For instance, blindsight patients are blind in a certain area of their visual field; that is, their phenomenal experience is selectively impaired in the visual modality.19 However, although they are totally unaware of their residual visual capacity—they just claim they are “guessing” during visual tests—these patients are still able to perform visual processing tasks such as localising simple visual stimuli, elementary patterns or movements.20 Other types of neuropsychological syndromes (like amnesia, hemineglect, and agnosia) that may alter or suppress specific aspects of qualitative experience strongly suggest the existence of dissociation within the perceptual domains of information processing. As far as we can conclude from brain lesion studies, phenomenal properties appear to be associated with a specialised neural architecture. These selective syndromes support the idea that perceptual p-consciousness is not globally distributed: it is modality-specific and its modular properties mirror the organisation of perceptual input modules. Qualitative properties of experience mainly originate from functionally specialised modular systems, and it is indeed because of this functional segregation that they are endowed with a discriminative function. We can taste an apple, see an apple, smell an apple, but we never confuse the redness of an apple with its flavour or with its smell.

3. P-CONSCIOUSNESS AND FEELING OF KNOWING

In order to assign a cognitive role to p-consciousness, one has to avoid the conflation between the notion of p-consciousness and those of various forms of a-consciousness.21 One has thus to provide empirical evidence of the existence of a specific function and possibly of a distinct neural correlate for p-consciousness. In a commentary on Block (1995), Zalla and Palma have suggested that a common phenomenon, termed “Feeling of Knowing” (or “Tip of the Tongue” experience, when the item to be retrieved is a lexical one), could in fact be seen as empirically supporting the reality of this conceptual distinction.22 Indeed, although subjects experience difficulties in recalling some item they have “stored” in memory, they still have the distinct feeling of knowing it without being able to identify it. Several experiments indicate that the information is indeed present in memory: the majority of subjects, when cued or prompted with some partial information (such as the first letter of the word to be recalled), are able to retrieve the “missing” item with a rate of success that is generally above purely random guessing rates.23 In “Feeling of Knowing” situations, we are in a p-consciousness state without being simultaneously in an a-consciousness state, for while the experience is there, the representational or conceptual content is not available to conscious thinking, planning or reasoning.

Discussion of “Feeling of Knowing” phenomena in the literature revolves around two main axes of explanation: a trace-based view, which posits the existence of an internal monitoring system, and, alternatively, some form of retrieval procedure based on inferential and contextual knowledge.24 One possible hypothesis is that, because of brain damage, the monitoring system fails to create a second-order conscious state and to get access to the information. Indeed, neuropsychological studies on impaired patients have tentatively connected deficits in the reliability of “Feeling of Knowing” judgements with a metamemory impairment brought about by frontal lobe lesions.25 Although p- and a-consciousness normally come together, “Feeling of Knowing” cases seem to support the view that they are distinct states.
4. P-CONSCIOUSNESS AND ITS NEURAL CORRELATE

A claim taken very seriously in modern cognitive neuroscience is that p-consciousness is systematically associated with a specific physical neural state.26 Nevertheless, the property of consciousness emerging from the thesis of neural or functional complexities is not operational. It does not explain why every brain region does not equally contribute to consciousness, nor does it explain why brain damage may alter phenomenal experience selectively. Moreover, whenever p-consciousness is depicted as an emergent feature of some kind of a-consciousness machinery, neural organisation or informational integration, it often turns out to be an epiphenomenon. The issue here is that what is retained by selection is not p-consciousness itself, but the underlying neural or functional mechanism. P-consciousness is thus a fortuitous side effect and plays no causal role by itself in cognition.

According to Dehaene and Changeux, neural correlates of a-consciousness can be identified in a broad set of “workspace neurons” which, thanks to long-range excitatory axons, connect, for example, visual areas in the occipital cortex with frontal and parietal areas.27 The authors also state that there is a winner-take-all competition among the single representations to be broadcast in the global workspace; that is, single phenomenal representations compete to dominate the a-consciousness workspace. As also suggested by Block, since single p-conscious representations have their own neural correlate in distinct brain areas (e.g., experience as of motion in MT/V5; experience as of a face in the fusiform face area in the temporal lobe), p-consciousness and a-consciousness are differentially instantiated in the brain.28 The author concludes that the notion of a single neural correlate of consciousness does not make sense of the empirical data—in particular of the signal detection theory data—since, because of a subject’s motivation or expectations, the same experiential content may result in different reports and phenomenal experience.

Evidence of distinct functional and neural substrates for p-consciousness comes also from neuropsychological studies on attention. Moscovitch and Umiltà made a distinction between two kinds of attention subsystems, related to different neural correlates.29 This hypothesis supports the idea that attention mechanisms summoned by the “posterior subsystem” might be responsible for p-consciousness, whereas the “anterior subsystem”, corresponding to the effortful and voluntary deployment of attention, would be associated with several kinds of a-consciousness. These speculations are also compatible with Schacter’s DICE model30 and anatomical evidence which postulates a system for a-consciousness located in the frontal regions of the brain and a p-consciousness awareness system located in the posterior parietal cortex.31
5. THE FUNCTIONAL ROLE OF P-CONSCIOUSNESS

The hypothesis of a qualitative labelling role of p-consciousness finds empirical evidence in experimental studies on Source Monitoring. The psychologist M.K. Johnson and collaborators carried out several studies in which they showed that subjective and phenomenal qualities of representations are systematically used by subjects to discriminate between real and imagined events, and between autobiographical events and other kinds of information, such as knowledge and beliefs.32 The qualitative properties of certain kinds of representations (whether the richness and vividness of their perceptual details or their perspectival nature) contribute to the more general processes of discrimination, judgement and attribution of mental events. Distinctive phenomenal features, associated with functionally separate neural substrates, allow different inferential processes. The existence of distinct domains for semantic knowledge, autobiographical knowledge and body representations is particularly important for the construction of self-consciousness. Phenomenal and qualitative properties accompanying some mental states, such as perceptual, proprioceptive and autobiographical memory states, enable us to ascribe mental states and experiences to ourselves.33 As also stated by Tulving, memories originating from perceptions have more sensory and contextual details, whereas memories from thoughts bring more information about inferential conceptual operations.34 When memory information without qualitative phenomenal characteristics is recalled, it is experienced as knowledge or belief.

Johnson and collaborators claimed that the ability to discriminate different kinds of representations reflects the operation of a Source Monitoring mechanism that assigns ongoing mental events to a particular source origin.35 They identified three important types of source monitoring: External Source Monitoring, Internal Source Monitoring and Internal-External Reality Monitoring. For all these systems, there are multiple cues as to the source: sensory/perceptual information, contextual (spatial and temporal) information, semantic details, affect and cognitive operations. These authors also suggested that confusion about the nature of different mental representations, due to the absence of these phenomenal experiences, as well as impoverished or unreliable information on self-generated events, could lead to confabulation syndromes and misattribution. Experimental studies have shown that a Source-Memory deficit in schizophrenia might be closely linked to concomitant hallucinatory behaviour.36 When direct visual control is prevented and ambiguous visual feedback is provided, schizophrenic patients fail to distinguish their own hand movements from similar gestures produced by others. Consequently, they are not aware of their own actions and lack on-line monitoring of motor responses.
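To make the labelling hypothesis concrete, here is a schematic sketch (a caricature with invented feature scores and thresholds, not Johnson's actual model) of how phenomenal cues could feed a source-monitoring decision rule.

```python
def infer_source(sensory_detail, contextual_detail, cognitive_operations):
    """Crude decision rule in the spirit of the Source Monitoring framework:
    perceptual memories carry rich sensory and contextual detail, whereas
    records of thought carry traces of the operations that produced them.
    All inputs are scores in [0, 1]; the thresholds are arbitrary."""
    if sensory_detail > 0.6 and contextual_detail > 0.5:
        return "perceived"
    if cognitive_operations > 0.6:
        return "imagined or inferred"
    return "uncertain"

print(infer_source(0.9, 0.8, 0.1))  # vivid, contextualised -> "perceived"
print(infer_source(0.2, 0.3, 0.8))  # mostly cognitive operations -> "imagined or inferred"
```

On the chapter's hypothesis, degraded cues of this kind (as in schizophrenia or confabulation) would lead such a rule to return the wrong source.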
6. THE ADAPTIVE VALUE OF P-CONSCIOUSNESS

From an evolutionary perspective, the adaptive value of consciousness is a hard question. Whenever we assign a causal role to p-consciousness, we are faced with “the great paradox of epiphenomenalism”.37 Even if the phenomenal character of experience is real, p-consciousness may very well turn out to be epiphenomenally supervenient on an array of physical states and operations, each and every one of them having its own function and causation wholly separate from the quality of our experiences.

Epiphenomena are well known in evolutionary biology. Features that were not selected for, but result from the selection of other characteristics, are evolutionary epiphenomena. The most often mentioned example is the human chin, which appeared as a consequence of face and jaw reduction.38 Similarly, if p-consciousness is considered as a supervenient property of some neural states, it is nothing more than a fortuitous and neutral by-product of brain evolution, which has no effect on the survival of individuals.

In the present work, I have claimed that p-consciousness has an adaptive biological function, namely to label non-conceptual representations at the output of perceptual systems. Even if phenomenal representations are assigned a conceptual content once they reach central systems, phenomenal features are used to establish their source. The assumption that conscious states play a genuine causal role entails that such states are causally efficacious in virtue of their phenomenal properties, despite the fact that they are physically implemented. This means that any evolutionary account of these properties must capture the adaptive role played by the various kinds of phenomenal states, and not by a particular physical support or functional correlate. If we accept the hypothesis that p-consciousness has been directly produced by evolution to fulfil an adaptive function, then we have to assume (1) that p-consciousness is part of the phenotype and (2) that the neural states underlying phenomenal states only exist because the latter have an adaptive function. According to this view, p-consciousness is shaped by natural selection to perform its functions. Moreover, if phenomenal qualities were epiphenomena, it would be difficult to explain why our perceptions give rise to such a variety of phenomenal states corresponding to the representational features of the external world. From this perspective, phenomenal properties are what drove the evolution of cognitive systems towards increasing discriminatory capacities which, in the case of the Internal Source Monitoring, are crucially involved in the self-ascription of mental states and action.

As Sherry and Schacter put it, the semantic and episodic systems had phylogenetically different evolutions because they subserved functionally distinct adaptive purposes.39 The adaptive advantage of a multiple memory and learning system rests on the failure of a single system to give appropriate specific responses to all the different problems encountered in the environment. Procedural memory (perceptual and motor skills) is based on the unconscious detection and preservation of invariants over several different events, which are used to accomplish automatic and over-learned actions. By contrast, episodic memory emerged from an originally unified system of memory in order to solve domain-specific problems and to accomplish specific operations, such as storing self-related knowledge and enabling the self-ascription of mental states and action.

In light of these considerations, the cognitive role of p-consciousness in the processes of discriminating internal states and in self-attribution can be better understood. As Shoemaker noticed, our ability to communicate and to explain our behaviour goes with having direct access to our mental states.40 Hence, it is important that we have access not only to the content of our beliefs, desires and intentions, but also to our reasons for acting and believing.
Even if we usually are not explicitly aware of the feel or the look of things, we can become aware when the need arises, and use this tacit knowledge about the phenomenal appearance of things to justify, adjust or revise our perceptual beliefs. In order for such operations to be accomplished, we must preserve the origin (e.g., memorised vs. imagined; perceived vs. remembered) of those representations which are poised for use in reasoning and in the rational control of speech and action. Processes like the justification and revision of beliefs, especially perceptual beliefs, are sensitive to the qualitative features of non-conceptual representations. In everyday life, knowledge about the source of information contributes to the ability to exercise control over our own opinions, decisions and actions. If we remember the source of a “fact”, we have information that is important for evaluating its veridicality. The functional role of conscious remembering is striking when the subjective feeling of experience is absent, as in patients with amnesia. Although amnesics can guess about their past, they are unwilling to trust that information enough to act on it.

7. CONCLUSION

In the present chapter, I have argued that the cognitive role and the adaptive value of p-consciousness can be highlighted within the modular architecture of the peripheral perceptual systems and in relation to a Source Monitoring mechanism. Central cognitive processes are sensitive to the qualitative properties of experience, which are used to determine the source of higher-order representations. According to this hypothesis, p-consciousness was selected as a way of labelling some kinds of representations and carrying information about their origin. Such adaptive value might explain why it is so widespread among higher species. Perceptual p-consciousness could enable organisms to be sensitive to stimulus saliency relevant to their survival. The vividness of sensory experience might enable more reliable and faster responses, so that a selective pressure might have acted in favour of a more robustly phenomenological perceptual system. Organisms are thus better able to cope with the wide range of situations found in a complex ecological environment. P-consciousness should therefore be considered a proper phenotypic character upon which natural selection acts. Alternative accounts in terms of neural states, which treat qualitative properties as epiphenomenally supervenient, can hardly explain the fact that those properties correspond to intentional features of the external environment. Summarising, I have proposed an alternative to the epiphenomenal conception of p-consciousness, one that conceives the mind as endowed with a set of functionally specialised modules and a central system partially subserving a “global workspace”.41
NOTES

1. See, e.g., Churchland (1988); Dennett (1988; 1991).
2. See Nagel (1974).
3. Schacter (1991).
4. This first thesis is in accordance with Tye's and Dretske's notions of p-consciousness (Tye 1995; Dretske 1995).
5. See Zalla (1996).
6. Block (1995). See also this volume, p. 3, n. 3.
7. See also the “knowledge argument” from Jackson (1986).
8. Nagel (1974).
9. See Tye (1995); Dretske (1995).
10. Tye (1995, p. 138).
11. Dretske (1995, p. 104).
12. See the discussion of the unity of consciousness in this volume, chapter 15.
13. Singer (1993).
14. von der Malsburg and Schneider (1986).
15. Damasio (1989, p. 125).
16. Crick and Koch (1990, p. 266).
17. Metarepresentational theories of mind are also used to explain the ability of a person to ascribe mental states to herself and to others and to predict behaviour on the basis of such states (Leslie 1987; Sperber 1994).
18. Baars and Newman (1994). See also this volume, p. 204.
19. These patients suffer from brain damage, and their blind area corresponds precisely to the location of lesions in the primary visual cortex.
20. Weiskrantz (1986).
21. See Block (1995).
22. Zalla and Palma (1995).
23. See Hart (1965); Schacter (1983); Schacter and Worling (1985).
24. See Koriat (1993).
25. See Shimamura and Squire (1986); Janowsky, Shimamura and Squire (1989).
26. Edelman (1989); Damasio (1989); Crick and Koch (1990); Koch (2004).
27. Dehaene and Changeux (2004).
28. Block (2005).
29. Moscovitch and Umiltà (1989). Interestingly, William James (1890, pp. 393-394) defined attention as divided into kinds in various ways. First, he distinguished sensorial attention and intellectual attention; secondly, attention could be immediate, that is, when the stimulus is interesting in itself, or derived (apperceptive attention), that is, when the stimulus is in relation to something else. Attention could also be either passive, by reflex, non-voluntary and effortless, or active and voluntary. Voluntary attention, said James, is always derived, but sensorial and intellectual attention might be either passive or voluntary. Passive sensorial attention, to which James was referring, might be the essential feature of phenomenal consciousness.
30. Schacter (1989).
31. See Dimond (1976).
32. Johnson (1988); Johnson, Hashtroudi and Lindsay (1993).
33. A syndrome associated with deep lesions in the right posterior, non-linguistic hemisphere is the patient's denial of “ownership” of their paralysed left arm. Many anecdotes about this exist in the clinical literature. The clinical neuropsychology of “body schema” suggests that while we can conceptually represent our arms and legs as “objects”, our normal experience of them is as belonging to our self. We can suppose that the lack of proprioceptive qualitative experiences is the cause of one's misattribution of body parts.
34. Tulving (1985).
35. See Johnson et al. (1988, 1993); Suengas and Johnson (1988).
36. See Daprati et al. (1997); Franck et al. (2001).
37. Kim (1993).
38. Although the chin is not an organ shaped by evolution in the first place, this does not preclude it from acquiring a secondary function when fully developed.
39. Sherry and Schacter (1987).
40. Shoemaker (1991).
41. Many thanks to Tim Bayne for helpful comments on an earlier version of this chapter.
CHAPTER 15

THE UNITY OF CONSCIOUSNESS: A CARTOGRAPHY

Tim Bayne
One of the many fault-lines within accounts of consciousness concerns the unity of consciousness. Some theorists claim that consciousness is unified—indeed, some theorists insist that consciousness is essentially unified. Other theorists assert that the unity of consciousness is an illusion, and that consciousness is often, if not invariably, disunified. Unfortunately, it is rare for proponents of either side of the debate to explain what the unity of consciousness might involve. What would it mean for consciousness to be unified? In this chapter I provide a brief cartography of the unity of consciousness. In the next section I introduce a number of unity relations that can hold between conscious states, and in the following sections I show how these unity relations can be used to construct various conceptions of the unity of consciousness—what I call unity theses. These unity theses provide us with a set of reference points by means of which we can orient discussions of the (dis)unity of consciousness.
1. UNITY RELATIONS

A number of unity relations structure consciousness. Here I will introduce four such relations.1 The first unity relation is subject unity. Conscious states are subject unified when they are had by the same subject of experience. My current experiences are mutually subject unified, and your current experiences are mutually subject unified.

A second unity relation—or rather, type of unity relation—is representational unity. Representational unity concerns the representational content of experience. One form of representational unity involves the integration of representational content based around perceptual objects. We don't merely experience colours, shapes, movement and so on; rather, we experience objects as coloured, as having shapes, and as being in motion. The problem of accounting for representational unity has become known as the binding problem.

A third unity relation is access unity. The contents of experience are often available to “consuming systems”—the kinds of systems responsible for intentional behaviour, verbal report, belief-formation, perceptual categorization, memory consolidation, the voluntary allocation of attention, and so on.
Roughly speaking, the members of a set of experiences will be access unified to the extent that their contents are available to the subject's consuming systems.

A final unity relation is phenomenal unity. Experiences are phenomenally unified when they have a conjoint phenomenology; that is, when there is something it is like to experience them together. There is something it is like to have an experience of pain, there is something it is like to see a dog, and there is something it is like to have an experience of a dog and an experience of pain together. One can think of phenomenal unity as a relation that experiences have when they occur as components of a single phenomenal state.2

With these unity relations in hand we are now in a position to examine various conceptions of the unity of consciousness.
2. THE CONSISTENCY THESIS

One conception of the unity of consciousness focuses on representational consistency. According to the consistency thesis, phenomenally unified experiences must be representationally consistent. Baars appears to have the consistency thesis in mind when he suggests that the unity of consciousness involves the idea that “the flow of conscious experience […] is limited to a single internally consistent content at any given moment”.3 Baars himself holds that consciousness is unified in this sense, and a number of other theorists have agreed with him.4 But is the consistency thesis correct?

As the phenomenon of binocular rivalry shows, consciousness is clearly resistant to representational inconsistency. When one's eyes are presented with different images—say, a sunflower and a house—one's visual experience typically alternates between these two images. One typically sees either the house or the sunflower but not both at the same time.5 Nonetheless, imaging experiments reveal that both stimuli are processed.6 But if both stimuli are perceptually processed, why are we conscious of only one at a time? Perhaps the mechanisms of consciousness ensure that inconsistent percepts cannot be phenomenally unified. The phenomenon of multi-stable images points to a similar conclusion. One can see a Necker cube as having one or other of two orientations, but one cannot see it as having both orientations at once. (Try it!)

But although consciousness resists representational inconsistency, the consistency thesis appears to be false. Consider the effects of wearing inverting spectacles.7 Prior to adaptation, such spectacles cause the contents of vision to be inconsistent with the contents of touch: the vase looks as though it's upside down, but it feels as though it's the right way up. Of course, the data from these experiments show that consciousness exhibits a drive towards consistency, for the contents of one's various sensory modalities are, after a period, brought into line with each other. But my point here is that consciousness tolerates inter-modal perceptual inconsistency, if only temporarily.

One might challenge the claim that these cases involve inconsistent contents.8 Perhaps the fact that the visual and tactile experiences involve different modalities entails (or at least suggests) that their contents differ.
Perhaps we should not think of experiences in different perceptual modalities as having the same intentional objects. But I am inclined to resist this objection. It seems to me that the representational contents of the two experiences are inconsistent, in that they could not be simultaneously satisfied. (Some support for this can be derived from the fact that the two experiences justify inconsistent beliefs.)

One might attempt to revise the consistency thesis so as to deal with inter-modal inconsistency. The obvious way to do this is to restrict the consistency constraint to intra-modal contexts.9 According to the revised consistency thesis, phenomenally unified experiences within a single modality must be representationally consistent. (I will leave to one side the tricky question of how perceptual modalities might be individuated.)

Does the revised consistency thesis fare any better than the original version? Perhaps not. Consider the waterfall illusion, first described by Aristotle. If one looks at a waterfall for a period of time, and then directs one's gaze to a stationary object, the stationary object will appear to be moving in the direction opposite to the apparent motion of the first object. Yet, at the same time, it also appears to be stationary.10 Here is Frisby's description of the experience produced by the waterfall illusion: “Although the after-effect gives a very clear illusion of movement, the apparently moving features nevertheless seem to stay still! That is, we are still aware of features remaining in their ‘proper’ locations even though they are seen as moving. What we see is logically impossible!”.11 Are we simultaneously conscious of the target object as both moving and not-moving? Frisby seems to think so, and a number of authors have joined him in this.12 I'm not so sure. I can see the object as moving, and I can see it as stationary—and I can move my attention between these two experiences—but I am not convinced that I ever experience it in both ways at once.13

So-called impossible objects provide another potential counter-example to the revised consistency thesis.14 One of my favourites is the Devil's Pitchfork, also known as the impossible fork:
Figure 15.1. The Devil's Pitchfork.

Does your experience of the Devil's Pitchfork have inconsistent contents? It might be thought so—after all, how else could one see it as an impossible object? But perhaps one doesn't see it as an impossible object all at once. Perhaps one only builds up a perception of it as an impossible object over time. Perhaps there is no point at which one's visual phenomenology has inconsistent content.
When one focuses on the prongs one experiences them as straight, and when one focuses on the “handle” one experiences the middle prong as lying behind the upper and lower prongs, but one does not experience both ends of the pitchfork at once. One builds up a representation of the pitchfork as a whole; this representation has impossible content, but it is not a visual representation. It is, rather, a belief formed on the basis of visual experience. (It is a belief about the kind of object that would have to exist in order for one's experiences of the various parts of the object to be veridical.) Perhaps perceptions of so-called impossible objects do not falsify the revised consistency constraint; nonetheless, I think they do put pressure on it.

There is one final point to consider. Suppose that the revised (intra-modal) consistency thesis were true—what might follow from this? The original consistency thesis suggested that something in the very nature of consciousness might prevent inconsistent contents from occurring within a single state of consciousness. By contrast, the revised consistency thesis promises to tell us more about perceptual modalities and representational content than about consciousness as such.

3. THE AVAILABILITY THESIS

Another common conception of the unity of consciousness focuses on access unity. According to the availability thesis, the contents of a subject's conscious states are globally available—they are available to each of the consuming systems that the subject possesses at the time in question. If the availability thesis is correct, then the contents of a subject's current experiences will be available to the same consuming systems, and hence their consciousness will be behaviourally unified.

Although it is rarely put in quite these terms, something very much like the availability thesis is widely assumed within current approaches to consciousness. Consider, for example, global workspace models of consciousness.15 According to such models, conscious content occurs within a workspace that enables it to be broadcast throughout the subject's cognitive system. Sometimes global workspace theorists appear to identify consciousness with global availability, while in other places they appear to suggest that consciousness is the categorical ground of global availability. Either way, global workspace models hold that the difference between conscious and unconscious content consists in the range of consuming systems to which the content is available: unconscious content is available only to a restricted range of consuming systems, whereas conscious content is globally available to the subject's consuming systems.

The availability thesis accords with normal waking consciousness, but it fits less well with so-called altered states of consciousness, such as dreaming and delirium. The strangest events occur in dreams—a dog turns into an elephant, one's aged grandmother eats a hamster on toast, and the Queen gets married to Lenny Bruce—without the dreamer registering any awareness of the incongruity. Although the contents of dream consciousness may be available to some of the subject's consuming systems, they do not seem to be available to those systems involved in consistency checking, belief-updating, introspection, and (in at least some cases) memory consolidation. The tight correlation between consciousness and global availability seems also to be compromised in delirium.
Delirious subjects lack the integrative, monitoring and mnemonic capacities typically associated with consciousness.16 In short, the association between global availability and waking consciousness appears to have more to do with wakefulness than with consciousness.

There are at least two ways in which the proponent of the availability thesis might respond to these objections. Firstly, she might hold that consciousness can come in degrees, and that the restricted availability of content that one sees in dreams and delirium is a reflection of the fact that these states involve only minimal degrees of consciousness. I do not have much sympathy with this response—whatever exactly a “minimal” form of consciousness involves, it seems to me that there is little reason to think that the phrase applies to dreams and delirium. The phenomenology involved in dreams and delirium is—or at least can be—as rich as that which occurs in normal waking consciousness. Secondly, the proponent of the availability thesis might hold that the correlation between consciousness and global availability holds only in the normal waking state. This is a more plausible response, but in making it the theorist has given up on a global availability approach to consciousness as such.

But the global availability account of consciousness is problematic even if we restrict our attention to the normal waking state. In the remainder of this section I examine a number of syndromes, each of which suggests that consuming systems can have differential access to the contents of consciousness: content can be available to some of the subject's consuming systems without being available to all of them.

In the Dimensional Change Card Sort task children are shown two target cards and asked to sort a series of cards (e.g., red rabbits and blue cars) into piles according to a certain dimension, such as colour.17 Having sorted several cards, children are then told to switch the sorting rule. 3-year-olds typically fail to switch dimensions, regardless of which dimension is presented first. Nonetheless, the children usually respond correctly to questions about what they ought to be doing. Their verbal behaviour suggests that they are conscious of the post-switch rules, yet their sorting behaviour suggests that they are conscious only of the pre-switch rules. In short, the children exhibit a dissociation between knowing and doing.

Research on the visual perception of normal subjects provides other examples of selective availability to consuming systems. Consider the following experiment conducted by Cumming.18 Cumming's subjects were shown a horizontal row of five letters flashed in rapid succession, one after another. This spatio-temporal arrangement of the letters was designed to produce a form of metacontrast masking known as “sequential blanking”. The subjects were then given a visual search task. The instructions were to press one key if the letter J (for example) was present in the display, and to press another key if no J was present: “[w]hen urged to respond as fast as possible, even at the cost of making a good many errors, subjects now tended to respond to the occurrence of a target letter in the ‘blanked’ positions with a fast (and correct) press of the ‘target present’ key, and then, a moment later, to apologize for having made an error”.19 Arguably, Cumming's subjects had experiences whose contents were available for report in one modality (manual button-pressing) but not another (verbal report).20
Marcel's experiments provide examples of similar dissociations.21 Marcel asked normal subjects to respond when they detected a light that was illuminated for 200 ms. Subjects were asked to respond as quickly as possible, and to respond in three ways at once: by blinking, by pressing a button, and by saying “yes”. Subjects often gave inconsistent responses. In a single trial a subject might, say, report with his finger that he had seen the light but fail to report orally that he had seen it. Or a subject might indicate that she had seen the light by pressing the button but fail to say “yes” or blink.22 (Marcel also found that subjects were unaware that their responses were inconsistent.) Again, subjects appear to have had conscious states whose contents were not globally available to consuming systems. On any particular trial, an experience of the light might (say) be available to those consuming systems involved in button-pressing but not verbal report, or vice-versa.

The clinical syndrome known as “anosognosia” also provides prima facie evidence against the availability thesis. Anosognosia involves a lack of awareness of a deficit—typically unilateral neglect or hemiplegia (paralysis on one side of the body). Often this lack of awareness is partial, and the patient will have “dim knowledge” of their condition.23 A patient with hemiplegia may verbally acknowledge her condition but nonetheless attempt to rise from bed or engage in other activities that are obviously precluded to her (such as knitting). Other anosognosic patients behave in ways that indicate that they are aware of their deficit, yet when asked about their condition they resolutely deny that there is anything wrong with them. Still other patients give inconsistent self-evaluations: they may deny hemiplegia but admit that the affected limbs are “lazy” or “naughty”. Hemiplegic patients with anosognosia may claim that they themselves are able to engage in certain tasks that are obviously beyond them—such as climbing a ladder—while maintaining that other people affected by the same impairment would be unable to engage in these activities.

In each of the syndromes just reviewed subjects appear to have conscious states whose contents are only locally available to consuming systems. As such, these phenomena provide prima facie evidence against the availability thesis. Of course, none of these syndromes provides a knockdown objection to the availability thesis. I have suggested that these cases involve conscious states whose contents are only partially available to the subject's consuming systems, but it might be argued instead that they involve the high-level availability of unconscious content. Perhaps the sorting behaviour of children in the Dimensional Change Card Sort task is guided by unconscious representations of the pre-switch rule, and perhaps the visual states that drove the button-pressing (guessing) responses in Cumming's experiments were not conscious. Alternatively, the availability theorist might insist that these states are conscious, but that consciousness (phenomenality) is correlated not with global availability to the subject's consuming systems but with the availability of content to each of the members of a certain subset of the subject's consuming systems. Both lines of response have something to be said for them, but I lack the space to evaluate them here. My goal here is to draw attention to the kinds of cases that put pressure on the availability thesis, and to argue that the thesis is far less straightforward than it is often thought to be.
4. THE UNITY THESIS

The third of my unity theses concerns the relation between subject unity and phenomenal unity. Consider what it is like to be a subject of experience. One typically enjoys a variety of perceptual, cognitive, emotional and agentive experiences, but no matter how numerous, varied, or complex these experiences are, they occur as parts (components, aspects) of a single, global phenomenal state. It is the content of this global state of consciousness that determines what it is like to be you right now. Arguably, this phenomenal unity extends beyond normal waking phenomenology to include even non-standard forms of consciousness, such as those experienced while dreaming and in states of delirium. Building on this thought, one might argue that for any subject of experience, there will be a global phenomenal state that subsumes each of the experiences that the subject in question has at that time. We can call this proposal the unity thesis.24 If the unity thesis is right, then there is a deep and intimate connection between subject unity and phenomenal unity.

Engaging in a full evaluation of the unity thesis is obviously beyond the scope of this chapter. Here, I will be content to argue against an influential objection to the unity thesis—the “split-brain objection”. The basic split-brain syndrome is produced by sectioning the corpus callosum—the bundle of fibres that serves as the primary channel of communication between the two hemispheres of the brain. This procedure has little impact on cognitive function in everyday life, but careful research has revealed a complex array of deficits—and the occasional benefit—in the split-brain.25

The standard methodology for studying perception in the split-brain involves projecting distinct stimuli to the patient's two hemispheres.26 Consider a typical split-brain patient (SB). The word “key-ring” might be presented so that “key” falls within SB's left visual field and “ring” falls within SB's right visual field. The contralateral structure of the visual system ensures that stimuli projected to the left visual field are processed in the right hemisphere and vice-versa. When asked to report what she sees, SB will say only that she sees the word “ring”; yet, with her left hand, SB may select a picture of a key, ignoring pictures of a ring and a key-ring. It is widely assumed that this behaviour demonstrates that SB has two simultaneous streams of consciousness, at least within experimental contexts.27 Two sorts of arguments are given for this conclusion. Some theorists appeal to SB's apparent representational disunity: SB appears to have conscious representations of the words “key” and “ring”, but no conscious representation of the word “key-ring”. Other theorists appeal to SB's apparent access disunity: SB's representation of “key” is available to some consuming systems, and her representation of “ring” is available to other consuming systems, but none (or only very few) of the subject's consuming systems has access to both representations.
I think it is reasonable to grant that if split-brain patients suffer from the kinds of representational and access disunities just outlined, then there is a strong case against the unity thesis. But there is little reason to think that split-brain subjects do suffer from these representational and access disunities. A careful examination of the evidence suggests that conscious perception in the split-brain subject may alternate between the hemispheres, rather than each hemisphere supporting its own stream of consciousness.

The main evidence for this claim comes from research conducted by Levy and collaborators involving chimeric stimuli, that is, stimuli created by conjoining similar stimuli at the vertical midline.28 Since each hemisphere received a different stimulus, one would expect the subject to produce conflicting motor responses if representations of both stimuli were processed up to conscious levels. For example, one would expect the patient to verbally identify the stimulus as face A while using his or her left hand to point to face B. In fact, such responses were vanishingly rare:
For all patients examined, and for tasks including the perception of faces, nonsense shapes, picture of common objects, patterns of Xs and squares, words, word meaning, phonetic images of rhyming pictures, and outline drawings to be matched to colors, patients gave one response on the vast majority of competitive trials. Further, the nonresponding hemisphere gave no evidence that it had any perception at all. Thus, if the right hemisphere responded there was no indication, by words or facial expression, that the left hemisphere had any argument with the choice made, and, similarly, if the left hemisphere responded, no behavior on the part of the patient suggested a disagreement by the right hemisphere.29
Levy and Trevarthen found that asking patients to match chimeric stimuli based on their visual appearance typically favoured the figure presented to the left visual field (implicating the right hemisphere), whereas instructing patients to match the stimuli based on their function typically favoured the figure presented to the right visual field (implicating the left hemisphere).30 But at no point did both hemispheres appear to sustain simultaneous conscious perceptions. Arguably, the reallocation of attention not only changes the contents of the patient's experience, it also changes the consuming systems to which the contents of their experience are available.

How then might we explain the “key-ring” data? Arguably, we can apply the account just given of the studies reported by Levy and Trevarthen to these anecdotal reports. Perhaps the patient's actions of saying the word “ring” and of reaching for a key were successive rather than simultaneous. Although the reports of these “inconsistent” behaviours sometimes suggest that they are simultaneous rather than successive,31 I am unaware of any quantitative support for such claims. Further, even if the behaviours in question are simultaneous, it is possible that the conscious states underlying them might not be simultaneous. Either way, the proposal is that the patients have successive “key” and “ring” experiences rather than simultaneous (and disunified) experiences, and that inter-hemispheric switches of attention might be responsible for this alternation.
The findings that Levy and Trevarthen report indicate that there is no straightforward “split-brain” objection to the unity thesis. Split-brain subjects do, of course, suffer from certain kinds of disunities in consciousness, but there is little reason to think that they have two simultaneous streams of consciousness. Of course, there are other challenges to the unity thesis, but it is beyond the scope of this chapter to examine them.

5. CONCLUSION

In this chapter I have examined three conceptions of the unity of consciousness—the consistency thesis, the availability thesis, and the unity thesis. Each thesis attempts to capture a central respect in which consciousness is, or at least appears to be, unified. I have argued that the consistency and availability theses face serious objections, and I have suggested that an important objection to the unity thesis might not be as potent as it is often assumed to be. But rather than exploring any one conception of the unity of consciousness in detail, my primary goal has been to present a framework for the unity of consciousness which can be used both for understanding various syndromes and for evaluating accounts of consciousness. Filling in this framework is a task that must be left for another occasion.32
NOTES

1. Bayne and Chalmers (2003).
2. In the recent philosophical literature phenomenal unity is frequently referred to by the term “co-consciousness” (Dainton 2000; Lockwood 1989; Hurley 1998; Shoemaker 2003). I prefer “phenomenal unity” rather than “co-consciousness” because “co-consciousness” has been used within psychology for a very different relation (roughly, the relation that experiences bear to each other when they are co-subjective but not phenomenally unified) and because “phenomenal unity” is more descriptively accurate than “co-consciousness”.
3. Baars (1993, p. 285).
4. See, e.g., Tononi and Edelman (1998).
5. Rubin (2003).
6. Logothetis et al. (2003).
7. Kohler (1961); Taylor (1962).
8. Thanks to Chris Maloney for the following objection.
9. Hurley (2000).
10. See Frisby (1979, pp. 100-101) for a description of how to produce the waterfall illusion using a record player. Those readers born after 1980 might need to borrow their parents' record player.
11. Frisby (1979, p. 101).
12. See Crane (1988).
13. Another possibility is that one sees the target object as moving-relative-to-object-X and as not-moving-relative-to-object-Y.
14. Tye (2003).
15. Baars (1988, 2002); Dehaene and Naccache (2001); Dennett (2001). See also this volume, p. 192.
16. Gill and Mayou (2000); Lipowski (1990); Fleminger (2002).
17. Zelazo (1996, 2004).
18. Reported in Allport (1988).
19. Ibid., p. 175.
20. It is possible that the dissociation Cumming discovered is really between early and late reports rather than between button pressing and verbal reports. Lachter et al. (2000) and Lachter and Durgin (1999) found an advantage for early (speeded) reports over slower reports in a meta-contrast masking study, but only when the masking of the stimulus was strong. Information about the target stimulus appears to be briefly available for report before being “over-written” by the appearance of the mask.
21. Marcel (1993, 1994).
22. It might be thought that Marcel's subjects were merely failing to give a positive response rather than giving a negative response. But Marcel reports that he got the same results when his subjects were instructed to make a motor response for negative trials.
23. Bisiach and Berti (1995); Marcel et al. (2004); Vuilleumier (2004).
24. Bayne and Chalmers (2003).
25. See Bogen (1993); Corballis (1995); Gazzaniga (2000, 2005).
26. See Gazzaniga (2000) for a useful introduction to the techniques used to study the split-brain.
27. Davis (1997); Gazzaniga and LeDoux (1978); Marks (1981); Moor (1982); Puccetti (1981); Tye (2003).
28. Levy (1977, 1990); Levy and Trevarthen (1976); Levy, Trevarthen and Sperry (1972); Trevarthen (1974).
29. Levy (1990, p. 235); see also Levy (1977).
30. Levy and Trevarthen (1976).
31. See, e.g., Gazzaniga and LeDoux (1978).
32. Many thanks to Jillian Craigie and Neil Levy for their helpful comments on an earlier version of this chapter.
CHAPTER 16

EXTENDED COGNITION AND THE UNITY OF MIND. WHY WE ARE NOT “SPREAD INTO THE WORLD”

Michele Di Francesco
Where then is the mind? Is it indeed “in the head”, or has mind now spread itself, somewhat profligately, out into the world? […] Every thought is had by a brain. But the flow of thoughts and the adaptive success of reason are now seen to depend on repeated and crucial interaction with external resources. […] In a sense, then, human reasoners are truly distributed cognitive engines: we call on external resources to perform specific computational tasks, much as a networked computer may call on other networked computers to perform specific jobs. Andy Clark
1. TWO NOTIONS OF MIND

According to a model of the mind theorised by post-classical cognitive science, mental processes are embodied and distributed examples of cognitive processing.1 Body and environment contribute to the achievement of our cognitive tasks in such a fluid and integrated way that they can be considered bona fide parts of cognitive agents. According to the extended model of cognition, the “mind” lies at least in part outside the body. What makes a piece of information cognitively relevant is the role it plays, and nothing prevents this role from being played by an external item. In turn, the extended model of cognition leads to an extended model of subjectivity, according to which the subject “is spread into the world”.

The aim of this chapter is to compare the extended mind with the personal mind. The personal mind is the kind of mind we attribute to (human) persons by means of folk psychological intentional language. The personal mind has two fundamental features: it is the locus of subjectivity and it is the locus of rationality. In other words, (i) it makes reference to a “subjective ontology”,2 or, as I shall say, it designates a subjective space which requires intentional language in order to be described. Subjective space is characterised by the first-person point of view; it expresses an individual perspective; it exhibits a peculiar unity and a phenomenology; and its contents are given to the subject in a privileged way. (ii) The personal mind is fundamental, too, when we explain human action; it characterises the “space of reasons”,3 with its normative and intentional features; it allows us to understand rational actions as rational, and gives us the conceptual tools to characterise actions as actions (not merely happenings).
We should not identify the personal mind with the conscious mind. Unconscious mental phenomena may be part of the personal mind. Neither should reference to the personal mind imply a form of radical dualism. The personal mind may (should) be considered as the product of subpersonal processes whose nature is impersonal (third-personal) and causally explainable in terms of neural correlates, functional organisation, and so on (even if no such explanation is available at present). In this case, I shall say that the personal mind is emergent from subpersonal processes.4

In the following pages, I argue that the mere causal-informational connections which characterise a cognitive system in the extended mind paradigm are not sufficient to explain the peculiar kind of unity which is essential to our notion of personhood and subjectivity. The unity of subjective space cannot be explained by means of the causal processes which constitute the extended mind; those processes, in fact, are blind to the boundaries between inner and environmental processes. If we want a slogan: we may have extended cognition, but there are no extended subjects. And in order to explain this fact, we need the concept of the personal mind.
2. THE EXTENDED MIND MODEL OF COGNITION

“Where does the mind stop and the rest of the world begin?” This question opens Andy Clark and David Chalmers' seminal paper The Extended Mind,5 where they advocate a peculiar form of externalism: “an active externalism, based on the active role of the environment in driving cognitive processes”.6 The conceptual background of active externalism is a model of the mind theorised by post-classical cognitive science. According to this model, (human) cognitive agents have a strong tendency “to lean heavily on environmental support”—making use of various instruments which vary from pen and paper to the nautical slide rule, “and the general paraphernalia of language, books, diagrams and culture”.7 Environmental support allows us to perform epistemic actions, which are a kind of (cognitive) activity in which “a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognising as part of the cognitive process”.8 According to Clark and Chalmers, this allows us to consider the world as part of the cognitive process.

One may think that, while brain-based cognitive resources are designed by nature to allow humans to obtain their cognitive goals, external resources are only contingently available (a subject may forget her pen, or her notebook). Clark and Chalmers reject this kind of objection, claiming that mere contingency does not rule out cognitive status. The reason is that, from the extended cognition perspective, what matters is the causal coupling between the organism and its environmental resources, and external features are “just as causally relevant as typical internal features of the brain”.9 As we shall see, this conclusion may be acceptable in the case of extended (and external) processing, but it faces serious problems when applied to the extended mind.
It is important to note that, despite the obvious differences, nothing prevents classical and situated cognitive sciences from sharing some metaphysical assumptions about the nature of mental processes. Even if post-classical cognitive science stops thinking of the mind/brain as a (self-contained) computer, its main assumptions are still compatible with the fundamental idea according to which information processing is the basis of thought.10 The difference from the old paradigm is that—this time—part of the processing takes place in the environment, and/or takes advantage of the physical constitution of the cognitive system involved. According to the extended model of cognition, fundamental aspects of the mind (such as beliefs and desires) lie outside the body: “There is nothing sacred about skull and skin. What makes some information count as belief is the role it plays, and there is no reason why the relevant role can be played only from the inside of the body”.11

In this sense, the extended mind is a natural development of a conception of cognition based on information processing, which is not sensitive to the internal-external distinction. It adopts the notions of information processing and explanatory role as key concepts. This entails the following: (a) when we explain a cognitive activity, only causal processes are involved; (b) causal processes (in particular information processes) are blind to the boundaries between inner and environmental processes, because mind, body and environment are parts of an integrated system. We then have to redraw the boundaries between cognitive systems and the world. Even in a more cautious analysis of the issue, one of the authors goes so far as to claim that such a new picture of the mind “threatens to reconfigure our fundamental self-image by broadening our view of persons to include, at times, aspects of the local environment”.12 In the following pages I would like to show that this reconfiguration is not necessary: persons are not at risk of spreading into the world. There is at least one important sense in which the coupling between persons and their extra-bodily cognitive resources is a relation between two different kinds of phenomena.
3. COGNITION, CONSCIOUSNESS, AND THE EXTENDED MIND

An obvious criticism of Clark and Chalmers' view is that external cognitive processing has no phenomenological content. There is nothing that it is like to be an external support for cognition: why then call such states “mental”? They should be considered tools that a mind uses, and not part of the mind itself. The authors' reply is that the objection confuses the mental with the conscious. While it is implausible that our consciousness goes outside our body, there are many cognitive processes (connected with memory, language, and skill acquisition13) which are not conscious. In fact, the distinction between mental and conscious is a commonplace in contemporary cognitive science, so any criticism of the extended mind model based on the absence of consciousness in the external processing is a symptom of confusion.
However, the issue is more complex. To characterise a form of cognitive processing which takes place in the external environment as “mental” suggests that it is in some way homogeneous with other kinds of processes which we normally call “mental”. But this is disputable. There is no doubt that, if we adopt the extended mind paradigm, there is causal homogeneity between internal and external processes. But even the authors admit that the simple presence of a causal connection is not enough to secure the existence of a mental phenomenon. It would be silly to say that I know all the facts in the Encyclopaedia Britannica “just because I paid monthly installments and found space for it in my garage”.14 Note that the same problem arises when we compare mental activity and neural activity. There are many neural processes which are causally necessary for the existence of mental phenomena but are not called “mental”: the presence or absence of a certain neurotransmitter, the level of myelin on the axons, and the glucose rate in the blood are obvious examples. So, even if we confine ourselves to a strictly cognitive notion of mind and consciousness (ignoring for the moment any reference to the phenomenological level15), it is natural to distinguish between genuine mental processes and processes that are causally relevant for the emergence of the mind but are not part of the mental realm themselves.

Note that we may well call “mental” many forms of cognitive processing that are not accessible to consciousness, but such processing should be strictly related to “conscious” processing. In fact, when we deal with the subpersonal level operating in the brain, the outputs of brain-realised cognitive processing become accessible to our conscious mind in a more intimate and direct way than those processed by external devices. And the nature of this intimate relation has a direct influence on the way our conscious mind works and interacts with other parts of our cognitive system. This point is obviously strengthened when we take into account subjective aspects of consciousness. Phenomenological content may affect subsequent mental processing;16 consciously experienced beliefs are among the building blocks of “the space of reasons”, and are thus essential to our explanation of human agency.

For brevity's sake, we may introduce the distinction between “subpersonal” and “nonpersonal” cognitive processing. Both are in a sense “external” with regard to the personal mind (generally speaking, subpersonal brain processes are not directly accessible to the subject), but they bear different relations to the personal mind. The crucial point is that, when the contents of subpersonal processing based on internal support have access to consciousness, they show different properties from those based on external supports. If I write a note in my notebook and later read it, the note is presented to me in a different way from a memory. For example, notes do not exhibit immunity from error through misidentification.17 When I remember something I don't ask myself: “here are some memories; are they my memories?” (if I do, this may be a symptom of mental disorder). On the contrary, I may well ask myself “is that annotation mine?”. Apart from schizophrenia et similia, I don't ask “are these thoughts in my mind my thoughts?”, while I may ask myself “is that handwriting my handwriting?”. We shall return to this point later.
The “mineness”18 of experience introduces the issue of the unity of the mind: the fact that conscious states are normally part of a unified conscious field, and that “this total state is not just a conjunction of conscious states. It is also a conscious state in its own right”.19 Mineness and unity are also related to the perspectivalness of experience: the fact that our conscious experiences take the self as their common centre. Or, to use another metaphor, the fact that the subjective space where every conscious experience is located has a relational structure, with the self as one of the relata—the intentional structure of conscious experience takes the self as a stable element, relating to changing object-components.20

The ways in which mental contents are available to the personal mind entail an immediate matching between the subject and her “internal” states. That is, when the subject has (conscious) access to the contents of her personal mind, she immediately and directly experiences them as part of her experience. When she entertains I-thoughts (thoughts that are typically expressed by the first-person pronoun “I”),21 and thinks of herself from the first-person perspective, there is no room for mistakes of identification. For the subject to have an experience is to immediately and directly experience it as her experience.22

Of course we may use the notes in our notebooks to remember something, simply by reading them. When we acquire information by conscious perception we may take the result of external processing as input for our personal mind. But the connection between the information so acquired and the personal mind is quite different from the connection which originates from subpersonal processing. In the latter case (a) the content is immediately and directly available to the subject; (b) it exhibits a co-presence relation with the other (conscious) mental states of the subject; (c) it may be taken as a non-inferential starting point for a judgement.23 All these features of the personal mind are absent from the extended mind.

At this point, it is easy to imagine a criticism of my line of thought: instead of rejecting the extended mind paradigm, we have rather changed the subject, switching from extended cognition to consciousness. I would like to stress that my point is not that extended cognition is impossible. The purported irreducible subjective character of the (conscious) personal mind is not direct evidence against the extended mind model, since that model deals with a more general notion of cognitive processing. My (first) point is that there is a gap between extended mind and personal mind, and that the conceptual resources available to the extended mind model are not sufficient to fill the gap. And (my second point) if we cannot fill the gap, there is no future for the notion of the extended subject.
4. FROM EXTENDED COGNITION TO EXTENDED MIND?

The existence of a gap between extended and personal mind suggests that there are two kinds of informational processing connected to our cognitive capacities. The first is constituted by those subpersonal processes that offer direct input to the personal mind. The second is the wider set of computational activities that are causally relevant to the achievement of a determinate cognitive goal. This wider set of cognitive processing may make use of external support, and in this case it interacts only indirectly with the personal mind.
We should note that the interesting distinction here is not internal vs. external, or biological vs. mechanical. Let us suppose that it were possible to install in our brain electronic implants which made a certain kind of information accessible to our conscious mind in the immediate and direct way that biological subpersonal processes do.24 In this case, I think that we should consider them as examples of the first kind of information processing, since they would offer direct input to the personal mind.25

There may be many theories of the relation between subpersonal processing and the personal mind. A moderate version tries to limit the “dualistic” consequences of the personal/subpersonal distinction. For example, it may present it as a “change of destination” of the same input, as in the following model, proposed by Gareth Evans:

We arrive at conscious perceptual experience when sensory input is not only connected to behavioural dispositions […]—perhaps in some phylogenetically more ancient part of the brain—but also serves as the input to a thinking, concept-applying, and reasoning system; so that the subject's thoughts, plans, and deliberations are also systematically dependent on the informational properties of the input. When there is this further link, we can say that the person, rather than just some part of his brain, receives and possesses the information.26

Here, even if the personal level is a new emergent aspect, a systematic connection is required between the content of the thoughts exercised by the subject and the informational properties of the inputs.27 More radical versions may choose to underline the distance between “the space of causes” and “the space of reasons”.28 The moderate and the radical approaches will offer us different treatments of the connection between thoughts and behaviour, but I am not interested in this aspect of the issue.29 What is in question now is the emergence of the personal level from the subpersonal one. In particular, I claim that, while it is conceivable that the personal mind may emerge from subpersonal processes (from the “subpersonal mind”), it is very difficult to find an appropriate relation connecting extended and personal mind. The differences between the two ways of treating mentality are so great that we may suppose a kind of incommensurability between them.

An obvious reply could be that it must be possible to fix within the extended mind a proper subset of processes which can be considered as the emergence-basis of the personal mind. But the point is that it is not possible to individuate this subset by means of the conceptual resources of the extended mind paradigm. For the very reasons given by its proponents, the extended mind model invalidates and nullifies the internal/external distinction.30 It is by starting from the personal mind that we can discriminate the extended mind processes which are suitable candidates for the subset.
To clarify this point, let us consider the foundations of the unity of the extended mind. We know that it is based on causal-informational relations (those relations that are involved in the cognitive processing developed by the system). Is any relation good enough? Suppose that an apple hits my head, causing a thought about the law of gravity. Is the apple part of the process of thinking? It seems obvious that a positive answer raises problems—a fact that Clark and Chalmers acknowledge, as is apparent from their discussion of what we may call the “portability objection”. The brain—the objection says—comprises “a package of basic, portable, cognitive resources” which may incorporate bodily actions, but which does not encompass “the more contingent aspects of our external environment”.31 The authors' reply is that “mere contingence of coupling does not rule out cognitive status”,32 for two reasons. The first is that in the future we may be able to plug technological modules into our brain (for example, a chip that increases our short-term memory). Our reply to this is that if such an implant produced a genuine form of memory which offered direct input to the personal mind, it would simply represent a non-biological development of the biological bases of subpersonal processes—and the result of its processing would not be external to the personal mind. Quite a different situation occurs when we consider truly external devices such as pocket calculators, notebooks, etc. In this case, to answer the portability objection, Clark and Chalmers acknowledge that reliable coupling is required. To quote a related work by Clark:
Mind cannot usefully be extended willy-nilly into the world. […] The notebook is always there—it is not locked in the garage, or rarely consulted. The information it contains is easy to access and use. The information is automatically endorsed—not subject to critical scrutiny, unlike the musings of a companion on a bus. Finally, the information was originally gathered and endorsed by the current user (unlike the entries in the encyclopaedia).33
These requirements sound reasonable. In fact, from my point of view, they try precisely to mirror, at the causal level, the phenomenological properties of the personal mind we have described above. We may describe their function by saying that they allow the passage from cognitive processing to mind. But from the extended mind perspective, I claim that they are completely ad hoc. Let us take the third requirement as an example: "the information is automatically endorsed—not subject to critical scrutiny". This is exactly what happens when the result of subpersonal processing becomes accessible to the conscious, personal mind of a subject. The resulting mental item is "just there", as a part of the integrated mind of the subject, poised for thinking, reasoning (and feeling). But why should such an uncritical attitude be extended to the content of the notebook? In fact, if I read a note
I can doubt it, or I can reject it as unbelievable; I can suspend my judgement about its authorship, and so on. I can carry out all these acts regardless of any explicit promise I may have made previously, such as "I promise I'll never distrust my notebook entries". In the extended mind model, the mind is a Parfittian entity,34 a kind of club, collecting any processing that leads to the accomplishment of a cognitive task. And it is a democratic club: no processes can be excluded, according to the club rules. The restrictions described above are meant to facilitate the passage from processing to mind, but they implicitly rely on a view of mind which comes from the "personal" side—which refers to our subjective inner life as first-personally experienced. Of course "reliable coupling" is a useful biological requirement, but it has nothing to do with whether or not a process is an example of information processing. The extended mind model needs a subset of processes which are constituents of a reasonably integrated subject (no apples within it). To strengthen their position, the extended mind's supporters have to navigate the dangerous waters of subjectivity. Is an extended subject possible?
5. FROM EXTENDED MINDS TO EXTENDED SUBJECTS?
In their target paper, Clark and Chalmers explicitly claim that they are not simply speaking of external processing: they want to describe extended minds. As we saw at the beginning of section 3, they make a distinction between experiences (which may be determined internally by the brain35) and other mental states, such as beliefs, which can be constituted partly by the external environment. We have already noted that it is not obvious that such a distinction is tenable, since the phenomenal component of our mental contents may be relevant to the way we compute them. But in any case, beliefs are sufficient to make room for subjective spaces and rationality in our framework—shifting, so to speak, from cognitive agents to subjects. In human beings at least, beliefs can be consciously entertained, consciously endorsed or rejected, and they are referred to in our explanations of action. In other words, beliefs (and other propositional attitudes) have access to "the space of reasons"—and to its normative and intentional features, which are coherent parts of the personal mind's territory. In any case, something is a belief for the extended mind paradigm if it conveys, in a reliable way, content that is causally relevant to the achievement of a cognitive goal. The reliability condition must be added to prevent purely contingent causal processes from becoming part of the mind (as in the apple example above). So described, however, a belief is hardly definable as "mental", for the reasons I have proposed above. To characterize a vehicle of content as "mental", we should require that it can be taken as input for the personal mind of a subject—exhibiting in this way the right connections with his phenomenology, rational agency and capacity for thought. The point is not terminological, but substantial: of course, we may call anything we want "mental". But, following the extended mind usage, we miss the distinction between subpersonal and nonpersonal cognitive processing. And the
introduction of the above-mentioned requirements to secure "reliable coupling" does not help. To clarify this point we may look at the story of Otto and Inga.36 Otto suffers from Alzheimer's disease, and carries with him a notebook which plays the role of his biological memory. Whatever new information Otto learns, he writes in the notebook. Let us suppose that he is told about an exhibition at the Museum of Modern Art, and that he decides to go to see it. He then looks in the notebook for the Museum's address, finds it, and so goes to the Museum. Now, if we compare Otto's story with that of Inga—a friend of his who heard about the exhibition and simply remembered the Museum's address—we may think that Otto's notebook plays exactly the same cognitive role as Inga's biological memory, especially if we introduce suitable restrictions to secure "reliable coupling": Otto constantly uses the notebook, he easily finds any information it contains,37 he automatically endorses it,38 and so on. "Otto's and Inga's cases seem to be on a par: the essential causal dynamics of the two cases mirror each other precisely".39 Causal dynamics is not all, however. It fits pretty well with an enlarged version of cognitive processing, but it simply misses the distinction between the property of being an external item accessible to perception and the property of being an internal content accessible to thought. Both properties are causally relevant to the explanation of Otto's behaviour; but they are, nevertheless, different in kind. Note that the point here is not the simple lack of a phenomenology associated with the retrieval of information in Otto.40 The point is that the absence of a phenomenological dimension associated with the notebook entries offers evidence that these entries are not potential entries of Otto's personal mind—they do not contribute to making him the particular subject he is. The connection between Otto's mental content and the notebook entries is causal, but not motivational: it does not explain Otto's acts as actions. In other words, to be considered a reason, the notebook content should be assimilated within Otto's personal mind. And this is not the case. If Otto reads in the notebook "I want to go to the exhibition" and he doesn't go, this is not a case of weakness of the will. If the notebook contains incompatible projects, this does not amount to an inner conflict. If Otto reads "I can fly like Superman", but he doesn't believe it, he is not contradicting himself; and so on. Of course, according to the requirements proposed to secure "reliable coupling", in many respects Otto functions as if the notebook were part of his personal mind. As we have seen, the reason is that the requirements mirror, at the empirical level, the conceptual requirements which define the personal mind (as shown by phenomenological investigation and logical analysis). But are constancy, accessibility and automatic endorsement, acquired ad hoc, sufficient for saying that we may have subjects who are "spread into the world"? According to the notion of subjectivity we adopted before, the answer is negative: Otto is not an extended subject who carries on fairly well thanks to his extra-mental prosthesis. He is an ill person, who can attain his cognitive goals by means of external support. But perhaps this kind of criticism simply begs the question. Perhaps we should enlarge our notion of subjectivity to grant that integrated systems may be called "subjects".
I don't think that this is a good idea: rationality, normativity and
phenomenology are constitutive aspects of subjectivity which do not extend outside the personal mind. If we want to renounce them, we had better stop speaking of subjects.41 Let us suppose, however, that we may have extended subjectivity: "a coupling of biological organism and external resources";42 to avoid confusion, let us call this enlarged entity, in our example, Super-Otto. Super-Otto is an enlarged (expanded) system which may be compared with Otto. Now, to demonstrate that they exemplify two different kinds of entities, I shall consider the very different ways in which they obtain their "mental" unity. In fact, many aspects of the unity issue have already been anticipated. When Otto exercises his perceptual abilities to obtain information from his notebook, the contents he acquires are connected to him differently from those contents which are available to him (the subject) as the emergent result of some subpersonal processing. In the latter case, they are part of a phenomenological and unitary mental space; they stand in conceptual, normative and motivational connections to other mental states; (when occurring as components of I-thoughts) they exhibit "immunity from error through misidentification"; they cannot be expressed by a definite description. The ways in which mental contents are available to the personal mind entail an immediate matching between the subject and his "internal" states. We are then entitled to speak of a personal mind when the subject is directly and immediately aware of his experience as his experience.43 None of this can be said about Super-Otto: he/it is a real Parfittian entity, which can incorporate any form of (reliable) information processing44—the only necessary glue being a causal connection. Of course we may try to reconstruct within his/its enlarged mental space the distinction between subpersonal and nonpersonal processing—thus distinguishing the processes that affect the unity of the "pre-mental" level from those which affect mere cognitive processing. But, again, this distinction is made starting from the personal level and looking "down" at the "subvenient" levels (which can be taken as the emergence-bases of mental properties individuated at the personal level45). This means that the concept of the personal mind is our only access to the distinction between the (genuine) subpersonal processes which can be taken as the emergence-basis of a subjective space and the other nonpersonal informational processes which characterise the extended view of cognition. The extended mind paradigm is not enough to capture fundamental aspects of the phenomena we call "mental".
6. CONCLUSION: OTTO, SUPER-OTTO AND THE PERSONAL MIND
If the reflections proposed here are convincing, the reader will be prepared to accept the idea that the perspective of the personal mind is irreducible to the perspective of the extended mind. Cognitive processing is not sensitive to the internal/external distinction and it is blind to the boundaries between inner and environmental processes. However, when we take mind, body and environment as part of an integrated system, we lose sight of the subject and its mental space. But since
personal mind phenomena do exist, the world-description we arrive at by means of the extended mind paradigm is incomplete. Now a new question arises: let us concede that the perspective of the personal mind is irreducible; but what if it is also irrelevant? In other words, are personal mind phenomena important data if we want to explore the "cognitive dynamics" generated by the relation between a (human) organism and its environment? Or are they just cognitive fossils, memories of a primitive stage of human development, of very limited importance when we try to understand the essential features that characterise our species today? We may perhaps credit Clark (1997, 216) with a similar conception. Here Clark addresses the question as to whether the "putative spread of mental and cognitive processes into the world implies some correlative (and surely unsettling) leakage of the self into the local46 surroundings?". Clark's answer is "Yes and No", and the "No" component depends on the fact that he credits the subject with conscious contents which supervene on the individual brain. But then he specifies that such conscious contents are not very important: they are "at best snapshots of the self considered as an evolving psychological profile".47 If we were limited to just these conscious contents, we would miss the evolution of reason and thought, characterised by the intimate and complex interplay between mind, body and environment described by the extended mind model of cognition. I think that this view contains more than a grain of truth when it stresses the limits of self-consciousness, but that it seriously underestimates the importance and explanatory richness of the personal mind. It also underestimates the personal mind's extension, which is not limited to ongoing conscious phenomena, but contains all the phenomena which can become accessible to the mind or (more generally) can serve as input to a thinking, concept-applying, and reasoning system. From this perspective, our personal mind is relevant to the explanation of behaviour not only because of the importance of emotion and feeling in the construction of self and rationality.48 Another reason is that the personal mind may internalise, so to speak, aspects of the cognitive environment—first of all, language.49 Cultural development in humans allows personal minds to incorporate part of the social environment simply through concept- and language-acquisition. More literally: internalised language is part of the personal mind, and internalised language makes our subjective world richer and deeper. The mental life of a language-speaker is qualitatively different. To give one example, let us suppose that we make a mistake that greatly affects our life. We make the wrong decision about a job and, as a consequence, our personal life runs into serious trouble. When we realise the consequences of our mistake, we may describe our mind as full of sorrow, grief, misery, remorse, sadness, regret, contrition, and so on. The more conceptual resources we have, the more fully we will be able to analyse our inner states, and the more fine-grained this analysis will be. And this is just the beginning. These mental states will motivate actions and conduct based on beliefs, opinions, desires and hopes, whose coherence and desirability we can evaluate in advance. In this sense, the mental life of a subject who entertains these kinds of mental states is not a static blueprint, but rather a dynamic reality. All these states of mind are detected
by an introspective ability powered, or even made possible, by language.50 Note, however, that—unlike in the extended mind model—here language is not an external part of a boundary-less entity, but has been internalised to act as a proper part of an integrated mind. I think this is a very important difference, and it is probably not by chance that Clark himself (1997, 198) wants to distinguish his position from Daniel Dennett's apparently similar claims (1991). Dennett says that our cognitive abilities depend only partially on our brain "hardware"—innate structure.51 More important are the ways in which culture and language affect the plasticity of the brain. Public language acts on a programmable brain, and so modifies its neural organisation. So—in Clark's words—for Dennett public language is both a tool and a source of brain (mental? cognitive?) reorganisation. In opposition to this, Clark proposes the extended mind paradigm, which is "inclined to see [language] as in essence just a tool—an external resource […]".52 On the contrary, I think that Dennett's model is useful in order to see how the personal mind is more than mere consciousness—and that it is relevant. Powered by internalised language (and other external practices, such as mathematics), the personal mind has become rather efficacious and complex—very different from the extension-less locus of immediate consciousness depicted by the extended mind model. Note that, while I agree with Dennett that our personal mind is relevant, I disagree with him in that I consider it irreducible.53 As is well known, the self for Dennett (1991) is a virtual entity whose unity and coherence are fictitious, and which is reducible to subpersonal distributed and parallel processes. I think instead that we should think of the subject as an emergent entity, endowed with causal powers and the capacity to interact with the real world. The thesis of the reducibility of the personal mind to subpersonal brain activity, however, is quite a different issue from that of the reducibility of the personal mind to the extended mind. To repeat it for the last time: if we adopt the extended mind stance, any distinction between mind and world vanishes. We cannot isolate—by means of a purely impersonal causal language—those processes that we take as internal to the subject from those that lie outside the realm of subjectivity (with its normative and intentional character). The very existence of a unitary subject is revealed only when we adopt the personal mind stance. To express it in a single sentence: causes alone cannot delimit the subject from the inside; but if we select the relevant causal chain by starting from the self, then we are already outside causal language. In saying this, I am more interested in defending the necessity of the personal mind than in criticising the idea of extended cognition itself. Unlike other critics (whose objections are discussed, for example, in Clark 2005 and 2006), I consider extended cognition talk useful and insightful in many respects, especially when it casts light on the importance of social and cultural factors in shaping human cognition (and brain function). There is no reason to deny that extended cognition is a real and very important phenomenon. First, our cognitive tools dramatically change the way our mental tasks are developed and articulated. Second, socially powered mental phenomena expand the capacities, aims and performance of our personal
minds. As a consequence of their (biologically based) openness, human minds (and human beings in general) are open to technological complements. But to be so incorporated, cognitive technologies must produce an internal counterpart (external support is not enough). In this sense, even if external cognitive devices shape our personal minds, and our personal minds are part of the world, we—the subjects—are not spread into the world. So far as agency and thought are concerned, we are unified entities, individuals with a perspective on things—who are not only in the world, but also have a world; we are subjects of acts, and rational entities.54
NOTES
1. See this volume, pp. 15 ff., 87.
2. See Searle (1992).
3. See Sellars (1956); McDowell (1992).
4. Emergentism claims that reality is structured in different levels and that, when a given level reaches a certain degree of complexity, new properties emerge. These properties are not explainable and not predictable on the basis of knowledge of their lower-level bases. They also bring into existence new causal powers, which can exercise their action upon lower levels. There are in fact different kinds of emergentism, but I am not taking any position here about the kind of emergentism (radical or moderate) which should be adopted. See Di Francesco (2005) for a discussion of various kinds of emergentism.
5. Clark and Chalmers (1998).
6. Ibid., p. 7.
7. Ibid., p. 8.
8. Ibid.
9. Ibid., p. 9.
10. See Di Francesco (2002, 2004) and Marconi (2001).
11. Clark and Chalmers (1998, p. 14).
12. Clark (1997, p. 214).
13. Clark and Chalmers (1998, p. 10).
14. Clark (1997, p. 217).
15. We may adopt Ned Block's distinction between access and phenomenal consciousness, and confine ourselves to access consciousness. See this volume, p. 4, n. 4, and chapter 14 passim.
16. This claim could be criticised by a defender of the epiphenomenal character of phenomenal consciousness. But I think that there is evidence of a causal role played by phenomenal properties. To propose just one example, it may be argued that phenomenal properties play a causal role in informing the higher-order cognitive system about the nature and origin of the acquired information ("source monitoring"—see Johnson 1988; Johnson, Hashtroudi and Lindsay 1993). In this sense, they are cognitively relevant, since they may have causal efficacy in the process of evaluation of belief and decision making (see Zalla 1996, and this volume, pp. 194-195).
17. See Evans (1982, chapter 7).
18. Metzinger (2004).
19. Bayne and Chalmers (2003, p. 27).
20. I take the unity of the personal mind as a (phenomenological) fact which should be taken into account in our theorising about the mind. However, I admit that my analysis is rather sketchy and prima facie open to the criticism that new results in cognitive science and the brain sciences may compel us to give up our phenomenological intuitions. Bayne (this volume, chapter 15) presents a detailed discussion of the many ways in which we may interpret the unity of consciousness thesis. He also shows that experimental or clinical case studies, or the adoption of certain general models of consciousness, may raise doubts about (various versions of) the unity thesis. The issue is rather complex, but I think there is room to defend the unity thesis if we adopt an emergentist picture of the mind, according to which cognitive disunity at the sub-personal level is compatible with (emergent) unity at the higher personal level. Personal mind unity (which is different from the unity of consciousness) is achieved by unconscious agencies; but it manifests itself as a real (and causally efficacious) phenomenon—operating at a different level from that of its sub-personal basis.
21. See Evans (1982, chapter 7).
22. Here again I take I-thoughts as emergent upon a set of abilities and informational links among our bodily states, perceptual states, memory states and so on, whose complexity and divisibility into parts are not relevant to our acknowledgement of the presence of a unitary subject of experience. Of course, preconditions for the emergence of I-thoughts must be satisfied in order for I-thoughts (and subjectivity) to exist. But at their level of emergence, I-thoughts and subjectivity exhibit properties which must be considered primitive and are constitutive of our idea of a subject of experience.
23. This means that it is part of the space of reasons. Note that this typically holds for beliefs and other cognitive states, without taking into account phenomenal states as such.
24. See Clark (2003, chapter 1) for a presentation of this issue.
25. So we agree that "there is nothing sacred about skull and skin". The internal/external distinction is relative to the (personal) mind, not to the body.
26. Evans (1982, p. 58).
27. Ibid., p. 159.
28. See Sellars (1956); McDowell (1992); Brandom (2000).
29. I also explicitly avoid any reference to scientific theories of consciousness—confining myself to the philosophical field. The reason is that there are no widely accepted scientific theories of consciousness. As concerns the two philosophical perspectives described here, it is perhaps of some interest that we may read the moderate and the radical versions as based on two different interpretations of the emergence relation. See Di Francesco (2005) for these interpretations.
30. See however n. 22.
31. Clark and Chalmers (1998, pp. 10-11).
32. Ibid.
33. Clark (1997, p. 217).
34. See Parfit (1984).
35. Clark and Chalmers (1998, p. 12).
36. Ibid.
37. A very difficult requirement to fulfil, and not only for an Alzheimer's patient—especially if we consider the huge amount of information the notebook would have to contain to be a viable alternative to our biological memory. See Marconi (2005) on this point.
38. Here again we may doubt whether it is really possible to suspend the critical evaluation of acquired information when it comes by way of perception and reading, rather than by mere presentation as content of the personal mind.
39. Clark and Chalmers (1998, p. 13).
40. Ibid., p. 16.
41. Nothing that I say in this paper, in fact, shows that such an eliminativistic attitude is mistaken. See Metzinger (2003) for a very impressive attempt in this direction. But in this case I think we should admit that there are no subjects (and stop speaking of extended subjects, subjects spread into the world, and so on).
42. Clark and Chalmers (1998, p. 18).
43. Of course, Otto is an ill person. So sometimes he cannot achieve sufficient integration of cognitive resources to act (and think) properly, and he can make use of technological aids. But in this case we say that his personal mind is impaired, and that he uses extra-mental devices—not that his mind encompasses those devices, which are more like wheelchairs than transplanted legs.
44. Marconi (2005) discusses an interesting thought experiment in which a father's mind is part of the extended mind of a lazy daughter (who uses it as an automatic translator from Latin into Italian).
45. In this sense, these subpersonal properties exhibit a form of ontological dependence on the emergent mental properties. This is why we can consider them, in a sense, mental. (See Di Francesco 2005 for the ontological analysis of this issue.)
46. Why local? In the extended mind paradigm nothing prevents the self from being spread to the most remote regions of the world.
47. Clark (1997, p. 216).
48. The connection between reason and emotion is a widely accepted thesis in contemporary cognitive science and neuroscience. To drop one name, we may simply refer to Antonio Damasio's work in this field. See Damasio (1995, 1999, 2003).
49. Unlike the extended mind's causal model, this process is based on brain modifications connected to language acquisition and verbal competence, and it conforms pretty well to the idea of brain processes as the emergence-basis of thought and subjectivity.
50. I shall not address here the question as to whether mental states themselves are created by language, even though, generally speaking, the relation between language and consciousness is probably quite relevant to our present issue.
51. Dennett (1991, p. 219).
52. Clark (1997, p. 198).
53. A nice set of arguments against self-elimination can be found in this volume, chapter 17 (even though the authors are cautious with regard to Dennett's real position).
54. I would like to thank Tim Bayne, Diego Marconi and Massimo Marraffa for comments on earlier versions of this paper.
C. Agency and the Self
CHAPTER 17
EXTREME SELF-DENIAL
Ralph Kennedy and George Graham1
A man that disbelieves in his own existence is surely as unfit to be reasoned with as a man that believes he is made of glass.
Thomas Reid
Our topic, extreme self-denial, is a kind of nihilism about personal existence. In its most striking version it's the view a person would entertain by thinking the thought, "I do not exist". It's closely related to a not uncommon view discussed (but not endorsed), for instance, by Owen Flanagan as "the death of the subject",2 according to which there are no subjects, no agents, no selves. The connection with extreme self-denial is straightforward: if there really were no subjects, no agents, and no selves, then there would be no me or you. We think it premature to announce the death of the subject, and we believe no one has good reason for thinking herself not to exist. In this chapter we attempt to explain and undermine extreme self-denial.
1. IDENTIFYING DENIERS
Extreme self-denial faces a simple and apparently decisive objection: if I deny that I exist, then I exist. My denial that I exist cannot, therefore, be true. (It should be clear that one cannot get around this objection by saying, "well, maybe I only think that I deny that I exist".) Call this the objection from Descartes. In his Meditations, Descartes wrote that although he could manage to believe that he had no body and that the world of physical objects was an illusion, he could not pretend that he did not exist.3 A.J. Ayer, transposing this Cartesian thought into a linguistic key, said of the words "I exist": "[…] no one who uses these words intelligently and correctly can use them to make a statement which he knows to be false. […] If he succeeds in making the statement, it must be true".4 Given this objection, it is tempting to ask, "Why would any clear-thinking person endorse extreme self-denial? Isn't its absurdity simply patent? Why even discuss such a view? Who could possibly hold it?" We need to confront this question at once; we wouldn't want to waste our time or the reader's attacking a straw man.
There definitely are at least some people who claim not to exist or to be dead: claims of this sort are characteristic of victims of Cotard's syndrome.5 But this fact is not enough to show that, given our purposes, we are not attacking a straw man; we mean to be going after thoroughly rational, non-delusional philosophers (and others) who, though in some way committed to extreme self-denial, would presumably be appropriately sensitive to cogent considerations against the view. So, are there really any such people? There certainly is, or was, at least one: Peter Unger. He once wrote, "[o]f course Ayer is right in pointing to the absurdity of a person's trying to deny his own existence". But he promptly went on to announce his intention to deny his own "putative existence".6 Perhaps it's a measure of the force of the objection from Descartes that few philosophers embrace extreme self-denial so straightforwardly. Even so, it's not uncommon for philosophers and others to say "there is no such thing as the self", "the self is an illusion", "there is no thinking, experiencing subject", and other such things. Such people, even if they stop short of denying their own existence, are of interest to us. They seem to mean to deny the existence not of themselves, but of their selves: what are we to make of this? We have considerable sympathy for Anthony Kenny's caution that it's a "philosophical muddle to allow the space which differentiates 'my self' from 'myself' to generate the illusion of a [special] metaphysical entity distinct from, but obscurely linked to, the human being who is talking to you".7 Philosophers who deny, in the manner of the last paragraph, the existence of the self often seem to be doing precisely what Kenny cautions against: allowing the space that distinguishes "my self" from "myself" to generate the specter of a special metaphysical entity (the self) and then denying the existence of this problematic entity. We resist giving such metaphysical significance to that space, and accordingly have trouble making good sense of one who denies the existence of her self but not of herself. If you deny the existence of your self, why should you not affirm "I do not exist"? As Peter van Inwagen writes, "what could [the referent of a typical use of 'I'] be but the self of the speaker?".8 Extreme self-denial may be part of a general philosophical outlook constructed to bring one's conception of oneself into harmony with one's conception of the rest of the world.9 One has to enter into the outlook to appreciate not just the reasons for the denial, but whether it is indeed a denial. Unger was led to extreme self-denial by reflections on vagueness.10 Derek Parfit, the Buddha, and many others commend views of a self-denying kind as being not only true but conducive to serenity in the face of the vicissitudes of life. Parfit writes, "[i]nstead of saying, 'I shall be dead', I should say, 'There will be no future experiences that will be related, in certain ways, to these present experiences'. Because it reminds me what this fact involves, this redescription makes this fact less depressing".11 Unger is (or was) an extreme self-denier. Can the same be said of Parfit? This is, admittedly, less clear. Parfit describes his own view as reductive rather than eliminative: I exist, but what this fact involves is nothing more than such and such. My existence is not a "further fact" beyond the existence of certain experiences, certain physical processes, and certain relations among them.12
Nevertheless, we're inclined to see Parfit as a self-denier. As we read him, the comfort to be gained from reconceptualizing one's own future non-existence as involving only the failure of certain relations to hold between these present experiences and any future experiences occurring after a certain date is connected with the thought that one's demise, so understood, does not involve any genuine ceasing-to-exist. This is why, as far as we can see, the proposed reconceptualization is supposed to make the prospect "less depressing". Bertrand Russell once wrote: "If we are to avoid a perfectly gratuitous assumption, we must dispense with the subject as one of the actual ingredients of the world".13 In his Tractatus Wittgenstein wrote "[t]here is no such thing as the subject that thinks or entertains ideas".14 If I exist, I do think and entertain ideas. (Perhaps this can be doubted, but we assume it to be true.) Anything that does think and entertain ideas is a subject, so if subjects do not exist, I do not exist. If I were to become convinced of the view expressed by Russell and Wittgenstein in these quotations, it seems I should then deny my own existence. Is Daniel Dennett a denier?15 As if anticipating this question, he provides the following little dialogue in Consciousness Explained: "Interlocutor: 'But don't I exist?' Dennett: 'Of course you do. There you are, sitting in the chair, reading my book and raising challenges'".16 What could be more straightforward? Dennett appears to be going on record as rejecting extreme self-denial. In context, things are less clear. Consider this little riff on "Call me Ishmael" from the immediately preceding page: "'Call me Dan', you hear from my lips, and you oblige, not by calling my lips Dan, or my body Dan, but by calling me Dan, the theorists' fiction created by […] well, not by me but by my brain, acting in concert over the years with my parents and siblings and friends".17 If Dennett is right, I am a "theorist's fiction". If I'm (just) a fiction, I don't actually exist. I'm like Moby Dick: my "existence" is nothing more than the existence of a fiction (novel, short story, etc.) having a certain character. So, we take it that a person who, like Dennett, maintains that he is only a "fiction" has said enough to justify being classified as an extreme self-denier.18 Thomas Metzinger announces on the first page of his Being No One that "no such things as selves exist in the world".19 The self is an illusion, he says, though it's not actually suffered by anyone because "there is no one whose illusion the conscious self could be, no one who is confusing herself with anything".20 Merely calling the self an illusion thus threatens not to do sufficient justice to its utter unreality, since the very idea of an illusion may suggest too strongly the existence of things capable of suffering illusions! Nothing capable of suffering an illusion exists, and this is clearly meant to include you and me. As he writes in a note late in the book, "Strictly speaking, no one ever was born and no one ever dies".21 It seems no stretch to call Metzinger an extreme self-denier.
2. WHY DENY?
Denial is, then, a real phenomenon, and by no means one restricted to the mentally ill. As we noted earlier, there seems to be nothing at all to be said in favor of denial in at least its most extreme form: the assertion "I do not exist" cannot be true when asserted. The propensity of some philosophers to say things of a self-denying kind thus warrants investigation. We argue two things. First, we argue that in at least some instances extreme self-denial arises out of a tendency to adopt a distorted or inappropriate view of what sort of thing I must be in order to exist and perhaps also to know that I exist. When extreme self-deniers talk of (or commit themselves to views that entail) their not existing, they are often, unwisely, presupposing standards for personal existence that are disastrously mistaken. Second, we argue that warrant for belief in one's own existence as a conscious subject is readily found in the experience of being a conscious subject. We are each manifest to ourselves as conscious subjects or as consciously experiencing. And so—in the words of Roderick Chisholm—"in being aware of ourselves as experiencing, we are, ipso facto, aware of the self or person—of the self or person as being affected a certain way".22 Here's a cartoonish example to illustrate what we have in mind by speaking of a tendency to embrace unwise or inappropriately restrictive standards for personal existence. Suppose that you once went through a Cartesian phase during which Descartes's claim that you are essentially an immaterial substance struck you with great force as being true. That was long ago. Now you are a materialist, but you're still infected with a significant residual Cartesianism. So it seems to you that if you really have being, you must be an immaterial substance. However, it also seems to you that there are no immaterial substances. So, you conclude that you don't exist. You become, as a result of accepting the Cartesian standard of personal existence, an extreme self-denier. It seems evident that a much more reasonable thought, given your present conviction that there are no immaterial substances, would be that you needn't be one in order to exist. You are directly aware of yourself as "feeling warm, feeling cold, […] enjoying and disenjoying, […] feeling happy or feeling sad";23 you are manifest to yourself as having certain conscious properties (being happy and so on); such manifestation provides "topic neutral"24 warrant: it justifies us in believing ourselves to exist and to have the relevant properties without justifying any particular view about what sort of thing we may be.
3. APPREHENDING ONE'S EXISTENCE: SELF-MANIFESTATION
In his commentary on Kant's first Critique, Strawson notes that the pronoun "I" can be used by a person "in consciousness of [himself] as being in such-and-such" a conscious state "without criteria of subject identity and yet refer to [himself]".25 One conception of introspection and self-ascription to be avoided, Strawson claims, is the assumption that in being aware of a conscious experience as my own, I need
be aware of myself as an "object of singular purity and simplicity".26 I can know that I am thinking of Paris without knowing anything about what sort of thing it is that is thinking about Paris. Introspection is neutral here. William James says that "[w]hatever I may be thinking of, I am always at the same time more or less aware of myself, of my personal existence".27 Alvin Goldman makes a similar remark: "The process of thinking about x carries with it a non-reflective self-awareness".28 James and Goldman are claiming the following: conscious experiences manifest themselves as modifications or alterations of the subject or person who has the experience. In experiencing I am aware of myself as experiencing. And it is in virtue of the fact that I am aware of myself as experiencing, James suggests, that I can be assured of my existence. Suppose a thought or other conscious event occurs in a stream of consciousness. James and Goldman then claim that this thought or other conscious event appears as a modification of its subject. James then suggests that insofar as the subject is aware of herself as modified, it is manifest to her that she exists. Or again: for it to be manifest that I am affected or am having certain experiences, it must also be manifest, of course, that I am something that exists. Call this the "manifestation of self" thesis. According to the manifestation of self thesis, when a person has a conscious experience, she apprehends herself as undergoing the experience, and therein apprehends herself as being altered or modified in some manner by the experience. What we should say is not merely that she is thinking but that she is aware of herself as thinking. To which James adds what is perhaps obvious given the preceding description: from the fact that she is aware of herself as thinking, it follows that she is manifest to herself as existing. So, the self-manifestation thesis may be summed up as follows: In conscious experience one apprehends oneself as existing insofar as one apprehends oneself as being affected or modified by experience. What kind of thesis is the self-manifestation thesis? It is a thesis about the phenomenology of experience, about the appearance of a person to herself in conscious experience. It is intended to be a truth about the conscious content and phenomenological character of experience. It is in virtue of reflecting on one's own experience that its truth is expected to be grasped or appreciated. The self-manifestation thesis is perhaps not obviously true. Might I be thinking of Paris without apprehending myself as thinking of Paris? Much depends upon just how explicit (or attentive) self-manifestation or self-awareness is supposed to be. By some standards, the proposition that experiences present themselves as modifications of their subjects (the James-Goldman thesis) may not hold true in every instance.29 Suppose, e.g., it were held that in order to apprehend myself in experience I must consciously entertain the proposition that I exist. We take such a demand not to be what James or Goldman have in mind when they refer to self-apprehension as being "more or less" or "non-reflective", respectively. In any case, without wandering off into a discussion of the explicitness or attentiveness of self-apprehension, it is clear, we believe, that at least oftentimes in undergoing conscious experiences we are aware of ourselves as experiencing—of ourselves as being
altered or modified in some way. So, for example, a toothache may be manifest as my toothache. I grasp my jaw with my hand, examine my open mouth with the mirror, and report "My tooth hurts" or "I have an ache in my tooth". The ache appears not just to or in me but for me, i.e. as mine. Or consider feeling warm, feeling cold, feeling happy or feeling sad. Such feelings may wear their being-mine character on their phenomenal sleeves. I delight in my own happiness. I write letters to friends telling them about it. I therein apprehend myself as feeling happy—as modified. So, let us state the manifestation thesis as follows: Oftentimes in conscious experience the subject apprehends herself as existing insofar as she apprehends herself as being affected or modified by experience. A toothache may appear not just as a toothache (whatever that might mean) but as her toothache. Assuming that the manifestation thesis qualified with "oftentimes" is correct, does this mean that we are out of the metaphysical woods of extreme self-denial? If a toothache appears as mine, does that mean it is mine and that therefore my existence as a subject is (at least while the toothache lasts) assured? This is not so clear. It seems a denier might with some warrant take the line that there is a profound epistemic gap between experiences that appear as modifications and the purported fact that such experiences really are modifications—between, that is, the "selfy" content of some experiences and the ontological underpinnings of experience. Phenomenology notwithstanding, it is the underpinning that the denier claims is self-less. Unger, for example, might not mind admitting that some toothaches appear as modifications.30 But he would deny that he exists, toothaches and other sorts of conscious experience notwithstanding. So the denier could easily enough grant the purely phenomenological thesis that some experiences appear as modifications of subjects, but deny that any experiences actually are modifications of subjects. Whether one knows it or not, she might say, one never apprehends oneself as being affected or modified.
4. CONSCIOUSNESS AND IMPERSONALITY
Suppose the deniers are right. Suppose, that is, that despite appearances, one never apprehends oneself as modified in one way or another. Experience must then be described in quite another way. How would this go? Perhaps, in this case, instead of saying "my tooth aches," we should say "a toothache is now occurring here," referring thereby only to the ache and not to the subject on whom it appears adjectival. We'll call this sort of impersonal, self-less description "Lichtenbergian," after Georg Lichtenberg, who urged us not to say "I think" or "I experience" but rather to say "thinking is going on," "experience is going on," and so on.31 Parfit endorses Lichtenbergian descriptions when he writes: "We could fully describe our experiences, and the connections between them, without claiming that they are had by a subject of experiences". "We could give what I call an impersonal description".32 Can the extreme self-denier somehow prune her introspective experiential reports of any reference, however indirect, to herself as subject of those
experiences33 and still have fully adequate reports? In the next section we consider some reasons to doubt that she could.
5. THE UNITY AND IDENTITY OF EXPERIENCE
We find it implausible that one could fully describe the underpinnings of conscious experience in impersonal or Lichtenbergian terms. Many philosophers say that a self or subject to whom conscious experience occurs must exist if conscious experience occurs. Galen Strawson writes: "A subject of experience […] is simply something that must exist wherever there is experience, even in the case of mice or spiders".34 Shoemaker claims: "Experiencing is necessarily an experiencing by a subject of experience, and involves the subject as intimately as a branch bending involves a branch".35 Chisholm writes: "What one apprehends when one apprehends […] love or hatred is simply oneself"; "Whether one knows it or not, one apprehends oneself as being affected or modified".36 We find these assertions of Strawson, Shoemaker, and Chisholm quite congenial, and so turn to a discussion of some arguments, one at least vaguely Kantian in spirit and the other quite definitely Chisholmian, in support of the idea that one cannot adequately describe experience without referring to a subject. The "Kantian" argument goes like this: conscious experiences often are as of objects in a spatio-temporally extended world. Lloyd refers to such experiences as of an objective world as "the inescapable experience of the real as real".37 In conscious experience, he says, "there is […] a world for us, reality for us".38 McGinn refers to the world as experienced as the outward face of conscious experience.39 Moreover, the world as experienced as real or outward typically consists of distinguishable elements—some cross-time, some at-a-time. At a time, I might be having several different thoughts, perceptions, and sensations. Suppose, to illustrate, that in addition to a toothache occurring, the following conscious contents occur: a twig is heard to snap, sweet chocolate is tasted, and rotten fish is smelt. Suppose, too, that what Galen Strawson calls an "understanding-experience" (an experience of immediately comprehending the meaning of an utterance) occurs at the same apparent time.40 An understanding-experience, unlike a tickle, taste, or smell, does not possess a proprietary or distinctive sensory or perceptual quality.41 It is the conscious experience of, say, hearing a statement in a language one understands and immediately comprehending it. Someone says "The smell of rotten fish will make you sick" and you instantly understand. The utterance's meaning "is as present within your experience as the sound of the words".42 Multiple experiential at-a-time contents, though distinguishable, typically form a conscious unity or overlap as part of experience as of one world. Typically, that is, there is an experienced relationship between the contents as occurring here and now. The relationship is that of a co-conscious or co-present toothache, fish smell, utterance-grasp and so on. The philosophic question is: what explains this unity of the experienced world?43 What unites various experiences so that they are as of a single world? The "Kantian" answer is that they are united because they are
manifest to a self or subject and this self or subject is itself single.44 They "attach" to it. The unity of experience therein presupposes an I for whom the experiences form a unified, simultaneously experienced whole. The presence of such a subject is needed to explain the fact that what is experienced is as of something unified (a world) rather than of separate worlds for each distinct conscious content (a world of tasty chocolate, a world of snapping twigs, a world of understood utterances and so on). In calling this answer "Kantian" we intend no scholarly claim. What we do urge is that, again, whether or not the I is this, that or another sort of metaphysical entity (immaterial substance or whatever), it must be there—it must exist—it seems, if the phenomenal unity of different simultaneous contents of experience is possible. So a Lichtenbergian description would necessarily fail to capture the unity of a world as experienced. To illustrate, suppose, for example, it is true that:
A toothache is now felt here;
A twig snap is now heard here;
Sweetness of chocolate is now tasted here;
Rottenness of fish is now smelt here;
The meaning of an utterance is now comprehended here.

Suppose that the ache is not experienced as disconnected from the sweetness or from the meaning-comprehension. Each occurs in the same phenomenal here and now. The toothache, the twig snap, the chocolate taste, the fish smell, and the utterance-grasp are simultaneous parts of one experienced world. How can we understand this experience of multiple objects in the here and now without presupposing that these are experiences of a single subject or "binding agent", as Barry Dainton puts it, "which (somehow) is responsible for (the) experiential unity"?45 An impersonal or Lichtenbergian mode of description misses something crucial. It is not just that an ache occurs while rottenness is being smelt. It is not just that a sweet taste occurs while an utterance is being grasped. It is that such elements occur as parts of a single experiential world. And it seems only in virtue of assuming that such a world appears to a subject or self that the phenomenal unity of experience can somehow be explained. There is a second argument for the necessity of the subject, with a somewhat different emphasis—not on the unity but on the identity or individuality of experience. Chisholm, Williams, van Cleve and others have argued that conscious experiences cannot be individuated in an idiom free of reference to "I" or to a subject of experience.46 Certain individuating facts about consciousness are I-bound or subject-presupposing. To illustrate, suppose that my tooth does not ache. Suppose I report the absence of an ache in an impersonal way. I say something like "An ache is not now felt here". Chisholm says that using an impersonal construction such as "An ache is not now felt here" would be "speaking rashly and non-empirically and going far beyond what [the] data warrant".47 How so? What does Chisholm mean by this charge? Well, suppose, unknown to me, tiny microbes are sentient.48
Suppose one such microbe is housed in my tooth, sharing its space with me. Suppose, too, that an ache is occurring in the microbe. If so, it would be false to say that no ache is now felt here, for an ache is now felt or occurs here, although in the microbe and not by me. More generally, the point of raising the possibility of the hypothetical microbe is that what "here" and "now" refer to when I speak of no ache felt here and now depends upon which subject fails to feel (or, in the contrary case, feels) the ache. The identity of a conscious experience depends in part upon to whom its content is directly manifest. The identity (the here and now) of the ache is inseparable from the identity (the presence or absence) of the subject for whom the ache is an ache. If I am to state just what I myself know to be true of a particular experience, I must report "I feel no ache", which mentions the subject that presumptively is eliminated via Lichtenbergian description. To say that no ache occurs here now is true relative to me as the reporting subject, but false of the (hypothetical) microbe housed in my tooth. The Chisholm-van Cleve argument that the individuation of experience requires reference to a subject of experience is, again, evidence for saying that, given the occurrence of conscious experiences, selves or subjects are real. They are not just in experience (in James's sense) but necessary presuppositions of experience. The argument also helps to show why, as P.F. Strawson says, "It would make no sense to think or say: This […] experience is occurring but is it occurring to me?".49 Yes, it is occurring to me. I am its subject. That's why I am in a position to refer to it as this experience. Its distinctive identity as represented by the demonstrative is inseparable from my own presence or existence as the subject modified by the experience.
6. CONSCIOUSNESS ELIMINATIVISM
What now are we to make of the doctrine of extreme self-denial? We have tried to motivate the following assumption: conscious experiences presuppose a subject—a real subject—of experience. We have also noted that some experiences manifest their own subject as the subject of experience: some experiences manifest themselves to me as mine, for example. So we pose the following challenge to the extreme self-denier: assume that conscious experiences occur. Assume also that conscious experiences require subjects. Assume, finally, that some conscious experiences are manifest as their subject's own. Now try to deny that you exist. Hard to do; very hard to do. This is because—as noted in the moderated James-Goldman thesis—some experiences appear as yours. If you deny that they appear as yours to you, because no you, no subject, exists and is a subject of conscious experience, then this flies in the face of how best to understand the unity and identity of conscious experience. Note well: conscious experience is subject-presupposing and not just subject-appearing (in the experience itself, oftentimes). "Well and good," the denier might say: "Perhaps it is true that if conscious experiences occur, they require subjects; perhaps it is true that such subjects would
sometimes appear to themselves to be subjects of experience; perhaps it is true that I cannot appear to myself as a subject of experience without myself existing. What follows? It’s open to me as a denier simply to deny conscious experience altogether, which I hereby do. It’s a fiction, I say. And if there are no conscious experiences, there are no subjects; and if no subjects, then no subjects to which conscious experience is manifest as their own.” We’ll let the denier’s imagined use of the first-person pass. That to one side, how good are the prospects for this “reply from consciousness eliminativism”? How could a denier go about arguing in its favor? There are various ways in which she might proceed. She might conceive of alleged conscious experience as possessing a certain property (say, that of being ineffable) and then deny that anything has that property. Or she might assert that conscious experience must possess several properties jointly (being ineffable and being subject to privileged first person introspective description, for examples) but then deny that these properties are compatible. Or she might claim that conscious experience should be eschewed in a scientific materialistic world view and then argue in favor of the worldview. The strategy for rebutting her arguments would be to find a description of consciousness she would accept (as getting the idea right) and then to argue either that the description is satisfied (e.g., conscious experience really is ineffable or truly does fit into a scientific worldview, etc.) or that the description does not identify an essential feature of consciousness. Consciousness denial, of course, is a startlingly bizarre prerequisite for endorsing extreme self-denial. It is telling, though, that at least one denier seems to believe that it is necessary to deny the reality of conscious experience; at any rate, he certainly does not flinch from such denial. Dennett writes: “What about the actual phenomenology?” “There is no such thing”.50 According to Dennett there are no conscious experiences, and there is no phenomenology. If in some sense there seems to be a phenomenology, that’s because (false) memories and judgments of conscious experience are being formed instantly and continuously by the brain and create a kind of non-phenomenal illusion of conscious experience.51 Individual experiences seem to occur but don’t. Dennett’s picture of consciousness is intricate and difficult to understand. More than one philosopher has called it utterly wrong-headed.52 We have no space to discuss the view here, or to defend our interpretation of Dennett as a consciousness denier.53 It’s just that Dennett usefully manages to pinpoint precisely an easily overlooked move open to an extreme self-denier: consciousness eliminativism. How effective is this Dennett-inspired defense of extreme self-denial? Obviously, it can be effective only to the extent that consciousness denial is plausible, and it is just here where things begin, in our view, to look especially bleak for the denier. Even so arch a physicalist as Quine, to whom Dennett alludes in his discussions of “quining qualia”,54 was not an anti-realist or eliminativist about consciousness. Quine noted as early as 1952 (in an essay presented that year) that his brand of physicalism was not meant to “deny that we sense or even that we are conscious”.55 Near the end of his career Quine wrote as follows: “I have been TP
accused of denying consciousness, but I am not conscious of having done so”. “We know what it is like to be conscious, but not how to put it in satisfactory scientific terms”.56

Where are we and how did we get here? We have recognized that Lichtenbergian or impersonal descriptions of conscious experience are inadequate to describe certain facts about experience. The subject to whom conscious content appears is a proper and essential part of experience as well as of its metaphysical presuppositions. There is also another key idea we have been advertising here. The oftentimes self-presenting modificational character of experience makes the rationale for recognizing a real me much stronger than anything that might be said in favor of extreme self-denial. It is evident that when a conscious experience directly presents itself as a modification of its very subject (of me, say), such an experience is that of that subject (it is mine). And for those self-deniers unsuspicious of apparently modified non-existents, consciousness eliminativism seems to be the only option.

Note that the proposition that we are subjects of consciousness is not a very substantial metaphysical claim. It is austere and metaphysically noncommittal. Strawson claims that there is a natural and powerful temptation for subjects to believe that they are aware of themselves as very special sorts of individuals—substances of singular purity, perhaps—precisely because self-ascription of thoughts and experiences involves no use of a criterion of subject identity.57 Nothing could be further from the introspective truth. That a loving feeling appears directly as mine tells me only that I am undergoing the feeling. It says nothing more generally about what sort of thing I am. All sorts of metaphysical views are compatible with what we know about ourselves when we know of ourselves as subjects of experience.
7. MY EXISTENCE AND MY NATURE

We have argued that we are (at least when conscious), but we have said very little, in the chapter, of what we are other than speaking of ourselves as subjects of experience.58 So, more generally and metaphysically fulsomely, what am I? What are we? I certainly appear to myself to be, and believe myself to be, something of a sort that persists through change, something that is embodied and can move its body through a spatially and temporally extended world, and thus something that persists through alteration of motion and position, and is much else besides. Suppose materialism is true. Material candidates for the subject of experience include: the entire nervous system, the brain, perhaps one or another hemisphere, or perhaps the complete living animal organism.59 It may be noted that identifying a subject with something (such as an animal) that has robust physical boundaries and cross-time duration comports with our sense that we persist and are embodied and embedded in the world. “We (therein) may talk confidently, of an undeniably persistent object […] who perceptibly traces a physical, spatio-temporal route through the world”.60
Should we accept materialism about the conscious subject? If not materialism, what alternative non-materialist view is plausible? Vexing metaphysical questions remain. We have no desire to suppress further speculation about candidates for a more fulsome metaphysics of personal existence. However, as we have tried to show in this chapter, no such fulsome account is needed specifically to defeat the case for extreme self-denial and to undermine the proposition that neither you nor I exist. As subjects we ourselves and our conscious experiences are intertwined. So unless some way can be found to prune consciousness of its subject or to expunge consciousness itself from the World of True Being, the conclusion must be that the basic idea of extreme self-denial is deeply flawed, even absurd.61
NOTES

1. This is a thoroughly co-authored paper. The order of authorship was determined arbitrarily.
2. Flanagan (1996, p. 7).
3. “Yet now I know for certain both that I exist and at the same time that all such images and, in general, everything relating to the nature of body, could be mere dreams” (Descartes [1641] 1985).
4. Ayer (1956, pp. 51-52).
5. Young and Leafhead (1996); Stephens and Graham (2004).
6. Unger (1979, p. 234). Unger has since abandoned this view (see, e.g., Unger 2004). In the current paper unqualified references to Unger will be to the Unger of the 1979 paper.
7. Kenny (1988, p. 4).
8. van Inwagen (2002, pp. 175-176).
9. See, again, Flanagan (1996).
10. Unger (1979).
11. Parfit (1984, pp. 281-282).
12. See Parfit (1984, part 3, throughout).
13. Russell (1959, p. 136).
14. Wittgenstein (1951, § 5.631).
15. Flanagan (1992).
16. Dennett (1991, p. 430).
17. Ibid., p. 429.
18. In an email exchange with Dennett one of the co-authors of this paper asked him why he could not accept a characterization that included the following: “I am not a fictional entity. I am real. Or more precisely, if living human animals are real, I am real.” In response Dennett said he was not sure that he disagreed and that he might be willing to say that he exists. So, perhaps Dennett is best understood, then, as the sort of theorist who denies that selves exist (except as fictions), but does not mean that he himself does not exist. We are indebted to Dennett for the correspondence. Because of it we do not wish to insist that he continues to hold the extreme form of self-denial that we find in Consciousness Explained.
19. Metzinger (2003).
20. Ibid., p. 626, emphasis in original. See also Graham and Kennedy (2004).
21. Ibid., p. 633, n. 7.
22. Chisholm (1978, p. 144).
23. Chisholm, in Hahn (1997, p. 28).
24. Smart (1959).
25. Strawson (1966, pp. 165-167). See also Bennett (1974).
26. Strawson (1966, p. 166).
27. James ([1892] 1961, p. 42).
28. Goldman (1970, p. 96).
29. Although see Kriegel (2004) for a detailed and subtle argument that the proposition does, in fact, always hold true.
30. Unger (1979).
31. Shoemaker (1963); Williams (1978); Strawson (1994); van Cleve (1999).
32. Parfit (1984, p. 225).
33. van Cleve (1999).
34. Strawson (1994, p. 133).
35. Shoemaker (1996, p. 10).
36. Chisholm (1978, p. 145).
37. Lloyd (2004, p. 250).
38. Ibid., p. 251.
39. McGinn ([1988] 1997). See also Strawson (1959); Grush (2000).
40. Strawson (1994, pp. 5-13).
41. Strawson (1994, p. 7).
42. Dainton (2000, p. 11).
43. Kant does not present the transcendental deduction as an “inference to the best explanation”, hence our continued use of “scare” quotes.
44. Again, this would be at least contentious and probably downright wrong as a claim about what Kant actually said. To say that these experiences are all manifest to a single subject is, on most accounts, to go well beyond what Kant meant by saying that the “I think” must be able to accompany all my representations. For a good discussion see Bermúdez (1994).
45. Dainton (2000, p. 27; parenthesis added).
46. Chisholm (1976); Williams (1978); van Cleve (1999).
47. Chisholm (1976, p. 41).
48. The example is adopted from van Cleve (1999).
49. Strawson (1966, p. 165).
50. Dennett (1991, p. 365).
51. See Seager (1999, pp. 107-131), for relevant discussion of Dennett.
52. Flanagan (1992); Galen Strawson (1994); Seager (1999).
53. As we noted earlier, whatever Dennett may have once thought, he may no longer wish to say that he is a fiction.
54. See, e.g., Dennett ([1988] 1997).
55. Quine ([1953] 1966, p. 213).
56. Quine (1987, pp. 132-133).
57. Strawson (1966).
58. McGinn (1999, p. 165).
59. For an influential recent discussion of the possibility that we are human animals, see Olson (1997).
60. Strawson (1966, p. 164).
61. An early version of this chapter was read at the University of Alabama at Birmingham Conference on Philosophical Issues in the Biomedical Sciences in May 2004, by the second-named author. Thanks are due to the audience at the conference for helpful discussion. Thanks are also due to Adrian Bardon, Owen Flanagan, Richard Garrett, Stavroula Glezakos, John Heil, Uriah Kriegel, Christian Miller, Win-Chiat Lee, and Dean Zimmerman for comments on earlier drafts.
CHAPTER 18
EMPIRICAL PSYCHOLOGY, TRANSCENDENTAL PHENOMENOLOGY, AND THE SELF
Stephen L. White
Contemporary experimental psychologists speak frequently of visual perceptual experience in ways that suggest its content is rich. Elizabeth Spelke, for example, says that the infant sees “the moving object as the Agent”,1 and Rochel Gelman, Frank Durgin, and Lisa Kaufman speak of the “perception of an impossible event”,2 “the causal impression of launching”,3 “the expected perception (Animate, Inanimate, Neutral)”,4 and “an animate percept”.5 Other researchers speak not only as though we perceive such properties as mechanical causation, animateness, agency, and agent causation, but also relations of power, goals, and values.6 And “perceive” evidently means perceive directly. That is, such properties are perceived in their own right, not in virtue of something else perceived more directly.

In this chapter I shall give a transcendental argument (one which is a priori and based on the conditions of our having a meaningful language) that perception is indeed rich in this sense. I shall then argue that rich perception is crucial in answering Hume’s skepticism about the self.

The notion of rich perception (in the visual case) is defined in contrast to the empiricist conception. According to the latter, what is given most directly and most immediately in visual experience is to be understood in terms of analogies between a mental visual field on the one hand and camera images (for Locke the camera obscura7), painted images in the Renaissance and post-Renaissance traditions of realism, or retinal images on the other. On these grounds, Hume denied, for example, that there was a perception of causation or that a causal relation could be given in perception. If, for example, we see one billiard ball collide with another, causing the motion of the latter, all we are given in visual perception (all we have a visual impression of) is one event followed by the other.8

The research program in empirical and experimental psychology that begins with Michotte challenges these claims. Michotte says explicitly that (pace Hume) there is a visual impression of causation.9 And he pioneered a research methodology to explore and characterize the temporal and spatial parameters within which the impression is induced. Similarly, J.J. Gibson’s work ascribes to subjects perceptions of affordances, which we might gloss as the functionally relevant properties of their external environment. A structure of rock, for example, is not given neutrally as a solid object with a certain three-dimensional geometry, but as a seat or a stairway, a bridge, a shelter, or a hiding place.10 And such forms of rich perception have obvious analogies to the types of perception studied by the major figures in the
phenomenological tradition of philosophy. Sartre, for example, says “When I run after a streetcar, […]. There is consciousness of the streetcar-having-to-be-overtaken, […]. In fact, I am then plunged into a world of objects; it is they which constitute the unity of my consciousnesses; it is they which present themselves with values, with attractive and repellant qualities […]”.11

But why, given these empirical research traditions that support the idea of rich perception, should we look for philosophical, or a priori, or transcendental support? It might seem analogous to looking for an a priori deduction of the number of planets, after the empirical methodology necessary to determine such results had been well established and well understood. There are three reasons, however, for rejecting this analogy. First, many contemporary philosophers offer accounts of perceptual and/or qualitative content that are incompatible with rich perception. In fact, many offer accounts that are incompatible with perceptual or qualitative content (as opposed to linguistic/descriptive content) altogether. Second, it follows that there is no consensus about the nature of qualitative states or the qualitative or sensational content of perceptual states. Third, I shall argue that perceptual content is governed by considerations of rationality and (hence) by normative considerations that have no counterpart where the subject matter of physics is concerned.
1. THE PHENOMENOLOGICAL METHOD

The response to the skepticism both about qualia and about rich perception lies, I shall claim, in the existence of a phenomenological method that is responsive to both sets of concerns. (I can address only the latter concerns here.) I shall argue for a phenomenology of visual perception which is both deflationary and inflationary relative to that of the empiricist tradition. This means that in some respects our visual experience is more impoverished than that tradition allows, but in others it is much richer. In this section I shall set out the method, which involves both a priori and a posteriori elements. In the following sections I shall outline the transcendental argument that there must be rich perception.

1. Informal experiments. These experiments are not couched in a scientific psychological language or performed in accordance with any formal methodology or apparatus. Nor are they classical philosophical thought experiments. They are, unlike experiments whose descriptions one simply reads, exercises in which students and readers can participate directly. The experiments are, nonetheless, perfectly and straightforwardly empirical, and could easily be formalized for the sake of precision. Their real interest, however, lies in the power of their appeal to our intuition and imagination. For reasons which I cannot explore here, the empiricist or sense-datum theory has a hold over our imaginations that no amount of empirical or conventional psychological literature seems able to dislodge. And the imaginative and perceptual experiences involved in participation play a crucial role in opening us to the possibilities implicit in paradigm-shifting philosophical positions.
Deflationary experiments include:

Car windshield. Imagine being asked to draw the best approximation using four straight lines of the apparent shape of your car windshield as viewed from the driver’s seat. Responses vary widely, the most common being the “real shape”—a trapezoid symmetrical around the vertical axis, with the base longer than the top. The apparent shape, however, is different in three respects: the top and the base converge to the right, the top is longer than the base, and the angles of the sides are not symmetrical around the vertical axis. In my experience with a large number of students, none has ever gotten all three differences, and relatively few get even two.

Hand and foot example. If it is thought that the problem is one of memory, rather than perception, try to estimate, standing up, the ratio of the apparent length of your hand (wrist to the end of the middle finger) when it is held four inches from your eye to the apparent length of your foot. Estimates often vary by a factor of ten, even though memory plays no role (the correct ratio is between twenty and thirty to one).

On the inflationary side we have:

Thurber drawings. James Thurber’s drawings depict people whose expressive self-presentations we read immediately—for example, the smugly confident and overly intense demeanor of someone who expects to dominate a social exchange. (We take them in easily in 1-2 seconds.) But such an interval is a small fraction of the time it takes to determine what bodily and facial features determine these expressive properties (for example, whether the facial expression is more a matter of the eyes, the mouth, or some aspect of the relation between the two). And in many cases it is even more difficult to say what it is about the drawing that suggests these features. (This is a common feature of caricatures.)12
Although these experiments are hardly conclusive, they suggest the implausibility of the central assumption of the sense-datum theory—that what we perceive most directly, and to which we have most unproblematic access, are the shapes, colors, and relative sizes of the “colored patches” alleged to constitute our (mental) visual field. We normally perceive many kinds of properties of external objects directly in visual perception. And it may be only with difficulty, if it is possible at all, that we reconstruct a sense-datum basis for such experience.

2. Thought experiments and informal arguments. Classical philosophical thought experiments and arguments also support the inflationary/deflationary phenomenology. As a deflationary example we have:

Wide-angle lens argument. In using pictorial metaphors to describe the character of our visual experience we feel inclined to say that the visual angle we take in is approximately 150°, that all the shapes look completely natural, and that they seem to remain stable even as we move our heads. But this combination of features, about which there is virtually no disagreement,
actually undermines any literal application of pictorial analogies. A camera lens that produces natural-looking shapes even at the periphery of the frame is approximately 50mm for a 35mm camera, and it has a visual angle of approximately 46°. In order to get a visual angle of approximately 150° we need a lens of approximately 10-12mm—midway between an extreme wide-angle and a fish-eye lens. Such lenses produce extremely dramatic distortions everywhere in the frame. And even with an only moderately wide-angle lens, panning across a scene with a movie camera produces the effect of objects that seem to change shape as the camera moves. It is in fact impossible to combine in two-dimensional visual images both the visual angle we take ourselves to see and the naturalness and stability of the shapes that objects appear to have. And, needless to say, this is not a fact about current lens technology, but about the projection of three-dimensional space on a two-dimensional surface—a fact of which Renaissance theorists of perspective were well aware.13
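These figures follow from elementary projection geometry, and it is worth making the arithmetic explicit. The field-of-view formula below is standard optics; the calculation, however, is ours, not the author’s (we assume the usual 43.3 mm diagonal of the 35 mm frame). For a rectilinear lens of focal length $f$ on a frame of diagonal $d$, the diagonal angle of view is

$$\theta = 2\arctan\left(\frac{d}{2f}\right).$$

With $d = 43.3$ mm and $f = 50$ mm, $\theta = 2\arctan(0.433) \approx 47°$, the figure cited above. Run backwards, $\theta = 150°$ would require $f = d/(2\tan 75°) \approx 5.8$ mm for a strictly rectilinear projection; lenses in the 10-12 mm range reach such angles only by relaxing rectilinearity in the direction of the fish-eye. And since a rectilinear image places a point seen at angle $\varphi$ off-axis at radius $r = f\tan\varphi$, the radial scale grows as $dr/d\varphi = f/\cos^{2}\varphi$: at $\varphi = 75°$ it is roughly fifteen times the on-axis scale. That is the “dramatic distortion” in question, and it is indeed a fact about projection, not about lens technology.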
Inflationary examples include:

Stone at the end of the stick. When we are driving and experience a collision, where do we feel it? In line with the empiricist theory, the temptation is to say that we feel it in all those parts of our body that are in contact with the interior of our own car. But it seems far more accurate to say that under normal circumstances we would feel it in the fender of our car, just as we say that the blind person feels the stone at the end of his stick.

And the following example is both inflationary and deflationary.

Bradley example. Bill Bradley has described the process by which he trained himself to take in a wider-than-normal angle of vision, and indeed at one point speaks of seeing the whole basketball court from a location on the court.14 But is it even coherent to suppose that we might have 360° vision (without, of course, eyes in the backs of our heads)? Those in the grip of pictorial analogies for visual experience or who are committed to reading the character of such experience off the physical structure of the eye will say no. As we have noted, however, we seem to see everything within a visual angle of approximately 150 degrees, and everything seems to be in focus. What is in sharp focus at any given moment, though, occupies a visual angle of only 2-3 degrees. Thus the visual experience of the space in front of us is a construction. The brain integrates over time to produce a unified visual field from what is a mosaic or patchwork. If this is the case, however, there is no reason in principle (abstracting from computational limitations) why—given that Bradley is constantly glancing backward—the brain could not integrate over a somewhat longer period of time to produce a unified field of 360°. Given such experience, we would, of course, be insensitive to some events occurring behind us when we were facing forward. But to say that our ordinary visual field is a construction is to say that an exactly analogous point
applies to our experience of the space before us which is outside the area of sharpest focus.

3. Testimony (open to refutation). We have already seen one instance in which our beliefs (and hence our testimony) about the character of our first-person, present awareness is subject to correction. We are strongly inclined to regard our visual experience as pictorial, but the wide-angle lens argument and the Bradley example show that we cannot be right. And notice the difference between this point and the point that we seem to be given the entire visual field in focus when we know that at any given moment only a very small visual angle is sharply focused. In response to the latter point it could be said that whatever the facts about the eye, the (mental) visual field is completely in focus—there is, after all, nothing to prevent our hallucinating sharp edges where none exist on the retinal image. (This would be analogous to a computer-enhanced digital photograph.) But this reply is not available in the former case. For the point in that case is not that the retinal image lacks the properties that we ascribe to the visual field. It is that no possible picture (or set of pictures) could have those properties, and thus that whatever characterization we give to our visual perceptual experience, pictorial metaphors and analogies are in important respects completely inappropriate.

4. Empirical experiments based on testimony. Many of the contemporary experiments involving the perception of causation in the tradition of Michotte depend on the subjects’ testimony as to their impression of the events depicted. Often the range of possible descriptions is open-ended, and there is often significant work to be done in interpreting and coding the responses. Examples in which this is problematic include the following. As C.L. Hardin reports, under unusual laboratory conditions subjects report seeing color fields as “reddish green”.15 This is not obviously an intelligible description of a possible experience, and no amount of explanation at the subpersonal level will make it so. What is required is that we explore the issue much further. Is the apparently contradictory description merely an artifact? Is it, for example, merely a product of an experimental situation in which subjects feel compelled to express themselves succinctly and thus produce descriptions whose apparent contradictions would disappear were they allowed to express themselves at greater length and in more nuanced ways? Or are there false assumptions about what counts as an intelligible description of a visual experience, such that if we abandoned them the appearance that the description is contradictory would disappear (as in the Bradley example)? Or, as is more likely, is it some combination of the two?

Consider an analogy: Before we learn any modern physics we may be inclined to view talk of “curved space” as a category mistake. We may say that space itself cannot be curved, though lines and objects in space obviously can. However, in learning the relevant physics, non-Euclidean geometry, and philosophy, we learn the depth of the requirement that our definitions be operationalizable. And we come to appreciate the pointlessness of holding an overly complicated physical theory (with elements which seem to have no physical reality, such as universal forces), merely to preserve an a priori intuition about geometry.16
Imagine, then, asking the subjects whom Hardin cites (and who say that the experience is one they could never have imagined prior to having it) whether the apparent contradictoriness is more like the apparent contradictoriness of curved space or of 360° vision, or whether it is more like the apparent unintelligibility of thinking about a round square or that p and not-p are both true. With enough possible points of comparison of the Bradley or non-Euclidean geometry types, it seems quite likely that subjects could produce extremely interesting and insightful responses—responses that would be unavailable if we immediately change the subject by switching to the subpersonal level.

Similar points might be made about subjects’ reports of pain that they don’t mind, or such standard cases in the philosophical literature as seeing a hen with speckles without seeing it as having a determinate number of speckles.17 And the constraints of coherence that we have been considering might similarly be observed in order to elicit richer and more detailed descriptions of the experience of participating in split brain experiments or acting on post-hypnotic suggestions.
5. Clinical research based on testimony. Unlike the experimental literature, the clinical literature is rife with rich descriptions that are relevant to resolving the puzzles raised by some of the responses in the former domain. Renee, for example, in The Autobiography of a Schizophrenic Girl, describes her experience by saying:

[…] I saw a boundless plain, unlimited, the horizon infinite. The trees and hedges were cardboard, placed here and there, like stage accessories […].18

I saw things, smooth as metal, so cut off, so detached from each other, so illuminated and tense that they filled me with terror. When, for example, I looked at a chair or a jug, I thought not of their use or function—a jug not as something to hold water and milk, a chair not as something to sit in—but as having lost their names, their functions and meanings […].19
These descriptions, which make little sense on an empiricist conception of experience, are perfectly intelligible against the background of an account of rich perception, since what Renee is describing is precisely the loss of affordances. A similar point applies to Oliver Sacks’ description of his experience of hemianopia with hemi-neglect: “The pear tree was gone, but so was the place where the pear tree stood. There was no sense of a place vacated; it was simply that the place was no longer there”.20 Again, this seems paradoxical or incoherent. For how can a place or a space (as opposed to something in space) disappear? This account is particularly puzzling against the background of the empiricist conception of visual experience, since the disappearance of the sense-data in the left side of the visual field could never explain how the space to the left of the subject might be lost. But this is precisely where the earlier discussions of 360° vision and curved space are relevant. We can take as our clue what Sacks adds to the phenomenological description of his condition.
[…] Knowing this [his condition] intellectually, did nothing to alter the hiatus in perception, or, rather, the hiatus in sense, the feeling that there was nothing other than what I saw, and that it was therefore senseless to look at, or look for, the “left” half of the room, so-called. With a violent effort of will, like a man forcing himself to move, inch by inch, in a nightmare, I turned my head towards the left.21
Imagine, first, that one’s attitude toward the space to one’s left is something like extreme weakness of the will or what one might experience in depression—i.e., one can barely make oneself look or turn in that direction, even when one knows one would be better off for doing so. Now imagine an even more extreme case: the space to one’s left is no longer given as affording opportunities for action (in roughly Gibson’s sense of “affordance”) at all. And now consider: in what sense does the space still exist?

It is an implication of the transcendental argument for rich perception that one’s lived, personal space (the space within which one can perform basic actions) is prior to, and more basic than, external, geometrical space. Indeed, as we shall see, it is the personal, lived space of affordances—opportunities for action—that makes action possible and provides the grounding for a meaningful language. And this personal space of affordances is itself defined by our skills and capacities for action—the opportunity for escape that the space affords is only an opportunity for a creature with a certain set of agential capacities and liabilities. Thus if the possibility of action toward the left has disappeared, then the space itself, and not just the objects in it, has ceased to exist.

Such an analysis serves as an extremely plausible response to another example. As Sacks describes his attempt to walk after a fracture that, as we might say, removed the leg from his subjective or lived bodily image:

The floor seemed miles away, and then a few inches; the room suddenly tilted and turned on its axis […] then I perceived the source of commotion. The source was my leg—or, rather, that thing, that featureless cylinder of chalk which served as my leg […]. Now the cylinder was a thousand feet long, now a matter of two millimeters; now it was fat, now it was thin; now it tilted this way, now it tilted that. It was constantly changing in size and shape, in position and angle, the changes occurring four or five times a second […].22 But what could produce such an explosion in my mind? Could it be a mere sensory explosion from the leg, as it was forced to bear weight, and stand, and function, for the first time? Surely the perceptions were too complex for this […]. The chaos was not of perception itself, but of space, or measure, which precedes perception. I felt that I was bearing witness, even as I was undergoing it, to the very foundations of measure, of mensuration, of a world.23
The puzzling aspects of this description disappear when we focus on what was apparent in the earlier example: that the lived space and the lived body are connected not merely externally (causally), but internally—they are mutually constitutive of one another. And to the extent that Sacks is experiencing a radical revision of his bodily image (of the lived body), external, lived space itself must be experienced as in the process of radical transformation. And if it is objected that one’s commonsense conception of space is Euclidean and that one knows that this hasn’t changed, it may be replied that, as Sacks points out (and as we have seen), one’s intellectual grasp of the situation may be independent of its phenomenology.

6. Empirical experiments, developmental studies, and clinical cases based on indirect methods. Indirect sources of access (i.e., not via testimony) include such things as galvanic skin response and classic timed behavioral (nonverbal response) studies. Strictly behavioral manifestations alone cannot substitute for testimony, since we cannot normally determine whether the manifestation is of something available to the subject at the personal level. But there are also outputs which fall short of full-blown testimony but which are manifestations of subjective experience by definition. Particularly interesting in this context are the current studies based on the amounts of attention that infants give to various kinds of perceptual phenomena—particularly those involving so-called impossible events. It is unclear to what extent these responses should be considered behavioral, since they are likely to involve the experimenter’s interpretation of what counts as a manifestation of attention, and there may be no criteria codified in completely behavioral terms.

2. THE TRANSCENDENTAL ARGUMENT FOR RICH PERCEPTION

The argument in outline is as follows:

A. Language → demonstrative access
B. Demonstrative access → agency
C. Agency → phenomenology of agency
D. Phenomenology of agency → rich perception

(“→” is to be read as “presupposes”.)

About premise (A) I shall be brief: Language, if it is to be meaningful, if it is to be more than a formal calculus, must be grounded. That is, in addition to the word-to-word connections of the kind supplied by lexical definitions, there must be some connections between language and the world. There must be connections between some words and the world unmediated by any further linguistic content—connections of the kind we ordinarily call ostensive or
demonstrative. But our ascription of meaning to a subject’s words is constrained by the requirement that we make the subject rational (by and large). This means we must be prepared to deal with the demonstrative versions of Frege’s problem. For example, one points out the window to one’s right and says “that ship is owned by a multinational corporation” and points out the window to one’s left and says, “that ship is not”, while (though one is unaware of it) one has pointed to the bow and stern of the very same ship, and said logically incompatible things about it. Since one could obviously be nonetheless rational, there must be different ways in which the ship presents itself that explain and rationalize the apparently contradictory beliefs. We cannot appeal to the different descriptions one might give of the ship, since we need a demonstrative case in order to show how language can be grounded. The temptation, of course, is to say that the ship is given via two different sets of sense-data, which could have been caused by different ships. But we have already seen some of the objections to this kind of phenomenology of perception.

The solution, as I have argued elsewhere, is to say that the ship is given to one via two different modes of presentation (which for all one knows could have been modes of presentation of different ships) in virtue of its presenting two different packages of basic action possibilities. One can point to the bow and indicate its spatial extent, and one knows how to move toward it, or guide something or someone else toward it. If one’s view is obscured, one knows how to move so as to see it from a different angle, etc. Moreover, the stern presents a different bundle of action possibilities. And note that these are action possibilities, available to the subject. Mere causal dispositions, which are not themselves directly available to the subject, could not play the same rationalizing role.

But what is the connection between agency, the phenomenology of agency, and rich perception? Imagine a passive subject who agrees with us about all the objective facts and who understands the inferential role of the agential language, but who doesn’t understand action. The passive subject says things like: “If everything is determined there is no such thing as action, and if there is randomness, this doesn’t help”, and “Quantum indeterminacies aside, the future is as fixed as the past”. If we assume, as we can, that the passive subject understands the inferential role of agential language, what must be missing is something experiential. The passive subject’s experience, we can imagine, is just the most extreme version of Renee’s that we can make coherent to ourselves.

What, then, do we have to add to the experience of the passive subject to get the experience of agency? Imagine that you are one personality in a multiple personality subject and are conscious while the movements of the shared body are being controlled by another. If you have no access to the intentions of the other subject, then interpreting the bodily movements will be like interpreting those of another subject altogether—you will feel that you are being moved like a puppet for reasons to which you have no special access. Even if you have access to the other subject’s conscious intentions formulated explicitly in linguistic-descriptive terms (something we have for only a small portion of our own actions) this will not be sufficient to make the bodily movements intelligible. This is because you could still lack such perceptual
components of the controlling experiences as: foreground and background relations, aspects, pragmatic spatial and temporal relations, opportunities and liabilities (Gibsonian affordances), expressive properties and significance. If, however, a meaningful language presupposes agency, and agency presupposes rich perception, then we have the argument we wanted: a transcendental argument that our phenomenology must be one that is inflationary relative to classical sense-datum theories—i.e., one that includes rich perception.

3. APPLICATION TO THE SELF

Hume’s skepticism about the self (in any sense connected with the notion as it is ordinarily understood) is based on a simple argument: We aren’t given anything in introspection that could be a subject of experience. Hume concludes that we are just bundles of experiences.24 Since a bundle couldn’t be a subject and neither a bundle nor an experience could act, this amounts to skepticism about the self. Indeed, Hume concludes on this basis that we cannot even give sense to the term. Hume apparently never considered the possibility that our access to our selves might be via our perceptual access to our own bodies construed as objective physical entities in the world, and for good reason. No such ordinary object could answer to our notion of a subject of experience—at least under that description or conception.

But Hume looked for our access to a genuine self in the wrong place; there is an alternative to the assumption that if we are given a genuine self it must be through introspection. The alternative is that we are given to ourselves (as agents) implicitly in the structure of affordances. In being given opportunities, we are given ourselves implicitly as having certain powers. And in being given liabilities, we are given ourselves as in certain ways vulnerable. This is simply the point we saw above: that our lived bodies and our lived spaces are mutually constitutive. We are given to ourselves as agents (and thus as subjects) in being given an agential world—a meaningful world of opportunities for actions we are capable of performing and of things which are worth doing, and a world in which it matters what we choose.

If, however, we follow Hume in refusing, at least in the first instance, to identify the self with the objective entity which is the physical body in the physical world, then what is the relation between the two? The question is answered by considering the relation between the two conceptual schemes we have been discussing: that of objective physical science and that associated with the agential perspective. The relation, I shall say, is a dialectical one. What this means can be explained in part by reference to the notion of incommensurability as it grows out of the philosophies of science and of theory-laden perception associated with such philosophers as Kuhn and Hanson.25 The notion of a dialectical relation, however, goes well beyond anything discussed in the context of contemporary analytic philosophy of science.

To say that perception is theory-laden in Hanson’s sense is to say that it is rich in the sense defined. Where the person unschooled in physics sees wires, metal,
and glass, the expert sees an x-ray tube.26 And the connection to incommensurability is immediate. If the proponents of radically different theories literally see different worlds, then each theory generates its own evidential base, and there is no guarantee of a neutral body of data to which such theorists can be referred in adjudicating disputes. This is not to say, and it does not follow, that such disputes can never be settled, or that such theorists can never do other than talk past one another.

In the case of the agential and the objective perspectives, we go beyond the notion of incommensurability in this sense. In this special case, where our agential perception is, we might say, theory and practice laden, each perspective has the potential, if totalized, to undermine the other. As we saw in the discussion of the passive subject, if we regard the universe as deterministic, it seems that there is no room for genuine agency or action. And it seems, moreover, that admitting randomness is no help whatsoever in explaining how we could be the authors of our behavior. It seems, therefore, that no account of the underlying nature of the objective, physical universe could underwrite a view of ourselves as agents, as the genuine authors of our actions, and as responsible for the consequences. On the other side, to treat science as simply one of the many things we do is to give science a pragmatic cast, seemingly at odds with its most ambitious claims to objectivity.

But to say the relation is dialectical goes beyond even this. For although each perspective has the potential to undermine the other, each arises as well, and by necessity, from the other. Self-criticism, which is constitutive of intentionality, generates objectivity about ourselves. Criticism of ourselves as reasoners, for example, is quite capable of justifying the objective study of ourselves as reasoners, and such study can, and often does, lead to an improvement in our agential capacities and capabilities. By the same token, however, critical reflection on the rationality and meaning of our scientific theories presupposes the agential perspective. For, as we have seen, agency is presupposed by the meaningfulness of our language, and, a fortiori, by the meaningfulness of our scientific language.

The upshot, then, is not only dialectical—each perspective gives rise to its opposite in virtue of its deepest and most intrinsic nature—it is unstable in the sense that there is no permanent and stable balance to be struck between them. But the lack of a permanent division of labor, fixed and established once and for all, is no bar to the heterogeneous phenomenological methodology outlined above. We work within the assumption of the subject’s rationality and on the basis of a coherentist methodology that renders the subject’s self-conception open to revision. But rationality is only rationality by and large, and the transcendental argument for rich perception is, as the characterization of the phenomenological method suggests, fully compatible with a full spectrum of empirical psychological investigations of the subjective perspective and the self.
NOTES

1. Spelke (1995, p. 142).
2. Gelman, Durgin and Kaufman (1995, pp. 50-184, 153).
3. Ibid., p. 154.
4. Ibid., p. 157.
5. Ibid., p. 173.
6. Premack and Premack (1995, pp. 185-199, 188).
7. Locke ([1690] 1975, book II, chapter XI, section 17).
8. Hume ([1748] 2000, section 7, part I).
9. Michotte (1963, p. 255).
10. Gibson (1986, pp. 127-143). On Gibson, see this volume, pp. 55-56.
11. Sartre (1992, pp. 48-49).
12. Thurber (1985, pp. 199, 383).
13. Wheelock (1977, pp. 70-71).
14. Esquire, October 1993, p. 57.
15. Hardin (1988, p. 125).
16. Reichenbach (1958, chapter 1).
17. Armstrong (1969, pp. 219-220).
18. Sechehaye (1951, pp. 36-37).
19. Ibid., pp. 55-56.
20. Sacks (1998, p. 73).
21. Ibid., p. 75.
22. Ibid., p. 111.
23. Ibid., pp. 112-113.
24. Hume ([1739-40] 2000, p. 261).
25. Kuhn (1962) and Hanson (1961).
26. Hanson (1961, p. 15).
CHAPTER 19
HOW TO DEAL WITH THE FREE WILL ISSUE: THE ROLES OF CONCEPTUAL ANALYSIS AND EMPIRICAL SCIENCE
Mario De Caro
We have to believe in free will. We’ve got no choice.
Isaac Singer

The question of free will has been defined as “intractable”,1 as “arguably the most difficult problem in philosophy”,2 as one about which “nothing believable has […] been proposed by anyone in the extensive public discussion of the subject”.3 David Hume (before proposing his own controversial solution) revealingly conveyed the complexity of the issue by referring to it as “the most contentious question of metaphysics, the most contentious science”.4

It was insightful on Hume’s part to mention, in this context, both philosophy and science. Certainly, one of the main difficulties of the free will issue is to understand what contributions are supposed to come, respectively, from philosophy (understood as a practice that essentially involves conceptual analysis) and from empirical investigation. In principle, three options are open when one reflects on what roles these two fields can play in the free will discussion:
a) Scientific isolationist view: “The free will problem is empirical in character; so in principle—if it can be solved—it can be solved by empirical science alone (that is, philosophy should not be expected to offer any real contribution to the discussion)”.

b) Pluralist view: “In virtue of its amphibious nature, the free will problem has to be treated both by philosophy and empirical science”.

c) Philosophical isolationist view: “The free will problem is a conceptual problem that has to be treated a priori; therefore, this problem pertains entirely to philosophical conceptual analysis (that is, science should not be expected to offer any real contribution to it)”.
I believe that the second view is correct, while the first is obviously wrong and the third is more subtly wrong. I will consider them in turn; but first some introductory remarks are necessary.

1. THE PROBLEMS OF FREE WILL

In general, the intuition of free will is a basic component of what Wilfrid Sellars called the “manifest image”, i.e. the common sense-based conceptual framework from which philosophical analysis begins. However, if the intuition of free will is one of the most deeply rooted in our manifest image, it is also one of the most abstract, complex and even obscure. Thus, one of philosophy’s main tasks is to clarify this intuition, in order to understand its precise content, implications, and very possibility.

Originally, philosophers used to discuss the free will issue only by analyzing and refining the manifest image. More specifically, the main problem was how to reconcile human freedom with some of God’s properties—i.e. his perfect foreknowledge and his capacity to predetermine (to “predestinate”) our lives. After the scientific revolution, however, the discussion on free will changed dramatically, since many began to feel that the real menace to free will was coming from the deterministic laws of nature, seen (pace Hume) as producing “universal natural necessity”.5 More recently, the concrete possibility of physical indeterminism added another important strand to the discussion.

The problem of free will, however, has remained an example—arguably the most relevant—of the natural tension between the manifest image and the scientific image. In general, this tension appears structural. Nevertheless, it is intellectually vital to try to relax it, by means of a constant—even if, perhaps, inconclusive—negotiation between the two images. (In any case, the idea of a constant negotiation seems to me much less unsatisfactory than the alternatives: a rampant scientism that belittles the manifest image, an irrationalist conception that denies the ontological relevance of science, and a schizophrenic view that sees both images as correct, in spite of their structural tension).6

The problem of free will is no exception to this intrinsic tension, as is easily confirmed by a quick look at the views on the market. Philosophers who prioritize the manifest image maintain that our freedom is undeniable (as one of the most influential writes, “we are certainly all condemned to believe in freedom—and, in fact, condemned to believe that we know that we are free”).7 On the opposite side, many philosophers who privilege the scientific image argue that the intuition of freedom is an illusion, although, perhaps, a useful one (in this spirit, one of these philosophers recently wrote, “Humanity is fortunately deceived on the free will issue, and this seems to be a condition of civilized morality and personal value”).8 Finally, some authors defend a very pessimistic view about the possibility of harmonizing the manifest image and the scientific image with regard to the free will issue:
It seems that the attempt to locate human agents in nature either fails in a manner that reflects a limitation on what science can tell us about ourselves, or else it succeeds at the expense of undermining our cherished notion that we are free and autonomous agents.9
Considering that this issue generates such contrasting views, it is not surprising that the very definition of free will is debated. Until recently, for example, there was consensus at least on the idea that free will required two conditions: i) the self-control of the agent, and ii) the availability to her of alternative courses of action (known as the “alternative possibilities condition”).10 Now that consensus has vanished. The main reason is the objective difficulty of reconciling these two conditions—which conceptual analysis distills from the manifest image—with the scientific image of the world. In this light, some scholars have begun to challenge the relevance of the alternative possibilities condition. It is very controversial, however, whether giving up this condition does justice to our intuition of freedom.11

At any rate, one can say that disagreement is almost ubiquitous in the discussion about free will. A partial list of hotly debated questions can easily give an idea of this predicament:
i) What is the content of the idea of free will, and is this idea internally consistent?12
ii) Does the idea of the freedom of the will make sense at all?13
iii) Is free will compatible with causal determinism and/or indeterminism?14
iv) Is there an essential connection between free will and moral responsibility?15
v) Does free will have a structural relation with social and political freedom?16
vi) Do we actually enjoy free will and, if so, on what occasions?17
vii) Could we ever give up the belief that we have free will?18
This list shows that, in discussing the so-called free will issue, one actually deals with a cluster of different problems. The first five problems on the list seem intrinsically (or, at least, mostly) conceptual, whereas problems vi) and vii) seem also to have a distinct empirical dimension. This diversity suggests, therefore, that a division of labor between philosophy and science may be necessary—at least if one intends to approach the free will issue in all its facets.

2. A TAXONOMY OF THE FREE WILL THEORIES

Traditionally, the taxonomy of the theories of free will hinges on the basic distinction between compatibilism and incompatibilism. Compatibilist theories
assert, and incompatibilist theories deny, the compatibility of free will with causal determinism. The incompatibilist family divides, in turn, into libertarianism (for which free will exists and can only be rooted in indeterminism) and hard determinism (for which, since determinism is true, there is no free will). However, for the issue that interests us here—what the respective contributions of philosophy and science in dealing with the free will issue are—it is useful to complement this classification with another, which was mentioned at the beginning of this chapter: the one that distinguishes between “scientific isolationism” (“Free will is a business of science alone”), “pluralism” (“Both science and philosophy have to deal with the free will issue”), and “philosophical isolationism” (“Free will is a business of philosophy alone”).

As we will see shortly, most versions of scientific isolationism are (unreflective) forms of incompatibilism and more specifically of libertarianism, whereas scientific isolationism does not frequently take the form of compatibilism. This should be no surprise. In general, libertarian views tend to give theoretical dignity to the intuitive idea of free will that is proper to the manifest image. When such an idea is (naïvely) thought to be easily associable with physical indeterminism, the typical product is a scientific isolationist view. Compatibilism, on the contrary, is a highly sophisticated philosophical view that requires conceptual analysis in order to be developed; so, in general, it is not defended on a scientific basis alone.

Then there are several forms of pluralism, according to which the free will problem can only be treated by a combination of philosophy and science. Most libertarian theories belong to this group, since, in stating that we have free will, they also claim that free will requires that certain indeterministic conditions obtain. Therefore, whereas the dependence of free will on indeterminism can only be established by philosophical analysis, only scientific investigation can determine whether the required indeterministic conditions are real.19 In addition, some versions of compatibilism belong to the pluralist group: these are the so-called “supercompatibilist” views, which require determinism for free will to exist. In this case, too, there is work for both philosophy and science: the former has to show that free will requires determinism, while the latter has to establish whether determinism is true.20 Pluralism is, in my view, the most promising perspective in the free will debate. However, unsurprisingly, it is not easy to develop, due to the sheer difficulty of combining the results of conceptual analysis (which refines the notion of freedom proper to the manifest image) with those of empirical research (which are encompassed in the scientific image).

Besides scientific isolationism and pluralism, there is philosophical isolationism. This comes in three versions. Two respectively try to prove a priori (i.e. through pure conceptual analysis) the correctness of libertarianism and compatibilism. The third is a skeptical view, according to which the concept of free will is contradictory and, therefore, there cannot be any free agent. All these views (however different) deal with the free will problem in a way that leaves no room for any real empirical contribution.
3. SCIENTIFIC ISOLATIONISM

According to the scientific isolationist view, the free will problem is intrinsically empirical. Thus, if it can be solved, it can be solved by the empirical sciences alone (for example, by the neurosciences and/or by evolutionary psychology). In this light, from a strictly theoretical point of view, the problem of free will does not differ—except for its generality—from the problem of understanding what schizophrenia or autism are, and which agents are affected by them.

Such a view is very naïve and clearly wrong. As we have seen, a serious discussion of the free will problem does not even take off without a preliminary conceptual analysis and a correct definition of both the concepts involved and the theoretical options. Certainly, given the abstractness and the conceptual genesis of the free will issue, these are tasks (and very complex ones, by the way) for philosophy. Therefore, it is regrettable—but not surprising (given how superficially the science-philosophy relationship is often dealt with)21—that the problem of free will is presented as if it were on the verge of finally being solved, thanks to this or that scientific achievement. Frequently, this point of view assumes the form of an “unreflective libertarianism”, which is defended more or less in the following way: “As long as it was common to think that the deterministic Newtonian framework was the correct one, free will was a real issue, since obviously determinism frustrates freedom.22 However, nowadays quantum mechanics has definitively proven the falsity of determinism. Now we know that the laws of nature, being indeterministic, do not represent a menace to our freedom. Therefore, the so-called ‘mystery of free will’ has finally been solved: it is not a mystery anymore!”.23

As said earlier, this argument is wrong for several reasons, which it is instructive to consider. First, as we will see shortly, it may be that determinism frustrates freedom. However, even if it does, this is not “obvious” at all (a proof of that, if attainable, would surely require remarkable intellectual sophistication). Furthermore, one should be very suspicious of bold statements such as that science has proved the truth of indeterminism (and so, indirectly, the existence of free will). In general, epistemology and history of science should indeed make us suspicious of claims concerning the alleged correctness of an empirical theory. Besides the obvious fact that no empirical theory can definitively be proven correct, one cannot exclude that, in the future, quantum mechanics will be reinterpreted deterministically or replaced by a deterministic theory.24

Moreover, it is very reasonable to think that, in any case, the indeterminism of the subatomic world would not suffice, in itself, to infer the existence of free will. In the first place, it is very controversial whether subatomic indeterminism has significant repercussions at the macroscopic level. It is true that Roger Penrose famously maintained that the mind has peculiar properties (including free will), since it can perform non-computable operations (allegedly, in virtue of the systems of microtubules that sustain large-scale quantum-coherent activity).25 It is also true that, as Owen Flanagan reported, “There is work nowadays in chaos and complexity theories and in self-organizing dynamical systems theory that suggests that the
human nervous system operates, at least sometimes, in ontologically indeterministic ways".26 Nevertheless, the majority view seems to be that the workings of the cerebral mechanisms are deterministic or at most "quasi-deterministic" (in a sense close enough to ideal determinism that, in discussing the free will issue, one can ignore the "quasi" prefix). On this basis some even claim that the very idea that our conscious will is in charge of our acting is illusory.27 Moreover, the deterministic thesis is frequently conjoined with two other very common claims: that causal relations hold between events and that actions are events. From this conjunction, many infer that deterministic, or quasi-deterministic, laws underlie the causation of actions.28 Summarizing: if determinism really represents a menace to free will, then we still have a reason to keep worrying, notwithstanding quantum mechanics.

Something more, however, has to be said in assessing the roles that philosophy and empirical science should respectively play in tackling the free will issue. As a matter of fact, a purely conceptual argument shows that, even if we were able to ascertain (as convincingly as possible) that indeterminism is relevant in the production of actions, our freedom would still be far from being proven. The idea is that if our actions were generated in a purely indeterministic way, they would happen at random (or stochastically). As David Hume already noticed, randomness is the opposite of freedom—or, at least, of the freedom we care about (nobody would seriously think that a randomly generated action may be "free"!).

Let us look at this argument more closely, then. If an action a is performed by the agent A without being deterministically caused, then in the causal chain of events that precedes the performance of a there has to be at least one moment t at which no specific future course of action is necessitated (i.e. it is not determined which course of action will be actualized). Thus, at t, besides a, some other course of action must be physically realizable. This is to say that if, after the action is performed, time were rolled back to t, a different course of action might originate from exactly the same circumstances. (To put it differently, in another possible world W*, identical to our world up to t, the action performed by A*—A's Doppelgänger—could be different from the one performed by A.) This, however, means that, in those circumstances, nothing and nobody could make any difference in producing the course of action that ends in the performance of the action a rather than in one of the other potential actions. And this means that A was not able to control the actual production of the action a; and without control by the agent, there are no free actions, but only mere accidents.

Thus we have seen that indeterminism by itself—far from automatically generating freedom—produces only randomness. However, a different question can also be asked with regard to this issue: does indeterminism also make freedom impossible, as is frequently maintained? Or, to put it differently, can it not be that the addition of some other factors to indeterminism may make freedom possible, as libertarians (who think that freedom requires indeterminism) argue? This is a controversial issue, on which something will be said below.
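The rollback reasoning just given can be put schematically (the notation is mine, not the author's; it merely restates the steps above):

\[
\begin{array}{ll}
1. & a \text{ is not deterministically caused} \;\Rightarrow\; \exists\, t \text{ (before } a \text{) at which an alternative } a' \text{ is physically realizable;}\\
2. & \text{hence there is a world } W^{*}, \text{ identical to the actual world up to } t, \text{ in which } A^{*} \text{ performs } a' \neq a;\\
3. & \text{so nothing about } A \text{ (or anything else) at } t \text{ settles that } a \text{ rather than } a' \text{ ensues;}\\
4. & \text{so } A \text{ does not control the production of } a;\\
5. & \text{without control there is no free action: } a \text{ is a mere accident.}
\end{array}
\]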
At this stage, however, one should notice that the above-stated argument against unreflective libertarianism clearly shows that philosophy, with its conceptual clarifications and analyses, has an essential role to play in the discussion on free will. It accomplishes this by determining the correct scope and the conditions of use of the concepts involved, and by evaluating the relevance of empirical evidence. In this sense, philosophy's role is not confined to assessing the relation between indeterminism and freedom. Another example is the very common view that, if our actions were determined, ipso facto we would lack freedom. The centuries-old philosophical discussion in this respect certainly proves that, at a minimum, the incompatibility of determinism and free will is far from being obvious. Indeed, the defenders of so-called compatibilism (the view that freedom is compatible with determinism) have proposed many different models for removing what they see as "the confusions that can make determinism seem to frustrate freedom".29 Even though it is controversial whether they have succeeded, what is certain is that the thesis that determinism is incompatible with freedom cannot be taken for granted, since it at least requires a very sophisticated conceptual analysis.

Nonetheless, at the level of conceptual analysis compatibilism also encounters two real difficulties. The first is the problem of accounting for the above-mentioned "alternative possibilities condition" of free will. Traditionally, compatibilists have tried to show that in a deterministic scenario this condition can still be fulfilled if we correctly interpret it in conditional terms.30 However, as said earlier, a more promising strategy has perhaps recently been attempted by those compatibilists who simply give up that alleged condition of freedom. The debate on this proposal is still very much alive, and it is too early to guess what conclusions will be reached.

The second conceptual problem of compatibilism is newer, but more threatening. It is generated by an argument known as the "Consequence Argument".31 Here is an informal version of this argument. In order to act freely with respect to a particular action performed, an agent has to control that action. However, to be able to do this, the agent should control one or both of the factors that, if determinism is true, necessitate that action—i.e. the events of the remote past and the laws of nature. Both factors are, however, beyond the agent's control, since the past is inalterable and the laws of nature are inescapable; so, the agent cannot really control the action she performs. Since, of course, this reasoning can be generalized to all human agents and to all their actions, if determinism is true no human ever could, can, or will be able to act freely. Determinism, therefore, is not compatible with freedom; and since its defining claim is thereby proven wrong, so is compatibilism.

This argument has generated a vast and interesting debate. Compatibilists have attempted to respond to the Consequence Argument by challenging both its premises and the rules of inference it appeals to.32 The debate is still open. However, what is interesting for us is that this debate is conceptual in character: as with libertarianism, a purely philosophical analysis has to establish the credentials and the very legitimacy of a theory of freedom.
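For reference, the informal argument above is standardly given a modal formalization (the sketch below follows van Inwagen's 1983 rendering, not the author's own text). Let P0 describe the complete state of the world in the remote past, L the conjunction of the laws of nature, P any true proposition about the present (for instance, one describing an action), and let Np abbreviate "p, and no one has, or ever had, any choice about whether p". With the inference rules (α) from \Box p infer Np, and (β) from Np and N(p \rightarrow q) infer Nq:

\[
\begin{array}{lll}
1. & \Box((P_0 \wedge L) \rightarrow P) & \text{(consequence of determinism)}\\
2. & N((P_0 \wedge L) \rightarrow P) & \text{(from 1, by rule } \alpha)\\
3. & N(P_0 \wedge L) & \text{(fixity of the past and of the laws)}\\
4. & NP & \text{(from 2 and 3, by rule } \beta)
\end{array}
\]

The compatibilist replies mentioned in the text typically target premise 3 or, more often, the validity of rule β.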
4. PLURALISM

What all this shows is only that science cannot play an exclusive role in the free will discussion. It does not show that it cannot, and perhaps should not, complement conceptual analysis. Actually, most of the major views imply that, if free will is to be real, the natural world has to be structured in an adequate way (a way that each view describes in its own terms). Certainly, determining the structure of the world is a task for empirical science. In this sense, according to many, the empirical evidence mentioned in the previous section is very relevant. This is the methodological stance that I called "pluralism": the view according to which, besides conceptual analysis, the investigation of free will also requires that empirical research explain how the natural world actually works.

Most of the libertarian views belong to the pluralist group. A well-known contemporary libertarian, Robert Kane, has developed a theory that roots freedom in indeterministic causation. This means that Kane has to reply to the argument mentioned earlier, according to which indeterminism implies randomness. According to Kane, the free will we really care about—the one that makes us the "true originators" of our actions—manifests itself in our conscious deliberations, which are caused by indeterministic processes that are neither contra-causal nor purely random. More precisely, according to Kane, a deliberation is the undetermined result of an "effort of the will", which we go through when we choose among alternative courses of action. In doing so, we decide, without necessitation, between the competing reasons that support the different available choices. In this sense, Kane writes, "the choice we eventually make, though undetermined, can still be rational (made for reasons) and voluntary (made in accordance with our will)."33 Therefore, indeterminism plus "efforts of the will", instead of generating mere randomness, makes freedom possible.

This is an analysis that Kane develops on the conceptual side. As to the empirical side, Kane speculates that, even if chaotic systems are deterministic, "a combination of chaos and quantum physics might provide the genuine indeterminism one needs". Kane's idea is that in the brain there may be chaotic processes that "magnify quantum indeterminacies in the firings of individual neurons".34 This is of course a highly speculative hypothesis, but what interests us here is its empirical character: if it is ever confirmed, the confirmation will have to come from the neurosciences and/or from other empirical sciences.

Kane's view is therefore committed both to a peculiar philosophical analysis and to a specific empirical hypothesis. This implies that it can be challenged at both levels. At the philosophical level, the traditional objection that sees indeterminism as implying randomness still looms (notwithstanding Kane's promising appeal to the role of reasons in the generation of actions). At the empirical level, Kane's view is committed to a peculiar view of the way the brain works; and in this respect, the final judge is empirical research. Besides its virtues, the price of pluralism is also obvious here: a theory has to defend itself at both the philosophical and the scientific levels.
Kane proposes a libertarian theory based on indeterministic event-causation. A more radical form of libertarianism, called "agent-causation", is centered on the idea that agential freedom requires peculiar causal powers of the agents, that is, the agents' capacity to cause changes in the world without being determined to do so. An influential version of this view was recently proposed by Timothy O'Connor, who—in order to make his proposal naturalistically acceptable—combined agent causation with an emergentist view of the agents' place in the natural world. In O'Connor's view, the agents' causal capacities depend, for their very existence, on the microstructural properties of the brain. On the other hand, they have features (including the capacity to instantiate "top-down causation", seen as essential for free will) that are irreducible to the causal powers of the microstructural level. In this light,

Having the properties that subserve an agent-causal capacity doesn't produce an effect; rather it enables the agent to determine an effect (within a circumscribed range). Whether, when and how such a capacity will be exercised is freely determined by the agent.35

All this, O'Connor admits, is highly speculative. According to him, conceptual analysis shows that if free will is to exist, agents must have special causal powers. But whether these powers are actually real—and, therefore, whether free will is a real fact—is something that is up to empirical science to determine. In this light, appealing to philosophers of science such as Nancy Cartwright, to physicists such as Ilya Prigogine, and to some new research in biology, O'Connor states that "contemporary scientific knowledge is sufficiently incomplete to not rule out an emergentist picture of some factors within some highly organized phenomena".36 Therefore, "the question of emergence may be settled only in the end game, where completed theories are compared".37

In addition, this form of libertarianism has to play, as it were, on two tables. On the conceptual table, this view has to answer the usual objection: how is it that indeterminism can generate freedom and not merely randomness? On the empirical table, it has to explain how the peculiar causal powers of the agents can square with the scientific image of the world. In this case, too, a defence of the theory calls for contributions from both philosophy and science.

Finally, some versions of compatibilism also belong to the pluralist group: these are the so-called "supercompatibilist" views, which require determinism for free will to exist.38 In these cases, too, there is work for both philosophy and empirical science. At the philosophical level, supercompatibilists have to deal with the two big problems that, as we have seen, apply to all compatibilist views: they have to explain how their views can deal with the difficulty concerning the alternative possibilities condition of free will, and how it is that they are not affected by the Consequence Argument. At the empirical level, supercompatibilists face instead a problem that simple compatibilists (for whom freedom is merely compatible with determinism, but does not require it) do not have.
By being committed to the truth of determinism, supercompatibilists (like most libertarians) have to hope that their speculations will be confirmed by empirical science.

5. PHILOSOPHICAL ISOLATIONISM

According to philosophical isolationism, conceptual analysis is the only thing we have to care about in dealing with the free will issue. From this perspective, for example, Kant defended an a priori form of libertarianism, arguing that the intuition of free will (viewed as a power to start new causal chains) can be vindicated only at the noumenal level (where philosophy rules), not in the natural world (which is studied only by science).39 Kant's solution, however, has seemed deeply unsatisfactory to a vast majority of philosophers, both because of its dualism and because of its inability to grant real freedom to agents (the intuition of freedom—however right or wrong it may be—implies that we are free here and now, in the natural world, not in a transcendental one).

P.F. Strawson proposed another influential form of philosophical isolationism.40 According to Strawson, even if science proved that we are completely determined, we would not, or should not, entertain a global doubt about our moral responsibility and freedom.41 Although there is no space for discussing this proposal here, it should be noticed, at least, that it is composed of two parts: a normative claim (it would be irrational for us, in any situation, to doubt our responsibility and freedom) and a descriptive claim (it is a fact that we would never be able to do that). Both claims have been convincingly criticized.42 What interests us here specifically, however, is that Strawson's claim implicitly presupposes the unconvincing assumption that no scientific findings could ever force us to rework our manifest image (whether because this is in fact impossible or because, rationally, it should not be done). Indeed, even if the friction between the scientific and the manifest images is unavoidable, as we have seen, they still affect each other deeply. On the one side, the progress of science is one of the main reasons why the manifest image of Aristotle's time differed from those of Newton's time and Einstein's time. On the other side, the manifest image tends to put constraints on the acceptability of scientific theories. Theories can be more or less welcome—and their fortunes can vary correspondingly—depending on how intensely they challenge the manifest image. To put it differently, what is doubtful is whether it will ever be possible to find a "reflective equilibrium" between the manifest and the scientific images; not whether, pace Strawson, we have the intellectual duty to keep negotiating between them.

Then there are skeptical views, according to which conceptual analysis shows that freedom is impossible, and there is no need to refer to science in order to understand the issue better. According to these views, therefore, our belief in freedom is a mere illusion.43 Their general weakness (besides the fact that their analyses of the concept of freedom can be challenged)44 is the assumption that conceptual analysis can be wholly independent of the other parts of our cognitive system. In short, skeptical
views postulate the very doubtful distinction between analytic and synthetic judgements. As Quine famously showed, however, what is analytic (conceptual) today may be synthetic (empirical) tomorrow, and vice versa.45 Thus, there cannot be any final a priori word on a philosophical issue.

One last thing should be said about the skeptical views of freedom. Very frequently, these views are inspired by an extremely naturalistic attitude, according to which whatever cannot be naturalized (that is, reduced) has to be eliminated, at least in principle; and, according to many authors, this is exactly the case with freedom. Therefore, on this attitude, we do not need any empirical investigation to conclude that we are not free. This skeptical strategy may appear strictly philosophical (and in this case it would be another case of philosophical isolationism). One has to notice, however, that in this perspective philosophy itself has to be naturalized—it has to become part of the sciences (or, perhaps, it has to annihilate itself in the natural sciences). Since, on this view, there is no philosophy beyond the empirical sciences, these views could also be presented as a case of scientific isolationism. At any rate, these extremely naturalistic views encounter a deep conceptual problem (among others), i.e. that of accounting for the content of the beliefs they aim at eliminating.46 In this regard, Stephen White has an interesting reply to Galen Strawson's statement that free will is illusory. He asks: if freedom is an illusion, then what is it an illusion of?47 White adds:
If freedom and agency are incoherent on the assumption of determinism and, equally, are incoherent on the assumption of randomness and on any other assumption about the objective metaphysical facts, then from any objectivist metaphysical perspective the notion is incoherent. It is then a mystery what people think they have when they think they have free will.48
There is another interesting point regarding these extremely naturalistic views. As I said earlier, the manifest image (which includes the original intuition of freedom) and the scientific image are structurally in tension. I also added that, even if there is no ready-made way of solving that tension, we are condemned to try to relax it as much as possible. This is what the pluralists try to do. They aim at accounting for freedom by considering both the intuitions that come from the manifest image and the empirical results coming from science.49 Considering the structural tension between the two images, this may be a tantalizing enterprise (and, in this light, it is not surprising that a famous philosopher states that "nothing believable" has ever been offered as a theory of freedom).50 Many people, however, have a different opinion. This is expressed with clarity by Owen Flanagan: "If there is such a thing as free agency or voluntary action, it cannot, if the scientific image is true, be immune from the causal laws that govern all things physical".51 Officially, therefore, Flanagan is optimistic about the possibility of translating the manifest image—more specifically, its free will
component—into the scientific one. Notice, however, the "if" clause in this passage, which leaves the "illusionist" option open: it may be that the intuition of freedom is just wrong. Notice also the clearly ideological tone: if some phenomenon can be reduced, at least in principle, to the sciences of nature, very well; otherwise, we have to conclude it does not exist. At any rate, in the same work, a few pages earlier, there is another revealing passage, which seems to suggest something else, i.e. that Flanagan is somewhat aware that his project may not work very well. This is because the merging of the manifest image into the scientific image may prove to be a very difficult task, even an unfeasible one. It also may not work well because, in the presence of a failure of the attempted reduction, the elimination of free will would not be a wise move. All this is suggested when Flanagan writes that "some suitably naturalized conception of human agency preserves some, but not all, of what is worth preserving in the traditional concept of free will".52 One wonders what results one would get by taking "the traditional concept of free will", sectioning it into parts, and trashing the non-naturalizable parts, even if they are worth preserving! Would this procedure give us intellectual progress? Would it not be better to keep negotiating between the manifest image and the scientific image—even if this can prove to be an endless task?53
NOTES

1 Nozick (1981, p. 292).
2 Wolf (1990, p. vii).
3 Nagel (1985, p. 112).
4 Hume (1748 [2000], p. 95).
5 Kant (1787 [1965], p. 469). It is worth noticing that the truth of determinism would not imply that human actions are necessary, but only that they are necessitated (they would not be actualized in all possible worlds, after all!). On this, see Audi (1974).
6 See this volume, chapter 18.
7 Van Inwagen (1998, p. 172).
8 Smilansky (2001, p. 88).
9 Earman (1992, p. 262).
10 "Any adequate conception of free agency must provide for possibility and autonomy in some sense" (Watson 1987, p. 145). For the connection between self-control and free will, see Mele (2001).
11 Dennett (1984). On the role supposedly played by alternative possibilities in establishing free will (and moral responsibility) see also the vast controversy based on Frankfurt (1969), on which see Fischer and Ravizza (1993); Widerker and McKenna (2002); Fischer (2002).
12 Van Inwagen (1983); G. Strawson (1986).
13 Chisholm (1964); Frankfurt (1971); O'Connor (2000); Clarke (2003). According to G. Strawson: "'Free will' is the conventional name of a topic that is best discussed without reference to the will" (2005, p. 286).
14 See below, sections 3 and 4.
15 P.F. Strawson (1962, 1998); Fischer and Ravizza (1993); Fischer (1999).
16 Pettit (2001).
17 Van Inwagen (1989, 1994); Fischer and Ravizza (1992).
18 Recently, a number of authors have claimed that the belief in free will is illusory. Such authors, however, split with regard to the question of whether such a belief could or should be abandoned: G. Strawson (1986) and Smilansky (2001) think that this would be undesirable and practically impossible; Honderich (1988) and Pereboom (2002) affirm the opposite.
19 Different libertarian theories account differently for when and where the indeterministic phenomena on which freedom allegedly depends happen: see Kane (1996) and Ekstrom (2000, chapter 4).
20 In general, determinism is a global thesis, about all events of all times. However, for people specifically interested in the free will issue, a local determinism that concerned the human world would be enough to give rise to the dreams of the compatibilists and to the nightmares of their opponents.
21 On this issue, see the essays contained in De Caro and Macarthur (2004).
22 Indeed, John Earman offers good reasons for thinking that not even Newtonian mechanics was "a paradise for determinism" (1986, p. 2).
23 Sophisticated versions of this argument are offered in Eddington (1929), Compton (1935), Eccles (1994), and Penrose (1994).
24 Weatherford (1991); Earman (1986, chapter 11); Hodgson (2002).
25 Penrose (1989).
26 Flanagan (2002, p. 121). However, more prudently, Flanagan also adds that the indeterminism of the brain processes, instead of being ontologically based, may indeed depend on our cognitive limitations.
27 Libet (1981, 2002), Walter (1999, 2nd edition) and Wegner (2002) argue in this sense by yielding new ingenious, if very controversial, experimental evidence. On Libet, see Libet and Commentators (1985), Libet, Freeman, and Sutherland (1999), and Dennett (2003, chapter 8); on Walter, see Bayne (2002); on Wegner, see Nahmias (2002) and Bayne (2006). Useful discussions of determinism and its implications for freedom are also in Dennett (1984), Honderich (1988), Weatherford (1991, chapter 10), Pereboom (2002), Bishop (2002). It should be noticed that chaotic systems, if unpredictable, should still be considered deterministic (Earman 1986, chapter 3).
28 Davidson (1970).
29 Ibid., p. 63.
30 On this debate, see Berofsky (2002).
31 Van Inwagen (1983).
32 Kapitan (2002).
33 Kane (2005, p. 136).
34 Ibid., p. 134.
35 O'Connor (2000, p. xiv). Still another form of libertarianism, sometimes called "simple libertarianism", conceives agent control as intrinsically non-causal: see Ginet (1990) and Ekstrom (2000, pp. 89-92). This view also requires the empirical falsity of determinism (so it is exposed to the typical anti-libertarian argument that indeterminism makes freedom coincide with randomness); however, simple libertarianism has also to face another, more specific charge, i.e. that by disconnecting freedom from causation, it makes agency epiphenomenal.
36 O'Connor (2000, pp. 114-115). O'Connor himself does not, but with regard to this issue one could usefully mention Dupré (1993).
37 O'Connor (2000, p. 115).
38 From Hume to Ayer, this view has traditionally been defended by many compatibilists.
39 Kant (1787), (1788); on the Kantian theory of freedom, see Allison (1990).
40 P.F. Strawson (1962).
41 Ibid., p. 11; also P.F. Strawson (1998).
42 Russell (1992); G. Strawson (1986).
43 See above, n. 18.
44 Many, as for example van Inwagen (1998), conjoin the anti-libertarian argument (indeterminism implies randomness) with the Consequence Argument in order to prove that all possible theories of freedom are wrong and that, therefore, free will is impossible. It is controversial, however, whether the two parts of this "pincer argument" really work; and it is even more controversial whether it can be applied to the version of libertarianism known as "Agent Causation". In De Caro (2004) I argue that one could argue in favour of free will by looking at the way the social sciences treat agency: in that respect, one could say the scientific image incorporates and refines the manifest image.
45 Quine ([1951] 1953).
46 As shown by Stroud (1996), similar difficulties affect many of the attempts (made by the advocates of scientific naturalism) to reduce or eliminate entia non grata such as values, colors, numbers and meanings.
47 White (2004, p. 203).
48 Ibid. It is interesting to notice that even some naturalists, such as Colin McGinn, would grant that in their view freedom becomes a mystery ("Free will is a mystery, and therein lies its possibilities", McGinn [1999, p. 168]). Another mystery is why they do not have the slightest doubt that this could be considered as a reductio ad absurdum of their extremely naturalistic views!
49 For other reflections on this issue, see this volume, chapter 13.
50 Nagel (1986, p. 112).
51 Flanagan (2002, p. 135).
52 Ibid., p. 127n., my italics.
53 I would like to thank Stephen White for innumerable valuable discussions on the issues treated here and Ben Schupman for carefully reading a previous version of the paper.
D. Social Agency
CHAPTER 20

THE BELIEFS OF MUTE ANIMALS

Simone Gozzano
The issue of animal minds continues to engender philosophical debate.1 Famously, Descartes considered animals without speech as automata, complex and fascinating machines with no internal lives.2 Such a view shaped the subsequent intellectual milieu even after Darwin's theory, which closed the gap between humans and other species. For a long time it was thought that the only hope for unveiling the capacity for thought of these creatures, thus bringing mental light into the dark matter, was to teach them a form of language, and this position gave rise to a number of research projects. From the Kelloggs in the 1930s to the Hayeses in the 1950s, up to the Gardners (1970s) and the Premacks (1980-90s), much effort has been spent in teaching animals a symbolic system so as to test their cognitive abilities.3

Notwithstanding recent interesting findings,4 the paradigmatic presupposition on which these attempts are founded has recently been challenged: does thought necessarily require a language? That is to say: is it possible to conceive nonlinguistic behavior as driven and planned by thought-like states? Considerations in favor of animal mentality have been advanced by many philosophers5 and ethologists,6 but the general attitude remains one of skepticism even among ethologists,7 the chief objection being that animal behavior could be explained by reference to complex series of stimuli and responses. There are intermediate positions8 that grant some form of proto-thought to animals other than humans, but some of those who favor the attribution of concepts to animals continue to assert that concepts are dependent on the capacity for symbol processing.9

It is revealing that philosophers, empirical scientists, psychologists and ethologists converge on the thought-language issue, as this opens up the possibility of exchanging ideas and methods between these groups of scholars. In the spirit of stressing this interaction, in this chapter I will first discuss the arguments against animal beliefs; I will then outline some general conditions for the individuation of functional states with content and for the attribution of intentional states to animals, conditions stemming from the arguments against animals' beliefs; and finally, I will test these conditions against empirical cases.
1. THE ARGUMENTS AGAINST ANIMAL BELIEF

In a paper written many years ago, Malcolm imagines a dog chasing a cat. The cat runs toward an oak tree but, at the last moment, swerves and disappears up a nearby maple. "The dog does not see this maneuver and on arriving at the oak tree [...]
barks excitedly into the branch above".10 According to Malcolm, we can say that the dog thinks the cat is in the tree, but we cannot say that he has the thought "the cat is in the tree." This idea has been attacked by Davidson, who rejects the possibility of having thoughts without mastering a language. According to Davidson, there are three key preconditions for the proper attribution of intentional states: i) beliefs, and in general thoughts, attributed to anybody must reflect semantic opacity,11 because such a logical feature has been considered to distinguish talk about propositional attitudes from talk of other things;12 ii) semantic opacity requires the mastering of concepts; and iii) the attribution of concepts, in turn, presupposes a form of conceptual holism. Because it is not possible to attribute either concepts or a holistic net of concepts to animals, it is not possible to ascribe semantically opaque thoughts to them; hence speechless animals cannot have beliefs and desires. Finally, Davidson argued that in order to have a belief one has to have the concept of a belief, that is, of a state that could be true or false.

Now, even if some authors have disputed the importance of semantic opacity as a requirement for intentionality,13 I share Davidson's idea that this feature is central in the analysis of intentional states.14 Following Davidson, then, we can transform the question regarding the supposed thoughts of mute animals into whether we can make sense of opacity in contexts that exclude linguistic expressions.

Throughout his analysis of animal beliefs, Davidson is substantially guided by something like "Russell's Principle",15 according to which, in order to think about an object or to make a judgment about an object, one must know which object is in question.16 In Davidson, the kind of knowledge Russell's Principle calls for amounts exclusively to conceptual knowledge. However, construed in this form, Russell's Principle seems too strong. Imagine observing two patches of red texture, one of which is darker than the other. Suppose you come to believe that the patch on your left is darker than the other. Yet, given the possibility that you lack the relevant concepts for the two shades of color, it would follow that you cannot be credited with the mentioned belief.

Moreover, adopting Russell's Principle seems incorrect for a deeper reason. Belief is the basis of knowledge, in that the latter can be taken, at least, as justified true belief. Nothing can be known if it is not believed in one way or another. Now, taking something like Russell's Principle as a precondition for the attribution of beliefs begs the question of this notion of believing, because the precondition is far more complex than the condition to be fixed. To save this Principle, another route is viable: weakening its satisfaction conditions.
In the case of the two patches of color, Russell's Principle could be satisfied by nonconceptual discriminatory abilities.17 This weaker construal is the one given by Strawson, according to which the knowledge of an object amounts to the ability to discriminate such an object when perceived at a time; to recognize it if presented; and to discriminate facts about it.18 Hence, we may say that the discriminatory abilities can be, at least, "nonconceptual"—as in perceptual discrimination and recognition—and "conceptual"—as in the case of knowledge of facts.19

Many have forcefully argued against the notion of nonconceptual content.20 In its place, these authors have proposed the notion of demonstrative concepts, one that plays the same role as that attributed to nonconceptual contents, without being
holistic. Imagine me uttering "This one is edible" in front of a piece of yellowish stuff. The demonstrative is referentially clear enough to bring about the belief in my interlocutor that something is edible here, without her knowing what kind of thing it is that is edible. So the demonstrative concept does not allow forming any inferential connection with the particular in question, while it allows just partial connections to the concept of edibility. The example, however, might be dramatized: if every object we eat cannot be recognized (imagine a world where food changes continuously and we have to check it every time), then there would be no stable inferential liaisons we could use to fix the concept of edibility. On the other hand, the belief "this is edible" would allow for opaque substitution in the attribution practice. So, while demonstrative concepts may not meet the individuation conditions for concepts such as Davidson has in mind—which is how he arrived at the position that there is no clear answer to the question of whether the Ancients believed that the Earth was flat21—they are sufficient for crediting the individual with beliefs.

As I said, I think that Davidson is right in claiming that one of the standards for attributing thoughts must be opacity, so we should presume that having thoughts entails having referentially opaque thoughts based on elements that themselves need be neither of a fully conceptual nature nor holistically individuated or connected. The general idea, then, is that a system can be credited with a perceptually fixed belief that p if, during the fixation of the belief, the elements included in p are either (i) perceptually discriminated or recognized by the system, or (ii) such that the system discriminates facts about them. For instance, if the belief that p is, in this particular case, the belief that a is F, then the belief that p can be attributed to the system if the system is able to discriminate the individual a and the property F according to one of the three possible ways mentioned. All the discriminatory abilities of a system form what I shall label the epistemic window of a system. We then may say that the beliefs that can be attributed to a system are those whose elements fall within the system's epistemic window.22

Some systems have, as it were, a poor epistemic window: most animals cannot be credited with the ability to know facts about objects. In this sense I think that many discussions of animals' concepts are biased by the idea that concepts are all-or-nothing entities and that the standards for these entities are set by skilled adult human beings. But concepts should not be taken in this way: my concept of atom differs greatly from that of a physicist, and my four-year-old son may grasp a notion of constituent part which stands to my concept of atom as the latter stands to the physicist's one.23

In order to ascribe perceptually fixed beliefs to non-speaking animals, the epistemic window is not sufficient. In fact, the epistemic window is not enough to warrant the sensitivity of the system to semantically opaque thoughts, and since semantic opacity helps in distinguishing talk about beliefs from other kinds of talk, it is important to show how the epistemic window and semantic opacity converge.24 In this respect, it is important to inquire what the philosophical significance of semantic opacity is. Even if knowing entails believing, both epistemic conditions share the semantic feature of not necessarily allowing co-referential substitution salva veritate. This stresses that truth is a much stronger notion than knowledge and belief: if you could get a truth, so to speak, by itself, then you would get all the
possible expressions of that truth; whereas if you were able to isolate the pure knowledge or belief of a certain fact, you would not gain the whole truth about that fact, even if your knowledge is justified and your belief is true. This is because other possible true formulations of that fact are not necessarily included in the isolated knowing or believing. The distance between knowledge and belief, on the one side, and truth, on the other, marks our epistemic fallibility; that is, it marks that we may be wrong about a matter of fact even if we are right to believe the very same fact ("It is not Tully that denounced Catiline, Cicero did!"). It is epistemic fallibility, the possibility of being mistaken and of being able to recognize such mistakes, which makes sense of intentional attribution.25 It is not necessary, then, to take the problem of mistaking as an extra requirement for the proper attribution of intentional states, as Davidson believes: such a condition is already included in the semantic opacity requirement.
2. THE REQUIREMENTS FOR BELIEVING

According to the functional approach, systems have complex internal workings, framed in terms of states and processes functionally construed. In one of the deepest analyses of this approach, Loar considers functional states as a net of horizontal links.26 This net encompasses perceptual inputs, internal workings—such as inferences and the like—and behavioral outputs. Functional states are individuated through a process of theory construction, in which the internal states are analyzed, compared and contrasted with the overall behavior of the system. This horizontal net also has some vertical connections. Some functional states, in fact, tend to be reliably activated by non-mental conditions, so that we can say that these states are associated with certain truth-conditions even if they are independent of them.27

One of the most prominent tasks of the functional approach, then, is to interpret the internal states and processes that exist between an input, whether perceptual or conceptual, and an output, any kind of behavior, so as to individuate the content of these states and processes. Interpretation can be considered as that epistemic process which aims at individuating the contents of functional states. These contents, in turn, may have different functional status if they are embedded in epistemic states (e.g., beliefs) or conative ones (e.g., desires). These contents and their functional status determine the causal role played by the states themselves. However, when we wonder whether to attribute beliefs to animals, it is essential to consider which features of this notion we take to be determining. It seems to me that there are three different ways in which we can spell out the notion of content.28
2.1 Contents of type 1
If a system discriminates a property, say F, then we assume that there is some internal process we can interpret as an F-detector. The resulting state can be considered a functional state whose content is "F is present".29 This interpretation is based on the relation of indication, the simplest form of interpretation.30 Consider a gauge that detects the presence of water in a tank. We can say that it indicates the presence of water in the tank. However, the gauge indicates what it does independently of the way in which its object is individuated: if it indicates water, then it indicates H2O molecules in a liquid state, or the kind of liquid I just had a glass of. An indicating relation then allows only for transparent readings.31 As to the problem of predicting the system's behavior, it does not matter which reading one picks: all the possible equivalent readings have the same predictive or explanatory power. Strictly speaking, the detection system indicates that the mechanism itself is in a certain situation: the float in the tank is in a certain position if everything is properly working. Complete transparency follows.

The consequence is that contents of type 1 fail to be associated with the appropriate truth conditions only in case the system is malfunctioning. For instance, a gauge might indicate the presence of water in the tank even in the absence of water only if the gauge or its detection apparatus is malfunctioning. If everything is properly working, then the content indicates what it is associated with. So, type 1 contents are transparent and can be false only in case of malfunction. Hence, no intentional states can be attributed to systems endowed only with contents of type 1.
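The transparency of indication can be put schematically (notation mine; "Ind" is shorthand for the indication relation described above):

\[
\mathrm{Ind}(S, \text{"water is present"}) \;\wedge\; (\text{water} = \mathrm{H_2O}) \;\Longrightarrow\; \mathrm{Ind}(S, \text{"}\mathrm{H_2O}\text{ is present"})
\]

Any co-extensional redescription of what is indicated is equally correct, which is precisely why indication alone cannot ground semantically opaque states.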
2.2 Contents of type 2
Type 2 contents, like those of type 1, are typically associated with some truth conditions. However, a malfunction of the system is not necessary in order to sever the connection between a given state in the system and the truth-conditions that are usually responsible for the system's being in that state. Consider Malcolm's dog again. When we say that the dog barks at the tree because, loosely speaking, the dog believes that the cat is in the tree, we are saying that the dog is in an epistemic state that we may interpret as having the content "the cat is in the tree". As in the case of contents of type 1, the interpretation of the internal state of Malcolm's dog allows for other co-extensional readings too.32 However, the correctness of the functional state's interpretation does not imply that the truth-conditions hold: it is possible that the cat is not in the tree even if nothing in the visual apparatus of the dog is malfunctioning. Let's be clear about this: you may have individuated the correct content attribution, in terms of the explanations and descriptions of the system's behavior and in terms of the reliable association that activates that state, without this content being true and without there being any malfunction in the system. In order to correct a state with an erroneous type 1 content you should fix (at least a part of) the structure of the system; to correct a state with an erroneous type 2 content you should provide more information to the system itself.

In case a system exhibits a behavior with respect to which the attribution of type 2 content is justified, we must postulate some representational function between stimulus and response, that is, some cognitive activity that mediates between sensory inputs and motor outputs. What this shows is that this kind of content does not allow for every possible substitution: at least some substitutions may change the system's behavior—which is,
as we saw when examining Davidson's standard for attributing mental content to an agent, exactly what we want: the opacity criterion is vindicated by the advocate of attributing mental activity to animals other than humans.

2.3 Contents of type 3
Here we get to the notion of content as it is customarily attributed to human beings. In this case, contents allow for opaque readings, and their correct individuation establishes a weaker relation to truth conditions. If John believes that Cicero denounced Catiline, this does not entail that John believes that Tully denounced Catiline because, even if Cicero is Tully, John may not have this information. Ascriptive talk is opaque in the case of states with type 3 contents. This is so because, as is commonly known, intentional contents refer to their object through a "mode of presentation", and modes of presentation, namely the various ways through which it is possible to identify a given state of affairs or event, are more fine-grained than their associated truth conditions. In the case of type 3 contents it is essential to pick out the correct mode of presentation in order to have a good prediction or explanation of the system's behavior. Finally, as in the case of states with type 2 content, the possibility of John's having an internal state whose content does not reflect a given state of affairs does not necessarily entail any malfunctioning in John's epistemic or perceptual apparatus. So type 3 contents too allow for opaque readings and can be false without the system malfunctioning in any way.

How should we use these different notions of content? It is important to notice that attributing type 3 contents to a system entails that it would show different behaviors with respect to the elements that form the content of its beliefs, if these were caught by different modes of presentation. John's saying "Yes" to the question "Did Cicero denounce Catiline?" does not entail John's saying "Yes" to "Did Tully denounce Catiline?". Can something like this happen in the case of animals in which type 2 contents operate? The difference between type 3 contents and type 2 contents is a difference in the degree of granular fineness. The advocates of the thought-language identification, such as Davidson and possibly Chomsky, have maintained that if a thought is not as fine-grained as ours, it is not a thought. They have excluded the possibility of thoughts of different grain. The possibility of type 2 contents shows that at least one other option is viable: contents of type 2 fix equivalence sets between ways of describing the elements that constitute the content of a given belief, sets that are much larger than those fixed by contents of type 3, though not as comprehensive as those fixed by contents of type 1. How can this difference be assessed? By the capacity that systems have to adapt their behaviors to changes in the mode of presentation.
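The three types of content can be summarized schematically (a compressed restatement of the foregoing, in notation of my own):

\[
\begin{array}{lll}
\text{Type 1:} & \text{transparent;} & \text{false only through malfunction;}\\
\text{Type 2:} & \text{coarse-grained modes of presentation;} & \text{can be false without malfunction;}\\
\text{Type 3:} & \text{fine-grained modes of presentation;} & \text{can be false without malfunction.}
\end{array}
\]

The opacity of type 3 contents is just the failure of substitution salva veritate in the Cicero/Tully example:

\[
\mathrm{Bel}(\mathrm{John}, \text{Cicero denounced Catiline}) \;\wedge\; (\text{Cicero} = \text{Tully}) \;\not\Rightarrow\; \mathrm{Bel}(\mathrm{John}, \text{Tully denounced Catiline}).
\]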
3. ATTRIBUTING CONTENTS TO MUTE ANIMALS

In order to find plausible candidates of intentional behavior in animals I will consider deceptive behaviors, because these can be performed both through language and through mute action. Among human beings there is a variety of kinds of deception, and even if deception seems an intentional notion in itself, there are some cases in which it is possible to deceive in a non-intentional way.33 In order to reinforce the intentional interpretation of deception, it should be conceived as voluntary and goal-directed, in that the deceptive behavior should not be an automatic response (such as camouflage or mimicry) and should not achieve its end by chance (the deceiving result must be brought about as driven by a goal).

Let me clarify this by an example. In the Odyssey, Ulysses deceives his enemies by concealing his identity. He arrives in Ithaca dressed as a vagrant. His deception is voluntary and goal-directed: it is voluntary because it is not an automatic response; it is goal-directed because Ulysses' acting as a vagrant is aimed not at the pure exhibition of his deceiving ability, but is the result of a plan aimed at verifying his enemies' political and moral behavior. Does this case of deception establish a positive case for an intentional attribution? May we say that Ulysses' enemies believe that "a vagrant has arrived"? I think the answer should be positive. Ulysses' enemies can be credited with the belief "a vagrant has arrived" because i) the elements "vagrant", "Ulysses" and the property of arriving fall within their epistemic window, and these elements can be described through functional states interpreted in terms of their various contents; ii) it would be possible to ascribe to Ulysses' enemies the content "Ulysses has arrived", which is contingently equivalent to the content of the supposed belief; iii) the attribution of a functional state interpreted as having one content does not imply the attribution of a functional state interpreted as having the other. Therefore we may say that Ulysses' enemies believe that "a vagrant has arrived". It should be stressed, though, that invoking voluntariness and goal-directedness is not question-begging with regard to intentional content: these notions, as I have construed them here, can be attributed by using type 1 contents, while intentional notions require at least contents of type 2.

Now, the three conditions I mentioned can be summarized as follows (a schematic rendering is given after the list). In order to have an intentional state with content p a system S must:
i) have an epistemic window in which the elements that form the content p fall;

ii) support the attribution of another content q that satisfies condition i) and is extensionally equivalent to p;
iii) give rise to differential behaviors for p and q (that is: attributing p does not imply attributing q).
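Schematically, writing EW(S) for the epistemic window of S (the set of elements S can discriminate) and elems(p) for the elements that form the content p, the three conditions read (notation mine):

\[
\begin{array}{ll}
\text{(i)} & \mathrm{elems}(p) \subseteq \mathrm{EW}(S);\\
\text{(ii)} & \exists\, q:\ \mathrm{elems}(q) \subseteq \mathrm{EW}(S) \ \text{and}\ q \equiv_{\mathrm{ext}} p;\\
\text{(iii)} & \text{the behavior } S \text{ displays under the attribution of } p \text{ differs from the behavior it displays under } q.
\end{array}
\]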
Now, what kind of content should be considered in this case? Is it necessary to consider just type 3 contents, or will type 2 contents also suffice? As we have pointed out, contents of type 2 are not as fine-grained as contents of type 3, but they are fine-grained enough to sever the connection between the truth-conditions and the correctness conditions for the attribution of the associated functional states, without regard to possible malfunction. I think they are sufficient to allow the fixation of the property of appearing, the one that is in play when Ulysses appears as a vagrant. So, we should show that it is possible to give an interpretation of a case of animal deception that meets the conditions met by the situation in which we have supposed Ulysses and his enemies. This is the challenge.

Consider the following case, exhibited by two home-reared chimpanzees, Austin and Sherman. Austin is smaller than Sherman and subordinate to him but, unlike Sherman, he is not afraid of the dark. During the night they both rest in a hut. Because Sherman has been bullying Austin throughout the day, as night approaches Austin performs a deception to reverse the dominance order. Unobserved by Sherman, he goes out of their hut, makes strange noises as though someone were scraping on the hut, then returns inside and looks outside with a worried stance. Sherman becomes fearful and stops bullying him.34 This case of deception is voluntary, in that Austin would not have performed that behavior as a result of some automatic mechanism, nor if Sherman had not been nearby; and it is goal-directed, in that the aim is to induce Sherman to stop bullying him. But what is important for our purposes is that this could be considered a case in which we may interpret an animal's functional state in terms of content and then ascribe to the animal an intentional state.

Let us see this conclusion in detail. May we attribute to Sherman a state whose content is "something is making noises outside the hut"? I think we can respond affirmatively. We can attribute this state to Sherman because i) Sherman discriminates all the relevant elements of the content described, and others such as noises, producing noises, and the properties of being inside or outside the hut; ii) it would be possible to interpret Sherman's internal functional state as having the content "there is my hut-companion making noises outside the hut", an interpretation that is contingently equivalent to the content of the supposed intentional state and whose elements can be discriminated by Sherman; iii) the attribution of a functional state interpreted as having the content "something is making noises outside the hut" does not imply the attribution of a functional state interpreted as having the content "there is my hut-companion making noises outside the hut." From all this it follows that Sherman can be credited with an intentional state whose type 2 content is "something is making noises outside the hut".35

Now, the attribution is secured by the epistemic window of the system, namely, by his discriminatory abilities. These abilities allow Sherman to entertain two modes of presentation related to the same truth-conditions. Hence, we have an intentional state in a non-speaking creature based on two type 2 contents. This case shows that language is not necessary in order to have intentional states.

One may object that there is no reason to attribute such complex thoughts to Sherman and Austin: it would be sufficient to consider, behavioristically, their reactions to stimuli occurring from moment to moment. This very common line of response, however, is somewhat paradoxical. In fact, the more complex the animal behavior is, both as a result of carefully stated anecdotes and as emerging from subtler experimental settings, the more articulate the behavioral explanations have to be.
The result of having very complex behavioral explanations, though, is that of making them more powerful, apt to explain more and more cases of human behavior as well.
So, we obtain animal non-mentality at the price of losing our own! Moreover, abandoning intentional explanation in the case of mute animals reduces simplicity and predictability in ethology, and with them the advantage of ontological parsimony. Here, it seems, empirical and philosophical interests diverge.

This analysis allows us to meet such other requirements as those invoked, for instance, by Quine, Dennett and Davidson regarding the attribution of intentional states. One of these is that intentional systems cannot exist in isolation. On the present analysis, this condition is satisfied, even if minimally, by any two speechless animals. In fact, the behavior I have discussed relies on forms of communicative interaction: the conditions for believing are set by one animal (Austin) while the belief is attributed to another (Sherman). In a certain sense, the response behavior is shared both by the deceiver and the deceived, and this lends support to the hypothesis that these animals have a natural "theory of mind" that is shared and used for communicative ends.36 Now, since I have not presumed any special ability in the two chimpanzees, we may suppose their mental equipment is present in other individuals of their species as well.

There is also a second requirement satisfied by this approach: it is implausible to conceive an intentional system with the capacity for a single, isolated belief, because of the holistic nature of these states. My proposal, indeed, is set on the assumption that there should be at least two functional states with an equivalent content. Even so, such an oversimplified system would be an extreme case: for each belief we must admit that the discriminated elements of each state can be attributed as parts of other complex functional states.
4. CONCLUSIONS

As far as animals are concerned, my contention has been that we are justified in attributing intentional states to a system whenever:

• the system exhibits behavior governed by functional states;
• the interpretation of those functional states can be put in terms of contents that are supported by discriminations falling within its epistemic window; and
• the system exhibits behavioral patterns that satisfy the conditions stated above.

I have used examples of primates because they already give us plentiful evidence for grounding such attributions; but it is possible that they are not the only non-speaking intentional systems. In looking for other possible candidates, further empirical and experimental data must be considered.

I want to conclude with two general considerations. First, since the so-called "linguistic turn" there has been a tendency to give an account of thought on the model of language. As a consequence, the possibility of any form of non-linguistic or pre-linguistic thought has been almost ruled out a priori. The fact is that those
280
SIMONE GOZZANO
who have followed the "linguistic turn" have tended to view language not only as a useful methodological tool for analysis, but also as imposing a substantive constraint on the nature of thought. Consider again the problem of intentionality. If we analyze intentionality in terms of language, two main routes may be followed: either we consider linguistic expressions as the only relevant data for investigation, thereby taking language as constraining thought; or we extend the domain of the intentional to include those behaviors that satisfy certain criteria of complexity extrapolated from language. The first tradition, perhaps the dominant one, holds that language mirrors thought and that even if "[l]anguage may be a distorting mirror [...] it is the only mirror we have".37 The other tradition considers language as a means to exhibit or represent certain underlying phenomena, without taking this cognitive mechanism as necessarily linguistic in nature: in Quine's words, "[t]aking the objects of propositional attitudes as sentences does not require the subject to speak the language of the object sentence, or any".38 In this chapter I have taken this second route. Our ascription of thoughts does not imply the ascription of intrinsically linguistic states, for such ascriptions may well be justified by relevantly complex non-linguistic behaviors.
Second, by ascribing thoughts to animals we are not supposing that they are capable of being in exactly the same kinds of epistemic states as humans. Language makes available epistemic attitudes of greater complexity, which arguably require a different level of analysis. To this end, some philosophers have contrasted beliefs with different kinds of doxastic states.39 But if we confine ourselves to certain kinds of perceptually fixed beliefs, the cognitive capacities exercised by humans in certain situations seem to be much closer in kind to those exhibited by some speechless creatures in similar situations than they are to those exercised by humans in linguistic tasks. For all these reasons, I think that the analysis here presented may allow us to consider, from a different perspective, the relationship between thought and language.40
NOTES

1 Throughout this paper I will use "animals" to refer to living beings with behavioral capacities complex enough to raise the issue of their mentality.
2 See, in particular, section V of his Discourse on Method, in Descartes ([1637] 1982).
3 See Wallman (1992).
4 See Savage-Rumbaugh (1986).
5 See Agar (1993); Allen (1992); Allen and Bekoff (1997); Bennett (1976); Bermúdez (2003); Dennett ([1983] 1987); Hurley (2003; forthcoming); Malcolm (1972); Routley (1981).
6 See Byrne and Whiten (1988); Cheney and Seyfarth (1990); Premack and Woodruff (1978).
7 See Bennett (1991); Davidson (1975, 1985, 1997); Dennett (1995, 1996); Heil (1992); Heyes (1993, 1998); Lowe (2000); O'Leary-Hawthorne (1993); Premack (1988); Stich (1979).
8 See Glock (2000); Peacocke (1992).
9 See Stephan (1999).
10 Malcolm (1972, 13).
11 "Semantic opacity", widely regarded as the hallmark of intentionality, is a feature shared by sentences containing verbs like "believe", "desire", etc. In these sentences, the substitution of coreferential expressions may change their truth value, contrary to so-called "Leibniz's law". Related features of such sentences are their failure to satisfy the law of existential generalization and the principle of truth functionality.
12 Davidson (1985, 475).
13 See, e.g., Searle (1983).
14 Searle considers intentional states as directly referring to their truth conditions, so allowing any kind of coreferential substitution. But on such a view we would not learn anything from discovering that the Morning Star is the Evening Star, as in fact we do.
15 Russell (1912, 58); see Evans (1982, 65, 90 et pass.).
16 For instance: "[…] a person cannot just believe that he or she is seeing a cat; in order to believe this, one must know what a cat is, what seeing is, and above all, one must recognize the possibility, however remote, that one may be wrong" (Davidson 1999, 8).
17 Peacocke (1992).
18 Strawson (1971).
19 Recognition seems to constitute an overlapping area. Some forms of recognition involve no conceptual knowledge, others do. In what follows I will consider nonconceptual recognition.
20 See, e.g., McDowell (1994); Brewer (1999).
21 Davidson (1975, 168).
22 This proposal has something in common with Evans's Generality Constraint; it parts company in that it allows the possibility of nonconceptual elements and in not having compositionality as a primary goal.
23 On this, see Allen (1992) and Allen and Bekoff (1997).
24 It should be emphasized that semantic opacity is not a necessary and sufficient condition for individuating intentional states. Modal talk, and even some scientific explanations, are semantically opaque.
25 For opposite views on this point see Davidson (1999) and Allen (1999).
26 Loar (1980).
27 The contrast between horizontal and vertical links can also be put in terms of the purposes to which attribution is put, whether to explain behavior or to facilitate communication about the world. For a unitary view of content attribution, focusing more on the first purpose, see Bilgrami (1992) and Pereboom (1995).
28 What follows may remind the reader of Dretske's analysis (1988). However, I part company from Dretske (see below, n. 31).
29 The same applies in the case of the individuation of types of individuals.
30 See Stalnaker (1984).
31 Dretske (1988, 70-74) argues against this view.
32 For opposing views on concepts in animals, see Allen and Hauser (1991) and Chater and Heyes (1994).
33 See Chisholm and Feehan (1977).
34 Savage-Rumbaugh and McDonald (1988). Prior to this event, Sherman and Austin had been introduced to a "bad monster", that is, a person dressed in a King Kong suit, who frightened them indoors. However, I do not think that this changes the conceptual point I am making.
35 Notice, further, that Sherman cannot be said to believe, of something (or someone) specific, that it is making the noises outside the hut, for he has no idea what or who is making the noises. We then also have a case of failure of existential generalization across a belief context.
36 See Premack and Woodruff (1978); Whiten (1991), and, for a critical view, Heyes (1998).
37 Dummett (1988, 7).
38 Quine (1960, 213).
39 Cohen (1992); Dennett (1978); Stich (1978).
40 This paper originated while I was Visiting Fellow at the Center for Cognitive Studies, Tufts University, and was developed while I was Visiting Fellow at the Center for Cognitive Science, Rutgers University (RuCCS). I wish to thank both Centers. I am especially grateful to Dan Dennett and Brian Loar, who provided constant stimulation. Thanks to Francesco Ferretti, Ausonio Marras, Carlo Penco, and Silvano Tagliagambe for helpful comments on previous drafts.
CHAPTER 21

NAIVE PSYCHOLOGY AND SIMULATIONS

Cristina Meini
In the 1970s, the paper "Does the chimpanzee have a theory of mind?"1 renewed the debate on naive psychology, that is, on the strong propensity to interpret behavior in terms of mental states such as beliefs, desires, etc. Many questions arose: Can nonhuman animals "mentalize"? How does naive psychology develop? Is it innate? Is it modular?
The next twenty years were characterized by the debate between two families of theories about the nature of naive mentalizing: theory theories (TTs) and simulation theories (STs). Briefly, according to the TTs, to interpret behavior we use a body of psychological knowledge relating behavior to mental states. Some theory theorists defend a strong version of this approach, according to which children are like naive scientists theorizing about the mind.2 According to most proponents of this approach, psychological knowledge does not constitute a specific domain. Rather, the peculiar properties of private, mental representations are inferred through an analysis of public representations such as pictures, drawings, and language. Other theory theorists defend quite a different hypothesis, according to which we are endowed with a cognitive mechanism dedicated to mental concept attribution. According to Alan Leslie, for example, we possess ToMM (Theory of Mind Mechanism), a specialized system shared by all human beings.3 ToMM is in place from early childhood and infers folk psychological statements of the form:
agent <mental state> that proposition (e.g., "Peter believes that there is a pie for dinner")

from a perceived behavior. Thus, in this theoretical framework naive psychology comes not from a scientific-like enterprise, but from the maturation of a biological, specialized organ.
Some important differences notwithstanding, TTs share the idea that naive psychology is based on the possession of genuine psychological concepts. This crucial point is rejected by the STs. To put it in a nutshell, according to the STs mental concepts are not involved in mentalizing. Rather, simulation is at the core of naive psychology. To predict and explain behavior, we put ourselves in the other person's shoes and look at what would happen in that (possibly counterfactual) situation. This process involves only the executive system, i.e., the cognitive system by which we plan and organize our behavior. In psychological interpretation, that
process has some peculiarities. First, while action planning is based on real perceptual states, in psychological interpretation the decision-making mechanism generally takes a pretended situation as input; second, the planned action is not really performed: the decision made is inhibited from being executed and is attributed to the interpreted agent. To predict what Jack, who is standing in front of a barking mastiff, will do, we pretend to be in front of a barking mastiff. Moreover, despite "discovering" (by simulation) that Jack will run away, we do not move. In describing STs for critical purposes, Stich and Nichols talk about the off-line processing of the decision-making mechanism.4
There are actually many varieties of STs. According to "moderate" versions, mental concepts are not completely ruled out from simulation. Considering the theory on which we shall focus, Alvin Goldman claims that by simulation we come to attribute to ourselves pretended mental states that we can introspect and transfer to others.5 This priority of first-person psychological interpretation is strongly opposed by Robert Gordon.6 According to his "radical" ST, we change perspective and interpret behavior without using any kind of mental concept, not even ones referring to ourselves. Since these two authors are central to the topic of this chapter, let us consider them in more detail.
1. SIMULATION THEORIES
1.1 Goldman
According to Goldman, the predictive simulation routine runs as follows:
1. we see (more generally, perceive) X acting, or we imagine seeing X acting;
2. we adopt X's point of view (really, when this is possible, or in our imagination);
3. we see what happens in our head and discover what we would do in X's place (let B be the conclusion of this subroutine);
4. we attribute B to X: X will behave as we would in her situation.7
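Taken purely as an illustration, the routine can be rendered as a toy program. The following sketch is not Goldman's formulation: the decide() function and the example situations are hypothetical stand-ins for our own decision-making mechanism. The point it makes is only that the mechanism runs off-line on a pretend input, and that its output is attributed to X rather than executed.

    def decide(situation, background):
        # Hypothetical stand-in for our own decision-making mechanism.
        if situation == "facing an enraged lion":
            return "run away"
        if situation == "alone when the light goes off" and background.get("scared_of_dark"):
            return "shout for help"
        return "carry on"

    def predict(target, situation, background=None):
        pretend_input = situation                            # steps 1-2: adopt X's point of view
        decision = decide(pretend_input, background or {})   # step 3: what would we do in X's place?
        # The decision is inhibited, not executed; step 4 attributes it to X.
        return target + " will " + decision

    print(predict("X", "facing an enraged lion"))            # -> "X will run away"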
At least in the simplest cases, behavioral prediction could involve no psychological analysis: we see X, and we predict X's behavior without accessing any mental state. When we simulate X facing an enraged lion, we can probably predict X's behavior through an automatic process, a sort of escape reflex. Thus, in these simple cases the routine outlined above would appear to be redundant. Other, less elementary cases entail a genuine psychological attribution. Jane is a child who is alone in a room when the light suddenly goes off. I put myself in her shoes. But then? Do I shout for help, or am I excited? To make a prediction, I
need a psychological assumption about the child's character: is she scared of darkness? Or does she like it? To answer these kinds of questions, we have to know the typical relations between the mental states involved and behavior. As we shall see in more detail, such knowledge is a prototypical example of a theory of mind.
If we consider the reverse process, behavioral explanation involves mental state attribution to a massive extent, usually at a conscious level. When we see X doing B, we first make a hypothesis R about the psychological reasons for X's action. To test R, R is fed into our decision-making mechanism. Let us assume that D is the output of this process. If D matches B, then R is attributed to X as the reason for her behavior.8 Goldman and Sripada recently proposed an account of the retrodictive process as an alternative to this "generate-and-test heuristic".9 Nonetheless, according to them, such an "unmediated resonance model" is restricted to the recognition of face-based emotions; it does not account for the explanation of belief- (or desire-) based behavior, for which the generate-and-test heuristic is still valid (a toy rendering of the heuristic is sketched at the end of this section).
Goldman concedes that the third step of the routine outlined above involves mental state ascription. This is what makes his theory moderate. However, he claims that psychological attribution is limited to the first-person case. Thus, there is a first-person priority which halves the weight of the theoretical knowledge involved.
In (1993), Goldman offered a clearer picture of the origins of first-person knowledge. Indeed, this point is crucial for him: were he to admit that we possess a full-fledged first-person theory of mind, the role of simulation would become merely heuristic, a particular way of using a theory. By contrast, according to Goldman, we introspectively know our mental states by an identification process based on intrinsic, qualitative properties. To detect our current mental states, we do not try to match the functional roles of the representations of the mental states themselves with the functional roles of mental state types stored in our memory. Rather, we recognize mental states by matching their intrinsic, non-relational, qualitative properties. Thus, the recognition of our own mental states has a perceptual, non-relational character.10 No conceptual-theoretical level is involved. This is also true of propositional attitudes such as beliefs and desires. For example, we recognize our belief that Paris is the capital of France, as well as our belief that 3 is a prime number, by their qualia.
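A toy rendering of the generate-and-test heuristic follows, under the same caveat as the earlier sketch: the decide() stand-in and the candidate attitudes are illustrative assumptions, not part of Goldman's account. Candidate reasons R are generated, run off-line through the decision mechanism, and attributed only if the simulated decision D matches the observed behavior B.

    def decide(situation, attitude):
        # Hypothetical stand-in: what we would do, given a pretended attitude.
        if situation == "alone when the light goes off":
            return "shout for help" if attitude == "scared of darkness" else "keep playing"
        return "carry on"

    def explain(behavior_B, situation, candidate_reasons):
        for reason_R in candidate_reasons:
            decision_D = decide(situation, reason_R)  # generate: run R off-line
            if decision_D == behavior_B:              # test: does D match B?
                return reason_R                       # attribute R as X's reason
        return None

    print(explain("shout for help", "alone when the light goes off",
                  ["scared of darkness", "likes darkness"]))
    # -> "scared of darkness"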
1.2 Gordon
In his radical ST, Gordon strongly criticizes two hypotheses which are crucial in Goldman's perspective: first-person priority and the necessity of a (limited) mental state ascription.11 When simulating, we do not pretend to be the other person. Rather, we become the other person. In simulating Mary's situation, I become Mary. "I" changes its referent: the equivalence "I=Mary" is established. Thanks to this derigidification of the personal pronoun, any introspective step is removed: we do
not first attribute a mental state to ourselves and then transfer it to the other person (Mary). Since we become Mary during the simulation time, no transfer is needed.
In actual fact, Gordon's theory is even stronger. Imagine someone asks you a question about your mental states. According to Gordon, to answer this question we run an "ascent routine". The ascent routine is a process by which we transform a question about mental states into a semantically simpler question, involving no reference to psychological states. When someone asks you: "Do you think Paris is the capital of France?", you transform the epistemic question into the simpler one "Is Paris the capital of France?". The answer ("Yes") is inherited at the higher level, producing the final answer "Yes, I think Paris is the capital of France". According to this view, the psychological concept is merely a syntactic prefix which we learn to use in linguistic practice. The genuine reasoning process takes place at the lower, non-epistemic level.
The third-person case of mental state attribution is similar; it just involves a more complex simulation routine. To answer the question "Does Emilia think the dog is going to bite her?" we become Emilia (i.e., I=Emilia in simulative contexts) and ask the question "Is the dog going to bite me=Emilia?". Finally, we/Emilia prefix the answer: "I=Emilia (do not/does not) think the dog is going to bite me=Emilia".

2. RECENT TOPICS

While the 1970s were characterized by a renewed interest in naive psychology, and the 1980s-1990s by the debate between TTs and STs, the present decade seems to be characterized by two main topics on the agenda.
First, many authors have claimed that different aspects of naive psychology are based on distinct processes and cognitive mechanisms. For example, self-attribution of mental states may be independent of other processes, such as behavioral prediction/explanation and the hetero-attribution of mental states.12
Second, much work has been concerned with the neural basis of naive-psychological process(es). A major step in this context has been the discovery of mirror neurons (MNs).13
2.1 Mirror neurons
In monkeys, the rostral part of the ventral premotor cortex (F5) contains neurons which code representations of hand and mouth movements. Some F5 neurons are sensitive to actions, i.e., to an entire class of different movements sharing the same goal. Conversely, there are highly specific cells that only code an action when it is performed in a precise way (e.g., with a certain kind of grip). Among the F5 neurons there are MNs, visuo-motor cells discharging not only when the animal performs an action, but also when it sees similar actions being performed by another individual.
Most MNs can properly be regarded as multi-modal. For example, MNs discharging when the monkey performs an action also fire both when the monkey sees and when it hears someone executing the same kind of action.14 Recent data show that most MNs also fire when the most important part of the action is hidden. For example, grasping MNs discharge when an opaque screen covers the crucial part of the action, such as the hand-object interaction.15 There is strong evidence that a mirror system also exists in the human brain.16
2.2 Gallese and Goldman
Among the neurophysiologists who discovered MNs, Vittorio Gallese has focused on the theoretical consequences of their data. In particular, he wrote an influential paper with Goldman in which the authors presented MNs as evidence in support of STs.17 They argued that MNs are the basis of the simulative process proposed by Goldman (see above). At the neural level, when we see X acting toward a goal, our MNs fire and simulate the same action. Thus, we have a cue for individuating X's goal in ourselves. While in their joint paper Gallese and Goldman left the explanation of the neural basis of simulation at a rather superficial level, Gallese has recently worked out in more detail his own theory about the role of MNs in naive psychology.
2.3 Gallese alone
On the phylogeny of MNs, Gallese makes the hypothesis that they were involved in projective modeling.18 A projective model is an internal representation which represents at time t the state of the organism at t+x, thereby allowing the preventive control of the consequences of our actions. Gallese argues that, having been used in projective modeling, MNs were recruited for a second, no less crucial function: to predict the other agent's actions. In other words, MNs have extended their representational power from models of the self to models of both the self and others. Thanks to MN activity, two individuals share what Gallese calls a we-centric representational space.19 In this space representations are built of the form:
I/X (e.g., I/X GRASP the banana)

In the we-centric space there is no room for any first-person priority in Goldman's sense. We have seen that, according to Goldman, the output of the simulation process concerns the simulator herself and is only later attributed to the other agent. On the contrary, the crucial step in Gallese's process takes place in the
neutral space. Thus, his more recent writings clarify the extent to which Gallese and Goldman differ.
To rule out any possible misunderstanding, it is important to note that MN simulation is not incompatible with Goldman's theory. "Neural simulation" is just a primitive step of a long process. It is possible that our MNs simulate in a neutral space and that our decision-making mechanism then carries on a me-centric simulation (at a psychological level). Nevertheless, the second step does not directly follow from the first. In other words, Goldman's theory of simulation is an independent hypothesis that should be independently justified.

2.4 Back to Goldman
Unfortunately, Goldman's theory does not seem to provide a good explanation of human naive psychological capacity. As we mentioned earlier, Goldman's ST is a theory about cognitive mechanisms, based on two main points: in order to mentalize, 1) we use our decision-making mechanism; 2) we attribute mental concepts to ourselves by means of a non-conceptual recognition procedure based on qualia, and then transfer them to others.
I have no objections to 1). Simulation is probably involved in interpretation, even in the psychological (as opposed to neural) sense. Our MNs do their job, and our decision-making mechanism seems to be recruited in many cases of intentional reasoning.20 Still, this is a purely heuristic role of the simulative practice, a cognitive shortcut operating against the background of a theoretical psychological competence. Let us go back to the example of little Jane in the dark. To predict what she will do when the light goes off, we must calibrate our simulated decision by considering her psychological attitude. This kind of adjustment is a pervasive phenomenon. What we typically do is (unconsciously) ask ourselves whether using ourselves as a psychological model is a good strategy in that specific case. Even when no adjustment of the psychological perspective follows, the analysis is made. This is what makes the relevant difference between naive psychological practice and behavioral planning. Goldman underestimates this theoretical intervention, focusing all attention on the simulative component. On the contrary, I consider the theoretical side to be the core of folk psychology, while recognizing the importance of simulation as a heuristic practice.21
As regards the second point, I completely disagree with Goldman. In particular, I deny that beliefs can be individuated by their intrinsic qualitative properties. Admittedly, the belief that Charles remarried has a peculiar quality (at least for Camilla). But other beliefs I am disposed to subscribe to have no special quality at all. For example, I would be disposed to bet that my belief that 2 is the square root of 4 is true; but this epistemic state does not give me any particular feeling. It does not actually seem to be tied to any quality at all. If belief-states were individuated by their qualia, how could I believe something which has no qualitative aspect?
Moreover, even in the case of qualitatively marked beliefs, different persons may have radically different qualia. Thus, we can imagine that Camilla and Queen Elizabeth have different qualia associated with what I take to be one and the same belief that Charles remarried. Even more important in terms of its consequences is the fact that Goldman's account implies that communication is impossible (or mysterious): if beliefs (and, more generally, propositional attitudes) are defined by an intrinsic, totally private state such as a quale, and if different people can experience different qualia concerning the same belief, then the transmission of beliefs is not guaranteed. Certainly, in a sense a belief changes when it is communicated. Nonetheless, it seems hard to deny that even two radically different persons can communicate and discuss; indeed, in normal cases good communication commonly takes place. This situation can be accounted for by a relational account of belief individuation, opposed to the intrinsic framework outlined by Goldman. In a relational account, a belief is individuated by its functional role, i.e., by its relations to other mental states and to behavioral inputs and outputs. The functional role of a belief changes when it is transferred from one person to another. Becoming part of another person's mind, the belief is integrated into a different context. In ordinary cases, the two persons' backgrounds are only partially different; as a consequence, we can assume that belief-states do not completely change through communication. By contrast, when we talk with a person who is radically different from us, the functional role of our belief-states undergoes a more serious reorganization; as a consequence, even serious misunderstandings can occur, and genuine comprehension is always difficult. Yet it is neither impossible nor mysterious at all. Thus, the success of communication requires that beliefs be defined by their functional role. The qualitative sensation can be (and indeed is) a side effect of this process.
What would follow from a rejection of Goldman's position with regard to first-person access? His psychological ST would not only be compatible with the neural ST; there would be a real continuity. Still, this is a purely speculative exercise, since the first-person priority thesis is the very core of Goldman's framework. It is fundamental for him to deny that we systematically and genuinely use a (first-person) theory of mind. Otherwise, his psychological ST would collapse into a moderate form of TT characterized by a particularly strong accent on its simulative heuristic component.22
2.5 Mirror neurons and Gordon
Per se, Gallese's we-centric space hypothesis fits better with Gordon's radical ST. Remember that Gordon denies any first-person priority. In simulation, we become the other person and let our decision-making mechanism run in a neutral space. This process corresponds exactly, at a higher level, to the simulation performed by MNs. At a neural level, the process could be described as follows: I see Mary picking up a
cake. My MNs discharge and I enter the we-centric space. I/Mary enter(s) the same physical state as when I want a cake. When I/Mary am (is) in that state, I/Mary eat(s) the cake. Thus, Mary will eat the cake.
Unfortunately, Gordon's ST seems to me seriously flawed. In particular, there is much to object to in the ascent routine hypothesis. I do not question the fact that we use this kind of strategy as a useful heuristic in mental state attribution. I agree that we do not really ask ourselves the full epistemic question; rather, we ignore the prefix "Do you believe that…", focusing on its content. Nevertheless, this treats mental concepts as mere syntactic labels, devoid of any real meaning. On the contrary, I know the meaning of the concept "believe", as well as the semantic properties of intensional predicates: I know that the principle of substitutability salva veritate can be violated (i.e., I can believe that Lewis Carroll is the author of Alice in Wonderland without believing that Charles Dodgson is the author of Alice in Wonderland); I know that truth and falsity implications are suspended (i.e., I can believe that Tom has gone independently of whether Tom has really gone or not), as is existential generalization (i.e., I can believe that the present king of France is bald even if there is no king of France). If belief predicates are only syntactic labels, Gordon should have explained how we all come to know these semantic properties, precociously and without difficulty.
More importantly, even if we suppose that the ascent routine is not a mere heuristic, i.e., that belief predicates really are syntactic labels, this does not mean that the ascent routine can be generalized to the other psychological predicates. For in that case the differences between the psychological predicates would disappear: how could we distinguish the routine used to attribute the belief that X from the routine used to attribute the desire that X? In Gordon's view, the process supposedly simplifies the original psychological question by ignoring the mental predicate and answering a lower-level question. My point is that the lower-level question (X) cannot be answered unless I am able to distinguish the different attitudes toward X ("Am I asked about a desire? A belief?", and so on). And this step reintroduces mental concepts: not mere labels, but genuine concepts. In other words, if the elimination of psychological predicates were a genuine operation, how could we answer the remaining question? To answer, we have to know what psychological state we are talking about; that is, we have to know what it means to believe that X as opposed to, for example, desiring that X. Thus, even if belief-concepts were eliminated, the entire problem would re-emerge for the other psychological predicates (see the toy sketch at the end of this section).
Again, it should be stressed that rejecting a psychological ST (in this case, Gordon's psychological theory) does not entail an automatic rejection of Gallese's hypothesis about the neural simulation performed in a we-centric space.
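To make the objection vivid, here is a toy rendering of the ascent routine; everything in it (the fact and preference stores, the dispatch on the attitude verb) is a hypothetical reconstruction, not Gordon's own formulation. Note that, in order to drop the psychological prefix, the routine must already branch on whether the prefix was "believe" or "desire": the attitude term still does genuine semantic work.

    WORLD_FACTS = {"Paris is the capital of France": True}
    PREFERENCES = {"there is a pie for dinner": True}

    def ascent_routine(attitude, content):
        # "Semantic descent": drop the prefix and answer the lower-level question.
        if attitude == "believe":                     # this dispatch on the attitude verb
            answer = WORLD_FACTS.get(content, False)
        elif attitude == "desire":                    # reintroduces the very distinction
            answer = PREFERENCES.get(content, False)  # the routine was meant to eliminate
        else:
            raise ValueError("unknown attitude")
        prefix = "I" if answer else "I do not"
        return prefix + " " + attitude + " that " + content

    print(ascent_routine("believe", "Paris is the capital of France"))
    # -> "I believe that Paris is the capital of France"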
3. WHICH ROLE FOR NEURAL SIMULATION?

To sum up, our analysis suggests that psychological STs alone cannot account for naive psychological reasoning, which seems to require the intervention of conceptual, theoretical knowledge about the mind. It also points out that a neural ST is in principle independent of an ST formulated at a psychological level. Thus, rejecting a psychological ST does not rule out the possibility that MNs may be relevant to an account of the nature of naive psychology. Indeed, my aim is to reconcile (a reduced role for) neural simulation with a psychological, theoretical approach. For this purpose, I need to clarify an important point: I think that many aspects of psychological reasoning are grounded in neural simulation; but I also think that the most important aspects of psychological reasoning seem to have a non-simulative neural basis.

3.1 Understanding intentions without simulation
According to the philosopher Pierre Jacob and the neurologist Marc Jeannerod, MNs code a motor intention, e.g., the intention to perform a basic action such as pressing a switch with the right index finger.23 They also point out that motor intentions are but one of several kinds of intentions we commonly attribute: prior intentions, social intentions, communicative intentions. In their view, MNs cannot even code a prior intention (in their example, the intention to turn on the light), not to mention the other, less primitive intentional states. To attribute these kinds of intentions, a theory of mind is needed.
Yet a recent study with human beings suggests that MNs can code prior intentions.24 They fire when the person performs an action in order to achieve one goal but not when he performs the same action in order to achieve another goal (symmetrically, they fire when the subject sees someone acting to achieve some specific goal but not when the agent acts to reach another goal). Thus, it would appear that these cells do not code the immediate action, but the more abstract, distal goal. It will be interesting to follow this debate.
Nonetheless, I want to draw attention to another point: purely perceptual neurons, i.e., cells without simulative properties, can also have an important role in the attribution of social intentions, and an even more important role in high-level psychological practice. We have known for some time that, in rhesus monkeys, neurons in the superior temporal sulcus (STS) are sensitive to the view of specific parts of the body and to specific kinds of biological movement.25 For example, STS cells selectively discharge when the monkey sees a particular view of the whole body, such as the front, the back, etc., while other STS neurons code gaze direction or the orientation of the face. As regards the specificity for different kinds of movement, some STS neurons are specialized in coding biological translations, rotations, and so on. Most STS cells only respond to movements in a particular direction, such as left to right, or toward-away.
What is the function of those STS neurons? One possibility we should consider seriously is that they constitute the neural basis for a process of non-simulative intentional state attribution. As remarked by Jacob and Jeannerod, the interaction between two (or more) agents (which, I would add, is more central to naive psychological practice than an agent-object interaction) does not seem to be typically interpreted by simulation.26 For example, the neural basis for interpreting submission behavior seems to involve purely perceptual cues, without any simulation. Indeed, how could my neurons decide which one of the two agents to simulate? I can choose which perspective to adopt; but this surely cannot be an automatic, merely neurally-based process. More generally, there are many cases in which simulation does not seem to be involved: triadic relations, communicative situations, etc.
Further recent research by Walter et al. goes in the same direction, circumscribing the role of simulation.27 In a neuroimaging (fMRI) study they found, in the anterior paracingulate cortex of the human brain, a prefrontal area critically involved in the attribution of intentions to persons engaged in social interactions. Neurons in the anterior paracingulate cortex do not have simulative properties.
Considering all these data together, the role of neural simulation is certainly not annulled, but it is reduced. A more serious limitation of its role has been suggested by Csibra, who recently put forward an alternative interpretation.28 Csibra noticed that MNs do not discharge when the agent merely mimics the action, that is, when there is no real object to be grasped. These and other characteristics described in his paper are problematic for the standard view, according to which MN simulation discovers the intention of an act: during the process leading to the discovery that the action is not real (but just mimicked), MNs should have been involved; they should have contributed to revealing the trick. On the contrary, they remained silent. To account for these data, Csibra makes the hypothesis that actions are understood on a non-simulative neural basis. He claims that STS neurons do the crucial job, sending their purely perceptual representations to the F5 premotor area, to which they are notoriously connected. A purely perceptual representation coming from the STS is sent to the MNs, which then do something different from what the standard view says they do. Much work must still be carried out in this direction, but a suggestion comes from Dan Sperber's hypothesis that MNs are more properly concept neurons.29
Somehow compatible with Csibra's claims is a radically anti-Goldmanian view, which it would be interesting to explore in more detail. According to this alternative view, outlined in Meini (2001), during phylogeny our ancestors first acquired a third-person (proto) theory of mind, grounded in a non-simulative neural basis such as the analysis of gaze direction. Then, simulative neurons were recruited to relay third-person proto-knowledge to first-person proto-knowledge, thus providing the basis for first- and third-person full-blown naive psychology.
When we began the survey of experimental data, we were skeptical about a genuine, non-heuristic role for psychological simulation. That skepticism originated in some philosophical arguments raised against Goldman's and
Gordon's STs. The empirical analysis developed in this area seems to confirm that simulation is not the core of naive psychology, not even at the neural level. A purely perceptual neural basis seems to be more crucial.

4. CONCLUSIONS

I am sympathetic to the hypothesis of a shared simulative space, which counters the solipsism characteristic of neo-Cartesian theories such as Goldman's ST.30 I am also persuaded of the important, heuristic role of simulation in naive psychological reasoning. Nonetheless, it seems to me that the important research on MNs, together with the status given to these neurological data in the psychological community, has hidden some important questions at both the neurological and psychological levels. I have tried to outline a more composite and realistic view in this chapter.
Any ST, be it developed at a neural or at a psychological level, clearly stresses the continuity between human beings and other species. In my opinion, any form of Ockham's razor is a good starting point. But I also think counterevidence should be taken seriously. An unqualified eliminativist attitude risks not fully accounting for the complexity of human thinking.31
NOTES

1 Premack and Woodruff (1978).
2 See, e.g., Gopnik (1993).
3 Leslie (1987).
4 Stich and Nichols (1995).
5 Goldman (1995).
6 Gordon (1995).
7 Goldman (1995).
8 Ibid.
9 Goldman and Sripada (2005).
10 This step is not shared by other simulation theorists, such as Heal (1995).
11 Gordon (1995).
12 Nichols and Stich (2003).
13 See Rizzolatti (1988).
14 Kohler et al. (2001, 2002).
15 Umiltà et al. (2001).
16 See, e.g., Fadiga et al. (1995).
17 Gallese and Goldman (1998).
18 Gallese (2003).
19 Gallese (2004).
20 See, e.g., Stich and Nichols (1995); Meini (2001).
21 I have not considered here other moderate simulation theorists, such as Jane Heal or Gregory Currie, who do not share Goldman's first-person hypothesis. Still, since they recognize the intervention of a "mini-theory of mind" (Heal 1995), I consider them to be vulnerable to the same criticisms I raise against Goldman concerning the real role of simulation.
22 See Meini (2001).
23 Jacob and Jeannerod (2003, 2005).
24 Iacoboni et al. (2005).
25 See Perrett et al. (1982); Jellema et al. (2002).
26 Jacob and Jeannerod (2005).
27 Walter et al. (2004).
28 Csibra (2005).
29 Sperber (2005).
30 See Meini (2001).
31 I am grateful to Pierre Jacob and Alfredo Paternoster, whose comments on different versions of this paper were particularly useful.
CHAPTER 22

THE SOCIAL MIND

Francesco Ferretti
According to a classic model, dear to the anthropology, philosophy and linguistics of a good part of the last century, the human mind is a reflection of the social life of the individual. Two aspects characterize this model: the thesis that the (socio-cultural) factors external to the individual have priority over and autonomy from the individual's internal (bio-psychic) constituents; and the idea that the mind is determined through a process of internalization of external factors, following a one-way constitutive path "from external to internal". The very dualism created by both these aspects calls into question the validity of the classic model.
The underlying assumption of this chapter is a unitary, rather than dualistic, conception of the human mind. Against the one-way perspective offered by the classic model, we shall claim that the mind is the product of a two-way constitutive process, in which the factors which proceed "from internal to external" have the same relevance as those which proceed in the opposite direction. Our idea is that the human mind is the product of factors which are at the same time both external and internal to the individual. Put in these terms, such an idea appears to represent a banal truth which holds little appeal. There are good reasons, however, for putting forward the theme of the social nature of the mind in these terms: in the first place, because past attempts to unify theories of the mind have had little success; in the second place, because the difficulties which caused the failure of these attempts appear today to give way to more convincing solutions. Contemporary cognitive science and neuroscience may effectively offer satisfactory explanatory models of the bio-cognitive devices involved in the management of the complex inter-individual relationships among the members of a group. These devices, as we shall see, are precious tools for the investigation of internal/external relationships and for reconsidering the theme of the unitary nature of the human mind in a new light.

1. THE PRIMACY OF FACTORS EXTERNAL TO THE INDIVIDUAL

Two characteristics emerge from the vision of human nature offered by the theoretical reflection of a good part of the last century: firstly, the distinction between sciences of the spirit and sciences of nature, founded on the autonomy of social reality; and secondly, the primacy of external factors over internal factors. The two characteristics are closely related. They converge on a dualistic conception of human beings: to consider individuals the product of external (socio-cultural) factors is equivalent to considering biology as a merely marginal aspect of their nature. Only
the sciences of the spirit, acting in complete autonomy from the natural sciences, are able to capture the true essence of the human being.
Autonomy of social reality
According to Émile Durkheim, social reality should be analyzed in its pure form. Social realities have an autonomous and independent status and the work of the sociologist must be aimed at distinguishing two orders of phenomena: individual realities which embody social reality and social reality intended as “a reality sui generis vastly distinct from the individual facts which manifest that reality”.1 Social realities are autonomous and antecedent to individual psychic states: TP
PT
they consist of manners of acting, thinking and feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control over him. Consequently, since they consist of representations and actions, they cannot be confused with organic phenomena, nor with psychical phenomena, which have no existence save in and through the individual consciousness. Thus they constitute a new species and to them must be exclusively assigned the term social. It is appropriate, since it is clear that, not having the individual as their substratum, they can have none other than society […].2 TP
PT
Imposition through education highlights the unidirectionality of the constitutive process: social reality pervades individuals making them social agents. The circle is closed by a precise idea of the mind and, more generally, of human nature. If education is something which moulds the individual, the individual who is socialized through education is a plastic and indeterminate one. Individual natures, from this point of view, only represent “the indeterminate material which social reality determines and transforms”.3 The theory of the plastic and indeterminate nature of human beings was backed up by psychology, anthropology and philosophy for a good part of the twentieth century. The radical version of this theory is known as the Sapir-Whorf Hypothesis. Two assumptions characterize it. Firstly, “linguistic determinism”, according to which the thoughts of individuals are determined by the categories of their language; and secondly, “linguistic relativism”, for which different languages determine different thoughts. Tooby and Cosmides define “Standard Social Science Model” this conception of the human mind.4 We shall not examine in detail the criticisms which it is possible to level at such a conception, suffice it to say, for the purposes of this chapter, the important point is the relationship between the primacy given to external factors and the nature of the mind. When the path uniquely proceeds “from external to internal”, what is indeed required is a markedly plastic and indeterminate mind. But can the mind really be thematized in this way? The only model of functional architecture which the Standard Model appears to refer to TP
PT
TP
PT
THE SOCIAL MIND
297
is something similar to the tabula rasa, an hypothesis on the mind which is literally hopeless.5 For Tooby and Cosmides the “impossible psychology” on which the Standard Model is based, lies in the dualistic distinction between sciences of nature and sciences of the spirit. It is, therefore, towards overcoming this dualism that a conception of the mind which has an effective explanatory value should aim. TP
1.2
PT
Unifying approaches
Linguistic determinism and linguistic relativism are in marked decline, as is the concept of the tabula rasa. And the idea that the time has come for a reconciliation between natural sciences and sciences of the spirit has become the prevailing trend. Some sectors of cognitive science, are now looking with interest at the work of Vygotskij, an author who attempts to put forward a unitary vision of the mind and the human being.6 For Vygotskij, the specificity of the human being is guaranteed by the acquisition of speech. It is this event which determines a completely new organization of the child’s thought. Thus, since speech is an instrument external to the individual, produced by socio-historical exchanges among peoples, the growth which leads to the development of higher cognitive functions in the child is considered as a process of internalization. It is the nature of this process which is of interest here. It is known that “egocentric language” constitutes the decisive mediation device through which social language as an instrument for communication becomes a tool for thought: TP
PT
The great change in children’s capacity to use language as a problemsolving tool takes place somewhat later in their development, when socialized speech (which has previously been used to address an adult) is turned inward. Instead of appealing to the adult, children appeal to themselves; language thus takes on an intrapersonal function in addition to its interpersonal use. When children develop a method of behavior for guiding themselves that had previously been used in relation to another person, when they organize their own activities according to a social form of behavior, they succeed in applying a social attitude to themselves. The history of the process of the internalization of social speech is also the history of the socialization of children’s practical intellect.7 TP
PT
The socialization of thought is, thus, closely linked to the internalization of social language. All higher psychological functions, in fact, should be considered in terms of internalization of social relationships: even psychological factors within the individual are the result of internalized social relationships. The primacy of external factors over internal factors constitutes the basis of the influence of Vygotskij in some sectors of cognitive science. Dennett and Clark, for instance, use the concept of external scaffolding to move the computational
298
FRANCESCO FERRETTI
weight of processing towards the external, in this way removing structures and internal functions from the mind.8 According to these authors, the external scaffolding par excellence is public language, a system of symbols which individuals find ready-made (outside themselves) at birth and which, through an acquisition process, invade their minds. Typically human capacities such as selfreflection and metarepresentation depend on this invasion. Dennet claims that selfreflective thought could never exist without public language; in particular he underlines that capacities such as self-reflection could never exist without a constitutive process which “begins with the overt, public use of symbols and tokens of one sort or another […] and creates practices that later can be internalized and rendered private”.9 The idea that the process is activated by the public use of symbols and is constituted as a phenomenon which is to all intents and purposes an invasion deserves particular attention. To what extent does a theory of the sort stand up to scrutiny? Since the words, rather than the language, are at the center of the theory of Clark and Dennett, it is legitimate to wonder if lexical acquisition models available today conform in one way or another to the theory put forward by the two authors.10 The answer is negative. The prevalent empirical models today go in exactly the opposite direction. Data from cognitive psychology highlight the fact that, among other factors, the acquisition of words is guided by strong internal constraints.11 We have no empirical data in support of the theory in which words invade the mind. The acquisition of lexis requires minds which are internally rich and structured in a complex manner. What Vygotskij and followers attempt to do in bringing about the convergence of mind and society is still within a dualistic conception of the mind, it is still a conception which regards social reality as an autonomous entity outside the individual. The price to pay in maintaining the primacy and autonomy of external factors is, as we have seen, reference to a plastic and indeterminate mind. Yet, such a conception of the mind is not supported by the facts. The great majority of models available today relative to the acquisition, the functioning and the pathology of cognitive skills conceives the mind as a rich and complex whole of processing devices. Thus, since a conception of the mind unsupported by empirical data fails to provide a feasible basis, it is necessary to find alternative paths. Our thesis is that only examining the possibility of a dual constitutive process, proceeding “from the external to the internal” and vice versa, it is possible to truly bring about the integration of mind and society. Now is the time to take into serious consideration the role which factors inside the individual play in the creation of social interaction. TP
TP
PT
PT
TP
TP
PT
PT
2. AN ALTERNATIVE CONSTITUTIVE PROCESS: THE BIO-COGNITIVE FOUNDATIONS OF SOCIETY

The human being is not the only social animal. The strategy of understanding the social nature of human beings starting from their peculiar characteristics holds true only on condition that it is combined with the study of those
characteristics which humans share with other social animals. Society is not an abstract entity. Every society is constituted by the number of individuals which make it up and the network of relationships which they create in relating to each other. From this point of view, human society too, while undoubtedly possessing its own specificity, cannot be understood in abstraction from the bio-psychological characteristics of the individual: both the number of individuals and the kinds of relationships which they can form in exchanges within the group are in fact constrained by the nature of the bio-cognitive system which characterizes each social animal.12
With respect to single individuals, the group solves a multiplicity of adaptive problems. Nevertheless, the life of the group also creates tensions: social relations require a nervous system able to manage them. Dunbar underlines the limitation which the volume of the cerebral cortex imposes on the dimensions of social groups.13 The role of the nervous system in the management of relations with other individuals emerges clearly when, following Berthoz, the brain is considered as a machine essentially assigned to predict the future, to anticipate the consequences of one's own or other people's actions, to "buy time".14 The most evolutionarily advanced species are those which have learned to buy time on others, both in hunting prey and in anticipating the attacks of predators. The brain is primarily a biological machine with which to stay one step ahead of the others.
The ability to anticipate the actions of others is the characteristic trait of social intelligence. The difference between solving problems imposed by the physical environment and competing with the other individuals who compete for the solution of the same problems is crucial from the point of view of adaptation. Humphrey shows that the genesis of the social intellect is the true evolutionary leap forward in the development of human intellectual capacity.15 With the expression "Machiavellian intelligence", Byrne and Whiten refer to one of the most pressing evolutionary challenges with which primates had to deal, i.e., the ability to predict and control the behavior of others, using them as a means to one's own ends.16
Possessing a social intelligence means possessing a system for the comprehension and prediction of behavior, that is to say, a system of interpretation. Now, what is it to interpret behavior? Let us take the case of human beings. Their way of interpreting behavior appears to conform to what Dennett calls the "intentional stance", i.e., the ability to understand and predict behavior (one's own or that of others) by attributing mental states to the agent.17 This is the ability to "mentalize" behavior or, as it is also called, "mindreading". In order to highlight the fact that mentalization comes about through a complex and integrated system of knowledge relative to the intentional states of the agent, Premack and Woodruff call this capacity Theory of Mind (ToM).18 Even though the two authors recognized that chimpanzees also possess such capacities, the prevailing view today is that ToM "represents a sort of 'mental Rubicon', sanctioning the uniqueness of human cognitive capacities".19 Effectively, speaking of ToM appeals to a system of propositional structures underlying a sophisticated metarepresentational system which cannot easily be hypothesized for non-human animals.
Accounting for the capacity to mentalize with exclusive reference to ToM appears to be largely fruitless, since it clearly divides humans from animals and reintroduces unacceptable forms of discontinuity from a naturalistic viewpoint. According to Gallese, the theory that considers the attribution of intentional states “solely determined by metarepresentations created by ascribing propositional attitudes to others, is biologically implausible”.20 So how may we account for the capacity to mentalize typical of human beings from a continuist point of view? To start with, mentalization does not coincide with ToM. It is possible to hypothesize abilities to interpret behavior which do not use propositional attitudes. The simulation theory, which views the interpretation of behavior as putting oneself in the shoes of who carries out the action, offers an important contribution to the study of mentalization from a continuist perspective.21 Some cognitive skills of great social relevance, such as imitation and empathy can, in effect, be explained on the basis of an interpretative model, which is particularly close to the theme of evolutionary forerunners, founded on assumptions different from those used for ToM. One such model has on its side significant empirical backing. Rizzolati et al. have discovered a class of neurons, the “mirror neurons” identified initially in the premotor cortex of macaques, active during the execution of targeted actions.22 These neurons discharge each time the macaque takes a certain object independent, for example, of the fact that it takes the object in its hand or mouth. The common denominator of behaviors of this kind does not consist of the movement that constitutes the action but in its aim. The mirror system, which humans also possess, is activated not only during the execution of an action but also during the observation of someone who is carrying out that action. In these cases, those who observe the action of another do not effectively carry out the observed action. Nevertheless, the motor system is activated as if this were the case: the perception of an action implies the simulation of that action.23 Eliminating the distinction between who interprets the behavior and who effectively acts, the simulation model offers an alternative explanation to the metarepresentational model. The implications of such a move on the level of social intelligence find confirmation in two important cognitive skills: imitation and empathy. Already at 12-21 days infants are able to imitate the extension of the tongue and opening the mouth. The reason for which imitation can be considered a forerunner of the ability to mentalize is that it presupposes the reading of behavior based on the aims, intentions and desires which have determined it.24 The most interesting aspect is that these first forms of imitation give rise, as Gallese claims, to the formation of a “we-centric” space: TP
The importance of early imitation for our understanding of social cognition is that it shows that interpersonal bonds are established at the very onset of our life, when no subjective representation can yet be entertained by the organism because a conscious subject of experience is not yet constituted. The absence of a self-conscious subject does not preclude, however, the constitution of a primitive "self-other space", a paradoxical form of intersubjectivity without subjects. The infant shares this "we-centric" space with the other individuals inhabiting his world.25
The same mechanisms on which imitation is based also underlie empathy (Gallese 2003c). Observing the actions of others leads to a sharing of emotions and feelings which goes beyond the conceptual classification of an action. The understanding of the affective expressions of behavior comes about automatically and without the mediation of higher cognitive faculties. It is once again the reference to embodied simulation which provides an explanatory key to empathy. This reference has more general repercussions on the nature of the mind. Rather than a metarepresentational mind, which interprets others (taken as external subjects) through the filter of the conceptual system underlying ToM, a conception of the mind as a "resonance mechanism" emerges: a system which simulates within itself what happens externally (Rizzolatti et al. 2002).

Gallese is right to say that exclusive reference to the metarepresentational system fails to exhaust the possibilities, human ones included, of mentalizing behavior. And he is right, above all, in claiming that ToM can exist only by reference to a series of cognitive precursors which humans and animals have in common. Nevertheless, it is not clear what the role of ToM is in the model he proposes. Certain claims seem to admit that ToM can coexist with the capacity to mentalize based on the embodied simulation model: to claim that evolutionary forerunners to ToM exist is in fact to admit the existence of ToM; to claim that metarepresentations are not the only way of mentalizing is to grant that they are, nevertheless, one means of doing so. Other claims seem to go in the opposite direction: noting that ToM is based on a "disembodied" model of the mind, for example, Gallese maintains that the empirical evidence available today leads us to conclude that such a model is biologically implausible.

With regard to this last point there are three considerations to be made. The first is that, although the simulation theory offers important indications concerning the cognitive components underlying some relation-building capacities, these capacities clearly do not exhaustively explain the way in which human beings relate to one another. To say that human beings make use of a hereditary system common to other animals is true, but it still leaves work to be done: it fails to explain what type of specific relationships humans are able to establish in their interpersonal exchanges. When human sociality is the object of analysis, it is necessary to study not only the elements of continuity but also the peculiarities which characterize it. The second consideration regards a more general view of what the mind should be. The idea which emerges from a reading of Gallese is that thinking of the mind as a resonance mechanism comes into irresolvable conflict with thinking of the mind as a representational-symbolic system. Our idea is that the two perspectives are not mutually exclusive; we shall not examine this question in detail here.26 The third consideration concerns the relationship between metarepresentation and a discontinuist viewpoint. The theory that we shall put forward is that it is possible to tackle the theme of the phylogenesis of metarepresentations and that, therefore, taking a metarepresentational stance does not in itself mean adhering to a discontinuist conception.

ToM represents the peculiar way in which humans interpret behavior. This does not mean that ToM is their only way of mentalizing actions or that ToM can disregard cognitive precursors common with other animals. It does mean, however, that its specificities await plausible explanatory models. The typical character of human mentalization is given by the role of language in constituting a specific metarepresentational system. The analysis of the relationship between language and mentalization opens the road to the possibility of testing a conception of the mind in which a bidirectional constitutive model is called into play. In dealing with this aspect of the problem we shall argue that the road towards a truly unitary perspective on the relationships between mind and society is that of co-evolution between external and internal factors. Only such a hypothesis can account for both the continuity and the specificity of human cognitive systems.

3. METAREPRESENTATIONAL CONTINUISM

Although the metarepresentational system underlying ToM is used as proof of a clear-cut demarcation between human beings and other animals, our idea is that considering ToM as the mentalization system specific to humans does not necessarily mean adhering to a discontinuist viewpoint. Evidence to support this idea comes from studies on the phylogenesis of mentalization. The first point to emerge from these studies is that the relationship between metarepresentation and primary representation cannot, in fact, be framed as a clear-cut opposition: there are different levels and different forms of metarepresentation. Granted that the metarepresentations used in attributing propositional attitudes are typical of humans, it does not follow that nonhuman animals relate to their environment only through primary representations. The cue for arguing in this direction is offered by a series of articles which analyze the genesis of metarepresentations starting from the case of nonhuman animals;27 more specifically, a further cue is offered by an essay in which Suddendorf and Whiten invite continuists not to get too depressed about things.28

In the first place, a terminological question. Following Perner,29 it is opportune to distinguish two different senses of "metarepresentation":
1. metarepresentation as the representation of a representation;
2. metarepresentation as the representation of a representation as a representation.

Traditionally, metarepresentation in its full sense is that defined in 2. The false belief test is the test standardly considered valid for identifying the presence of representational states of this type: in order to pass it, as Perner underlines, one has to master the distinction between what a representation represents and how it represents something as something (how it represents something as being a certain way). Attributing the capacity to mentalize by means of the false belief test can, however, be a restrictive way, calibrated to human capacities, to pose the question. Since the false belief test imposes constraints which are too strict, the fact that animals cannot pass it does not show that they are unable to employ metarepresentation; it shows only that they are not able to employ metarepresentation in its full sense.

An interesting intermediate level of metarepresentation is that of secondary representations. They are characterized by important properties, such as the capacity to "suspend", through the process of decoupling described by Alan Leslie in his account of pretend play,30 the normal causal constraints which link primary representations to the outside world. This suspension makes secondary representations able to stand for hypothetical or virtual situations: situations for which it is possible to give multiple, alternative, and even mutually contradictory pictures of a fact or an object, such as the possibility in pretend play of representing an object simultaneously for what it is (let us say a banana) and for what we pretend it is (a telephone or a pistol). According to Suddendorf and Whiten, such properties have significant repercussions on the nature of the mind:
The advantage of simultaneously entertaining a second model is, of course, the ability to collate the two—that is, to bring them into propositional relation. One of us therefore coined the term collating mind to describe the kind of mind that uses secondary representations (Suddendorf 1999). It is the ability to consider a mental model of a situation not currently perceived (be it past, future, or hypothetical). Although this capacity may be a prerequisite to becoming a higher order intentional system, it is a more general skill that can be applied to various areas other than mind reading.31
A collating mind is capable of sophisticated conceptual activity: not only a primary understanding of "mind" (from which epistemic states are, however, excluded), but also an understanding of object persistence, means-to-end reasoning, empathy, imitation, the interpretation of external representations, and self-recognition in the mirror. Experimental data show that such capacities, present in an analogous way in both the 2-year-old child and the great apes, are accounted for by secondary representations. These representations can be considered a fundamental developmental bridge in the passage from primary representations to metarepresentations in their full sense. Such data support the argument from "evolutionary parsimony", which takes homology, rather than analogy, as the keystone for understanding the properties of a given species. From this point of view, secondary representations are characteristics which the great apes share with humans, having inherited them from a common ancestor. A final point remains to be discussed. What allows the passage from a system of mentalization based on secondary representations to a mentalization system able to attribute epistemic states and, above all, false epistemic states? How is it possible to pass from a generic and shared ability to mentalize to that specific capacity, regulated by propositional attitudes, represented by ToM?
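The grades of representation just distinguished can be set out in a minimal Python sketch (purely illustrative; the data structures and labels are invented for this example and appear in none of the works cited): a primary representation causally coupled to the world; a secondary representation decoupled from it, as in pretend play; and a full metarepresentation, which treats another agent's representation as a representation that may misrepresent, and which is what the false belief test probes:

from dataclasses import dataclass

@dataclass
class Primary:
    content: str              # causally coupled to the current state of the world

@dataclass
class Secondary:
    content: str              # e.g. "this banana is a telephone"
    decoupled: bool = True    # Leslie's decoupling: normal causal constraints suspended

@dataclass
class Meta:
    agent: str
    represented: Primary      # what the agent's representation says
    may_be_false: bool = True # represented AS a representation, i.e. as possibly wrong

def predict_search(world_location, belief):
    # Full metarepresenters predict behavior from the content of the agent's
    # belief, not from the actual state of the world.
    return belief.represented.content

pretend = Secondary("this banana is a telephone")               # pretend play
sally = Meta("Sally", Primary("the marble is in the basket"))
# The marble was moved to the box while Sally was away:
assert predict_search("box", sally) == "the marble is in the basket"

On this sketch, secondary representations already suspend the coupling to the world; what the full metarepresentational step adds is the marking of another's representation as something that can be false, and hence usable to predict mistaken behavior.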
4. CO-EVOLUTION

The peculiar trait of the Theory of Mind is the capacity to interpret behavior in a context of false beliefs. In order to understand such behaviors, "the particular content of the propositional attitude ascription is critical to the understanding, because it is content that is not simply reflected in the observer's own world view".32 The decisive role played by propositional attitude ascriptions of this type rests on a certain class of linguistic constructs, namely complement structures: sentence structures in which the object of the main mental-state verb (think, believe, guess, and so on) is a subordinate clause, as in (1):
(1) Mary believes that John wants a can of Coca-Cola.

Thus, the point which links language to the theory of mind is precisely this possibility, assigned to syntax, of constructing complement structures:

The complement structure invites us to enter a different world, the world of the girl's mind, and suspend our usual procedures of checking truth as we know it. In this way, language captures the contents of minds, and the relativity of beliefs and knowledge states. These sentence forms also invite us to entertain the possible worlds of other minds, by a means that is unavailable without embedded propositions. Pictures cannot capture negation, nor falsity, nor the embeddedness of beliefs-that-are-false, unless we have a propositional translation alongside. So this special property of natural languages allows the representation of a class of events—the contents of others' minds—that cannot be captured except via a system as complex as natural language.33
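The recursive embedding that complementation affords can be pictured with a small sketch (again purely illustrative, and invented for this example): an attitude report contains a complement clause which may itself be another attitude report, and the truth of the whole report is independent of the truth, "as we know it", of the innermost embedded proposition:

from dataclasses import dataclass
from typing import Union

@dataclass
class Proposition:
    text: str
    true_in_world: bool   # truth "as we know it"

@dataclass
class Attitude:
    agent: str
    verb: str             # think, believe, guess, want, ...
    complement: Union[Proposition, "Attitude"]   # complements embed recursively

# (1) Mary believes that John wants a can of Coca-Cola.
ex1 = Attitude("Mary", "believes",
               Attitude("John", "wants",
                        Proposition("John has a can of Coca-Cola", True)))

# A report with a FALSE complement is still a perfectly good report: evaluating
# it suspends the usual procedure of checking truth against the world.
false_belief = Attitude("Sally", "believes",
                        Proposition("the marble is in the basket", False))
assert false_belief.complement.true_in_world is False

Only the recursive clause structure carries a content evaluated against the attributed mind rather than against the world; this is the property that, on the hypothesis under discussion, pictures and other non-propositional media lack.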
Only language possesses the representational tools necessary to enable the cognitive system to ascribe mental states endowed with propositional contents. Empirical support for this hypothesis can be found in the case of oral deaf children: experimental results highlight a close link between their impoverished syntax and their incapacity to ascribe intentional states in describing events. The case of oral deaf children supports the idea of the constitutive role of language in thought; as de Villiers and de Villiers claim, "an individual with less language or no language would not be able to formulate the appropriate representation of another person holding a false belief and hence would have no basis for reasoning about that person's actions".34 Tager-Flusberg follows the same path.35 On her hypothesis, too, the syntactic structures underlying complementation precede the ability to pass the false belief test and must, therefore, be grounded in deep cognitive devices. Thus, since speaking of language, for the authors just cited, means referring to public language, i.e. historico-natural languages, one might maintain that the empirical data they provide reinforce the theory of language as external scaffolding. Should we, then, once again admit the primacy of external factors over internal factors? And must we return, once again, to the old theory of the one-way constitutive process?

One way of answering the question is to ask which of the two components, mentalization or language, precedes the other; in other words, which factor is constitutive of the other.36 In very simple terms: for language to play the role it plays in the capacity to mentalize, there must first be a language. The hypothesis of the priority of language over mentalization comes up against enormous difficulties concerning the genesis of language itself, and for a simple reason: when the evolution of language out of more generic forms of communication, shared with other animals, is investigated, reference to a metarepresentational system seems an unavoidable step. Sperber and Wilson, following Grice, place the intention of the speaker at the basis of the comprehension and production of language.37 Reference to the intention of the speaker highlights the difficulties of the code model (according to which communication is achieved by encoding and decoding messages) in explaining comprehension-production processes in verbal communication. Furthermore, it shows the impossibility of explaining these processes without a device able to read the minds of others. Empirical evidence supporting this theory comes from the difficulties shown by autistic subjects—affected by mindblindness38—in cases of communication where, as in metaphor and irony,39 the gap between what the speaker says and what the speaker intends is evident. If language is characterized by the presence of a mindreading device, the evolutionary passage from communication to language must presuppose such a device. In other words, mentalization must precede language. Indeed, as Origgi and Sperber claim:
The function of linguistic utterances […] is—and has always been—to provide this highly precise and informative evidence of communicator's intention. This implies that language as we know it developed as an adaptation in a species already involved in inferential communication, and therefore already capable of some serious degree of mind-reading. In other terms, from the point of view of relevance theory the existence of mind-reading in our ancestors was a precondition for the emergence and evolution of language.40
Verbal communication rests upon a mechanism originally dedicated to the interpretation of behavior. Only by admitting such a device is it possible to account for the advent of language; and only by admitting this advent is it possible to account for the role of language in the constitution of ToM. Moreover, without the capacity to mentalize it is possible to explain neither the ontogenesis nor the actual use of language, i.e. the ability to understand and produce linguistic expressions; in other terms, hardly anything can be explained about what makes language language. Maintaining the primacy of the mentalization system over language does not exclude the fact that language, once constituted, has rebound effects on the capacity to mentalize. Complementation, as we have seen, is a significant case of the role of language in constituting specific relationship-building skills among human individuals. Sustaining such a theory is not, however, equivalent to a return to the past, a revisiting of Vygotskij's theory. Indeed, the dependency of language on the mentalization system excludes the possibility of calling back into play the thesis of the primacy of external factors over internal factors. The relationship between language and mentalization must be considered in terms of a process of co-evolution.

5. CONCLUSION

Investigating the social nature of human beings means investigating the specificities which mark them out. Since human beings are not the only social animals, however, the peculiarities which characterize them cannot be analyzed independently of what they share with other social animals. In order to explain the unitary nature of human beings it is necessary to account at the same time for the elements of continuity and of specificity which characterize them.

The social mind represents a useful testing ground for this hypothesis. Language can play its role in constituting the specific mentalization tool which human beings employ only if language itself draws, both in its genesis and in its everyday use, on phylogenetically antecedent mentalization systems. Language, the principal instrument for the management of interpersonal relations, thus perfectly embodies the coexistence in humans of peculiar traits and lines of continuity. Attention to the bio-cognitive foundations of social relations brings out clearly the fact that, beyond declarations of intent, all the hypotheses which privilege the role of factors external to the mind and continue to appeal to a one-way constitutive process cannot account for the unitary nature of the human being. Only by accepting the idea of co-evolution between external and internal factors is it possible to put forward a conception of human nature in line with a naturalistic viewpoint. As is easy to imagine, there is still much to be done in this direction: how, concretely and in detail, the idea of co-evolution between external and internal factors is to be realized is a question which awaits effective explanatory models. At least from a methodological point of view, however, the way is laid. The analysis of the bidirectional constitutive process seems to be the only way to avoid considering the human being in dualistic terms and, therefore, to lay to rest at last the distinction between the natural sciences and the sciences of the spirit.41
NOTES

1 Durkheim ([1895] 1982, p. 54).
2 Ibid., p. 52.
3 Ibid., p. 132.
4 Tooby and Cosmides (1992).
5 See Pinker (2002).
6 See Frawley (1997).
7 Vygotskij (1978, p. 27).
8 Dennett (2000); Clark (2003).
9 Dennett (2000, p. 21).
10 Dennett and Clark deserve credit for restricting the field of investigation solely to the acquisition of the lexicon: putting forward a theory of the acquisition of language as a whole in terms of "invasion" would be simply unsustainable. After Chomsky, every hypothesis of language acquisition must take into account the poverty of the stimulus argument, an argument which appears to exclude the idea of plastic and indeterminate minds (see this volume, p. 7).
11 Bloom (2000, 2004); Landau and Gleitman (1994).
12 In claiming that social relations must constantly be constrained by the analysis of the bio-cognitive mechanisms which make them possible, we admit, in point of fact, an asymmetric relationship between internal and external factors. In this sense, despite the distinction of levels of analysis, our investigation agrees with the thesis of Kostko and Bickle (this volume, chapter 23), according to which, in order to speak in a proper sense of the causal role of social factors in behavior, it is necessary to have a theory of the neurological mechanisms which implement them.
13 Dunbar (1993, 1996).
14 Berthoz ([1987] 2002).
15 Humphrey (1975).
16 Byrne and Whiten (1988); Whiten and Byrne (1997).
17 Dennett (1978). See also this volume, p. 3.
18 Premack and Woodruff (1978).
19 Gallese (2004, p. 165).
20 Ibid., p. 166.
21 Gordon (1996).
22 Rizzolatti et al. (1988).
23 Gallese (2004).
24 Meltzoff (2002); Gattis et al. (2002).
25 Gallese (2004, p. 162).
26 For a closer analysis see this volume, chapter 21, in which the compatibility of some simulation-theory models with some theory-theory models is demonstrated.
27 Cosmides and Tooby (2000); Sperber (2000); Suddendorf (1999); Whiten (2000); Whiten and Byrne (1991).
28 Suddendorf and Whiten (2001).
29 Perner (1991).
30 Leslie (1987).
31 Suddendorf and Whiten (2001, p. 630).
32 de Villiers and de Villiers (2003, p. 337).
33 de Villiers (2000, p. 90).
34 de Villiers and de Villiers (2003, p. 338).
35 Tager-Flusberg (2000).
36 See Sperber (2000).
37 Sperber and Wilson (1986, 1995, 2004).
38 Baron-Cohen (1995).
39 See Happé (1995).
40 Origgi and Sperber (2000, p. 165).
41 Many thanks to Giovanni Iorio Giannoli and Thomas Suddendorf for helpful comments on previous versions of this chapter.
CHAPTER 23

SOCIAL BEHAVIORS AND BRAIN INTERVENTIONS: NEW STRATEGIES FOR REDUCTIONISTS

Aaron Kostko and John Bickle
Philosophers interested in social behavior typically conduct their investigations without paying much attention to experimental results from either the neurosciences or the social sciences, and for good reason. Despite the potential relevance of these disciplines for traditional philosophical problems about rationality, decision-making, and personal identity, some methodological limitations have proven daunting. Although the social sciences track organisms' natural responses to their environments, these sciences cannot specify the mechanisms whereby environmental features exert their influence on behavior (qua bodily movements). The neurosciences, on the other hand, in their pursuit of the cellular and molecular mechanisms underlying behavior, have largely been confined to analyzing simple, highly controlled behaviors under artificial conditions. Recently, however, these two disciplines have begun to converge toward the goal of better understanding the causal mechanisms underlying social behaviors. Here we will explore this convergence in detail by describing a recent study showing that social rank in macaque troop dominance hierarchies affects the number and availability of dopamine D2 receptors in the brain, and subsequently individual monkeys' susceptibility to self-administer cocaine.1 We will use this study to sort out the relative impact that each of these disciplines contributes toward understanding the causal mechanisms underlying social behaviors. Although we will be arguing for the priority of reductionistic neurobiology, one moral is less controversial: the methodological barriers that previously allowed philosophers interested in complex social behaviors to overlook the experimental results of both the social sciences and the neurosciences no longer obtain. The emergence of social neuroscience changes that. Philosophers need to confront this recent convergence or risk irrelevance.
1. THE SOCIAL SCIENCES AND NEUROSCIENCES AS EQUAL PARTNERS? A CASE STUDY

The social sciences and the neurosciences have traditionally been strangers, each equipped with distinct research aims and methods. However, as their research aims have begun to converge, the two disciplines have found themselves partnered in transdisciplinary projects. Social psychologist John Cacioppo and his colleagues clearly articulate why the social sciences and the neurosciences not only can be, but must be, partners in investigating the causal mechanisms underlying social behaviors: "Comprehensive accounts [of behavior] will remain elusive as long as either biological or social levels of organization are considered unnecessary or irrelevant".2 A "multilevel integrative analysis" is required to explain social behaviors. Each discipline is concerned with different levels of organization and has distinct methodologies appropriate for explaining phenomena at that level. The resources of each are needed to fully explain the causal mechanisms responsible for social behaviors.

The relevance of each discipline seems apparent. A properly functioning nervous system is essential for engaging in any meaningful form of social interaction. Making sense of another's behavior requires the ability to form representations of others' mental states (e.g., thoughts, intentions, beliefs, desires), and this ability depends on a properly functioning nervous system. One need only consider clinical studies of patients with specific brain lesions to see this. Damasio and Eslinger report the case of a patient who underwent resection of orbitofrontal cortex to remove a meningioma.3 The resection resulted in loss of function to the right and part of the left orbital cortex, areas with extensive connections to emotional centers in the limbic system. Although the patient scored above average on intelligence tests, he lost all ability to respond appropriately to social situations and could not make accurate appraisals of the motivations and attitudes of the persons he interacted with. Autistic subjects suffer from similar behavioral deficits. They can form first-order representations of people and events based on perceptual experience (e.g., "That person is sad now"), but they cannot form second-order representations (e.g., "That person is sad now because he thought others were making fun of him").4 They too lack the capacity to attribute mental states to others, making meaningful social interactions very difficult.5

Although a properly functioning nervous system is essential for engaging in social behaviors, social behaviors have been claimed to exert a reciprocal influence on neurobiology.6 A recent study by Michael Nader and his colleagues might be interpreted as exemplifying this reciprocal influence.7 Previous rodent studies had shown that disruptions to dopaminergic systems alter responses to reward and the reinforcing effects of cocaine. Nader's lab extends this research to nonhuman primates, measuring the impact of individual versus social housing on the number and availability of dopamine D2 receptors and on cocaine self-administration. In the Morgan et al. (2002) study, twenty macaque monkeys were individually housed for 1.5 years. PET scans were administered to determine the relative distribution of D2 dopamine receptors, a class that has been implicated in cocaine reinforcement and addiction.8 All individually housed monkeys, regardless of their future role in dominance hierarchies following social housing, had statistically similar D2 receptor distribution volume ratios in the basal ganglia.9 However, once the transition from individual to social housing had occurred and a stable dominance hierarchy had been established, there was a statistically significant increase in D2 receptor distribution volume ratios in the monkeys that achieved dominant status (a mean percent change of 22.0 ± 8.8%), with no increase (compared to their individual-housing ratios) in monkeys that obtained subordinate positions.
This increase produces a decreased amount of synaptic dopamine (since more receptors are available for ligand binding). Higher levels of synaptic dopamine, dubbed "dopaminergic hyperactivity", have been associated with an increased vulnerability to drug abuse.10 Not surprisingly, the socially housed dominant monkeys of Nader's lab self-administered cocaine less frequently than the subordinate monkeys, both in the number of intravenous injections self-administered per session and in the total amount of cocaine injected (mg per kg body weight per session). At optimal doses, total intake per session by subordinate monkeys more than doubled that of dominants.11

While monkeys' social rank had a significant influence on D2 receptor distribution and cocaine reinforcement during initial exposure to the drug, follow-up studies with these same monkeys have shown that continued, long-term cocaine exposure can attenuate this influence. Czoty et al. (2004) found no significant differences in D2 receptor distribution volume ratios in basal ganglia or in cocaine self-administration among socially housed monkeys who had self-administered cocaine several times a week for 2-5 years. Continued exposure to cocaine led to increased self-administration by dominant monkeys, suggesting that the drug eventually served as a reinforcer in all monkeys regardless of social rank. The authors hypothesize that this attenuated effect of social rank indicates a progression of cocaine use "phases". The Morgan et al. (2002) study represents the effects of initial exposure, referred to as the "acquisition" phase. The Czoty et al. (2004) study, on the other hand, tracks the impact of continuous exposure, referred to as the "maintenance" phase. Nader and his colleagues contend that we should not expect the environmental variables influencing cocaine's initial reinforcing effects (acquisition), namely social rank, to be the same ones influencing the drug's later reinforcing strength (maintenance).

Although further research is required to determine the neurobiological states associated with the progression of drug abuse, the authors suggest that one implication of their results is clear. Since the changes in D2 receptor number and distribution in basal ganglia, and the susceptibility to self-administer cocaine, could not initially be predicted prior to the emergence of a monkey's status in a dominance hierarchy, vulnerability to drug abuse is more a consequence of social/environmental factors than of genetic predisposition. Their results seem at first glance to represent a paradigm example of Cacioppo's "multilevel integrative analysis": although neurobiology is necessary to understand changes in D2 receptor distributions, these results seemingly cannot be interpreted correctly without acknowledging the relevance of the social influences, namely the monkeys' positions within dominance hierarchies.
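For concreteness, the quantities reported in this study can be sketched as follows. The input values are hypothetical placeholders (only the mean percent change of 22.0 ± 8.8% cited above comes from Morgan et al. 2002), and the computation is simply the ratio and percent-change arithmetic implicit in note 8, not the study's actual analysis pipeline:

def distribution_volume_ratio(roi_binding, cerebellum_binding):
    # PET distribution volume ratio: radioligand binding in a D2-rich region
    # of interest (basal ganglia) normalized by a reference region with low
    # D2 receptor density (cerebellum).
    return roi_binding / cerebellum_binding

def percent_change(before, after):
    return 100.0 * (after - before) / before

# Hypothetical values for one monkey, scanned while individually housed and
# again after achieving dominant status in a social group.
dvr_before = distribution_volume_ratio(roi_binding=2.00, cerebellum_binding=1.00)
dvr_after = distribution_volume_ratio(roi_binding=2.44, cerebellum_binding=1.00)
print(f"{percent_change(dvr_before, dvr_after):.1f}%")   # 22.0%, matching the reported mean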
2. THE SOCIAL SCIENCES AND NEUROSCIENCES AS COMPETITORS: DETERMINING THE CAUSAL STATUS OF ENVIRONMENTAL FACTORS

Nader's work clearly demonstrates a role for environmental factors in understanding the causal mechanisms of social behaviors. But does it show that the social sciences are equal partners with the neurosciences in these pursuits? We contend that a closer examination of the levels of organization and methods of explanation characteristic of each discipline places the equal partnership of the social sciences and the neurosciences on shaky ground, even for investigating social behaviors.

We begin with some abstract, philosophical considerations. The social sciences traditionally focus on interactions between individuals and their embedding social environments. They emphasize the impact of environmental influences on self-conception and behavior. To characterize these environmental influences, social scientists systematically observe and describe behaviors, often measuring or manipulating various aspects of the social environment. Their descriptions typically posit abstract entities to explain particular behaviors. As Cacioppo et al. point out, "The social world […] is a complex set of abstractions representing the actions and influences of and the relationships among individuals, groups, societies, and cultures".12 Examples of such abstractions include social isolation, social/peer pressure, and social status. In Nader's research, the social abstraction employed is social status, i.e., position within a primate dominance hierarchy.

The neurosciences, on the other hand, focus ultimately (at least for now) on interactions between cellular and molecular mechanisms within nervous and other biological tissues. They seek to explain behavior in terms of these underlying micro-level interactions. One prominent neuroscientific attitude is reduction: macro-level posits get reduced to underlying micro-level dynamics. Reduction is often characterized as "upward determination" or as a "bottom-up" research strategy because it attempts to demonstrate how processes at more fundamental physical levels are causally responsible for processes described at higher levels.13 Reduction carries several working methodological assumptions. One is the causal closure of the physical domain: every event that has a cause has a physical cause. Another is what Jaegwon Kim refers to as the causal inheritance principle: the causal properties described at higher levels are identical to causal properties at lower levels; they are inherited from those of their underlying base.14 Although these assumptions are often treated by philosophers (Kim included) as metaphysical claims about the world, most neuroscientists would consider them working methodological assumptions within actual scientific practice.15

The notion of causation at work in these methodological assumptions contrasts sharply with the notion at work in the social sciences. Cacioppo et al., for example, stress:
All human behavior, at some level, is biological, but this is not to say that biological reductionism yields a simple, singular, or satisfactory explanation for complex behavior or that molecular forms of representation provide the only or best level of analysis for understanding human behavior. Molar [social] constructs such as those developed by the social sciences provide a means for understanding highly complex activity without needing to specify each individual action of the simplest components, thereby offering an efficient means of describing the behavior of a complex system.16
Their suggestion is that although the biological level is essential to understanding the causal mechanisms underlying social behavior, one may have an "efficient" explanation for a particular behavior without having to appeal to biology. This language certainly seems causal.

Moving from abstract philosophy to concrete science, notice that this tension is apparent in Nader's research. On the one hand, there is the clear suggestion that a monkey's position within a dominance hierarchy causes changes in the number and distribution of dopamine D2 receptors in key brain regions, and subsequently in individual monkeys' susceptibility to self-administer cocaine: "An organism's environment can produce profound biological changes that have important behavioral associations".17 In other places, however, the authors adopt more cautious terminology: "D2 receptor binding potential is related to environmental influences and associated with vulnerability to the abuse-related effects of cocaine";18 "[…] individual differences in susceptibility to cocaine abuse within a population may be mediated by social dominance rank".19 Regardless of the terminology adopted, what we want to know is how a monkey's position in a dominance hierarchy brings about these changes at the neuronal level. What specific mechanisms are responsible for these changes? More research is necessary before any such mechanisms could be specified confidently, but one point is clear. Something is missing when one talks about social rank as a cause of behavior without specifying the neuronal mechanisms whereby social rank exerts its influence. When this influence is claimed to be measurable at the level of specific receptor distribution changes, it must ultimately connect with the molecular mechanisms driving gene expression and protein synthesis in particular neurons. More generally, before we can refer to the environment, or to any abstraction postulated by social scientists (including social neuroscientists), as a cause of measured neurobiological or behavioral effects, we must be able to explain how these factors are transduced down to neuronal and molecular mechanisms. For it is only the latter that drive neurotransmitter release into neuromuscular junctions to elicit muscle contractions (i.e., behavior), and only gene expression and protein synthesis that change receptor distributions. That's not a metaphysical directive; that's science talking.

To the extent that we demand a causal-mechanistic explanation for social behavior, it is not clear how the social sciences can be equal partners with the neurosciences. Their very notions of causation stand in stark contrast to one another. Accepting the causal accounts of both leads to familiar problems of "causal dualism"—problems that can be discerned in actual scientific results, not merely on abstract philosophical grounds.20 If all behavior is biological, as even Cacioppo and colleagues contend, then it is difficult to see how the abstractions posited by the social sciences can exert any causal influence on behavior without being biological themselves. To claim otherwise is to postulate two sufficient causes, one social and one physical, in the same sense of "cause", for the same behavior. The levels of biological organization that are now known to be influenced—like the distribution of dopamine D2 receptors in basal ganglia, as shown by Nader's work—exacerbate rather than resolve this problem for social neuroscience.

One possible option for the social scientist is to shift the onus of proof onto the neurobiological reductionist. Rather than succumb to the reductionist requirement of demonstrating how social abstractions can be causally efficacious, the social scientist can demand that the reductionist show how biological phenomena explain these abstractions. This, however, is not a productive route. Neurobiological research into social behaviors is just beginning to bear fruit, and this is a task that neuroscientists are increasingly willing to take on. The partnership envisioned by neurobiological reductionists asks social scientists to suggest likely environmental correlates for a particular behavior; neuroscience then seeks the neural and molecular mechanisms elicited in normal situations by these environmental features, which in turn generate the measurable neurobiological and behavioral effects. This approach does not eliminate the entities or other abstract influences postulated by the social sciences as phenomena in need of explanation. Rather, it eliminates them as phenomena endowed with causal efficacy. From the reductionistic neuroscientist's perspective, the role that remains for these abstractions, and for the social sciences in general, is purely heuristic: they specify parameters of social behaviors and processes that then stand in need of causal-mechanistic neuroscientific explanation.

This appraisal does not denigrate the importance or the difficulty of the social scientist's task. As Nader's work demonstrates, this task can be accomplished partly by providing a more precise description of the effects of one's position in a dominance hierarchy. Dominant status confers the benefits of increased access to environmental features such as food, mates, and territory. Subordinate status decreases access to all of these factors and adds the burden of social isolation. Each of these features in turn requires further specification before it could plausibly be shown how each brings about changes at neuronal and molecular levels. Providing these further specifications will in turn require neuroscientists to investigate neural circuits and regions beyond the dopaminergic reward system (Nader, private correspondence). In this way, the social sciences aid essentially in the search for the neuronal and molecular mechanisms responsible for particular social behaviors, while the neurosciences ultimately ground the abstractions postulated by the social sciences by identifying the neuronal and molecular mechanisms into which they are transduced.
3. SOCIAL BEHAVIORS, POTENTIAL BRAIN INTERVENTIONS, AND PSYCHONEURAL REDUCTION

We can now draw parallels with new reductionist approaches in the philosophy of mind. A problem posed routinely to reductionists is that of explaining how brain activity can give rise to subjective conscious experience. A recent reductionist response appeals to interventions into the neural and molecular mechanisms that generate subjective conscious experiences.21 One technique for inducing these interventions is microstimulation of tiny numbers (< 100) of similarly tuned neurons that process specific features of environmental stimuli or motor commands. Bill Newsome's laboratory has induced motion and stereoscopic depth detection in nonhuman primates by microstimulating clusters of neurons in the middle temporal cortex (area MT) responsible for processing visual location and motion direction.22 Britten and van Wezel induced judgments of self-movement direction through space based on visual cues by microstimulating clusters of neurons in the medial superior temporal cortex (area MST) responsive to changing patterns of retinal stimulation as the monkey moves through space ("heading direction").23 Microstimulation effects are not limited to visual stimuli. Romo et al. induced fingertip flutter sensations by microstimulating clusters of quickly adapting neurons in primate primary somatosensory cortex.24 Based on carefully controlled behavioral measures, monkeys could not distinguish actual fingertip stimulation from cortical microstimulation.

Whether these microstimulation studies present a viable reductionist approach toward phenomenal consciousness remains controversial. But they do further illustrate the "intervene cellularly/molecularly and track behaviorally" account of reduction-in-practice advocated by Bickle (2003, forthcoming). They also suggest a potential strategy for providing reductive explanations of social phenomena. Just as it is possible to generate sensory judgments by intervening directly at the cellular level, it should be possible to generate specific social behaviors by directly manipulating their hypothesized neuronal or molecular mechanisms. This will require a better understanding of the neuronal and molecular mechanisms involved, but the strategy seems clear enough. In light of Nader's work, for example, once we get a clearer picture of the neuronal and molecular mechanisms whereby social rank is transduced into neural currency, we could strive to manipulate these mechanisms directly, independently of actual social circumstances, to produce specific social behaviors: to induce, e.g., vulnerability to drug abuse or future position within a dominance hierarchy. This is consistent with the general aim of current reductionism-in-neuroscientific-practice.

At present, this intervention strategy remains entirely speculative for social behaviors. It faces several difficult methodological obstacles. More precise characterizations of the social abstractions are needed. To the extent that multiple cellular and molecular mechanisms interact to generate a particular social behavior, inducing the behavior would require their simultaneous microstimulation. This is clearly a much taller order than the successful microstimulations performed thus far, which have been restricted to inducing fairly simple experiential and behavioral effects. However, current and foreseeable neuroscience has other intervention techniques available. One can focus on the genetic factors involved in social behaviors. In light of Nader's work, it might be possible to overexpress the genes that control the production of D2 receptor proteins, perhaps thereby producing monkeys that are either dominant or resistant to the reinforcing effects of cocaine by circumventing the normal environmental influences. Current research employing transgenic rodents has already produced results like this. Specific genes that are involved in certain cellular and molecular processes can be removed, modified, or enhanced by current biotechnological procedures. After modifying the gene(s) responsible for dopamine D2 receptor synthesis, experimenters could track the effects on social behaviors (like place in a troop dominance hierarchy or cocaine self-administration). Some rodent work already speaks to this possibility. Giros et al. knocked out (in mice) the gene expressing the dopamine transporter protein (DAT).25 This manipulation yielded higher concentrations of extracellular dopamine, and the reinforcing effects of cocaine were negligible in these mice. Shih et al. knocked out the gene expressing a class of monoamine transporters (of which DAT is a member), leading to enhanced aggressive behaviors in social interactions.26

The results just cited point to the importance of genetic factors in generating the social behaviors being investigated by Nader in primates. This should surprise no one. Receptors, after all, are configured proteins, controlled directly by gene expression and protein synthesis. If their numbers and distributions in specific brain regions are part of the biological mechanisms for dominance ranking and susceptibility to cocaine self-administration, then gene expression is central to generating these social behaviors. Normally this expression might be cued by environmental stimuli (transduced into the appropriate nervous-system currency); but perhaps it can be induced directly through future brain interventions, along with its behavioral effects. Despite the many practical obstacles to actually producing these interventions in behaving primates, this strategy provides real hope for a reductionistic social neuroscience.
NOTES

1 Morgan et al. (2002).
2 Cacioppo et al. (2002, p. 35).
3 Damasio and Eslinger (1985).
4 Brothers (2002).
5 See this volume, p. 305.
6 As reviewed in Cacioppo et al. (2003).
7 Morgan et al. (2002).
8 PET is an acronym for Positron Emission Tomography, a technique for imaging the functioning brain. Morgan et al. (2002) used a dopamine D2 receptor radioligand while monkeys were housed individually and again, 5-12 months later, after they were placed in social groups and a stable hierarchy had been established. PET regions of interest (ROIs) included both basal ganglia and a control region within the cerebellum (with low D2 receptor density).
9 Morgan et al. (2002, Table 1).
10 Marinelli and White (2000).
11 Morgan et al. (2002, Figure 4).
12 Cacioppo et al. (2002, p. 21).
13 See Churchland (1986); Kim (1993); Bickle (1998).
14 Kim (1993).
15 We are assuming without argument here that most working neuroscientists are at least methodological physicalists, in that their research aim is to uncover the physical mechanisms responsible for behaviors.
16 Cacioppo et al. (2002, p. 21).
17 Morgan et al. (2002, p. 169; italics added).
18 Ibid., p. 169, italics added.
19 Ibid., p. 171, italics added.
20 See Bickle (2003).
21 See Bickle (2003, chapter 4); Bickle and Ellis (forthcoming).
22 Salzman et al. (1992); DeAngelis, Cumming and Newsome (1998).
23 Britten and van Wezel (1998).
24 Romo et al. (1998, 2000).
25 Giros et al. (1996).
26 Shih et al. (1999).
REFERENCES

Adrian, E.D. (1928). The Basis of Sensation: The Action of the Sense Organs. New York: Norton.
Agar, N. (1993). "What do frogs really believe?" Australasian Journal of Philosophy 71: 1-12.
Allen, C. (1992). "Mental content." British Journal for the Philosophy of Science 43: 537-553.
Allen, C. (1999). "Animal concepts revisited: The use of self-monitoring as an empirical approach." Erkenntnis 51: 33-40.
Allen, C. and M. Bekoff (1997). Species of Mind. Cambridge, MA: MIT Press.
Allen, C. and M. Hauser (1991). "Concept attribution in nonhuman animals: Theoretical and methodological problems in ascribing complex mental processes." Philosophy of Science 58: 221-240.
Allen, J. (1995). Natural Language Understanding. Redwood City, CA: Benjamin/Cummings.
Allison, H. (1990). Kant's Theory of Freedom. Cambridge: Cambridge University Press.
Allport, A. (1988). "What concept of consciousness?" In A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science. Oxford: Clarendon.
Armstrong, D.M. (1969). A Materialist Theory of the Mind. New York: Humanities Press.
Armstrong, D.M. (1970). "The nature of mind." In C.V. Borst (ed.), The Mind/Brain Identity Theory. London: Macmillan.
Arnold, M. (1960). Emotion and Personality, Vol. 1. Psychological Aspects. New York: Columbia University Press.
Audi, R. (1974). "Moral responsibility, freedom and compulsion." American Philosophical Quarterly 19: 25-39.
Auyang, S. (2001). Mind in Everyday Life and Cognitive Science. Cambridge, MA: MIT Press.
Aydede, M. (2004). "The Language of Thought hypothesis." In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/fall2004/entries/language-thought/).
Ayer, A.J. (1940). The Foundations of Empirical Knowledge. London: MacMillan.
Ayer, A.J. (1956). The Problem of Knowledge. London: MacMillan.
Baars, B. (1988). A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press.
Baars, B. (1993). "How does a serial, integrated and very limited stream of consciousness emerge from a nervous system that is mostly unconscious, distributed, and of enormous capacity?" In G. Bock and J. Marsh (eds.), CIBA Symposium on Experimental and Theoretical Studies of Consciousness. London: Wiley.
Baars, B. (2002). "The conscious access hypothesis: origins and recent evidence." Trends in Cognitive Sciences 6: 47-52.
Baars, B.J. and J. Newman (1994). "A neurobiological interpretation of Global Workspace Theory." In A. Revonsuo and M. Kamppinen (eds.), Consciousness in Philosophy and Cognitive Neuroscience. Hillsdale, NJ: Erlbaum.
Ballard, D.H. (1991). "Animate vision." Artificial Intelligence 48: 57-86.
Ballard, D.H. (1996). "On the function of visual representation." In K. Akins (ed.), Perception—Vancouver Studies in Cognitive Science, Vol. 5. Oxford: Oxford University Press; also in Noë and Thompson (2002).
Baron, J. (1988). Thinking and Deciding. Cambridge: Cambridge University Press.
Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Baron-Cohen, S. and J.E. Harrison (eds.) (1997). Synaesthesia: Classic and Contemporary Readings. Oxford: Blackwell.
Baron-Cohen, S., J. Harrison, L. Goldstein, and M.A. Wyke (1993). "Coloured speech perception: Is synaesthesia what happens when modularity breaks down?" Perception 22: 419-426.
Baron-Cohen, S., M. Wyke, and C. Binnie (1987). "Hearing words and seeing colours: An experimental investigation of synaesthesia." Perception 16: 761-767.
Baum, E.B. (2004). What is Thought? Cambridge, MA: MIT Press.
Bayne, T. (2002). "Review of H. Walter, Neurophilosophy of Free Will." Metapsychology, http://mentalhelp.net/books/.
Bayne, T. (2006). "Phenomenology and the feeling of doing: Wegner on the conscious will." In S. Pockett, W.P. Banks, and S. Gallagher (eds.), Does Consciousness Cause Behavior? An Investigation of the Nature of Volition. Cambridge, MA: MIT Press.
Bayne, T. and D. Chalmers (2003). "What is the unity of consciousness?" In A. Cleeremans (ed.), The Unity of Consciousness: Binding, Integration, Dissociation. Oxford: Oxford University Press.
Beaman, A.L., P.J. Barnes, B. Klentz, and B. McQuirk (1978). "Increasing helping rates through information dissemination: Teaching pays." Personality and Social Psychology Bulletin 4: 406-411.
Bechtel, W. (1998). "Representations and cognitive explanations: Assessing the dynamicist challenge in cognitive science." Cognitive Science 22: 295-318.
Bechtel, W., A. Abrahamsen, and G. Graham (1998). "The life of cognitive science." In W. Bechtel and G. Graham (eds.), A Companion to Cognitive Science. Oxford: Blackwell.
Bechtel, W., P. Mandik, J. Mundale, and R.S. Stufflebeam (2001). Philosophy and the Neurosciences: A Reader. Oxford: Blackwell.
Bechtel, W. and R.C. Richardson (1993). Discovering Complexity: Decomposition and Localization as Scientific Research Strategies. Princeton: Princeton University Press.
Beer, R.D. (2000). "Dynamical approaches to cognitive science." Trends in Cognitive Sciences 4: 91-99.
Begg, I. and J.P. Denny (1969). "Empirical reconciliation of atmosphere and conversion interpretations of syllogistic reasoning errors." Journal of Experimental Psychology 81: 351-354.
Bem, D.J. (1972). "Self-perception theory." In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Vol. 6. New York: Academic Press.
Bennett, J.F. (1974). Kant's Dialectic. Cambridge: Cambridge University Press.
Bennett, J.F. (1976). Linguistic Behaviour. Cambridge: Cambridge University Press.
Bennett, J.F. (1991). "How is cognitive ethology possible?" In C. Ristau (ed.), Cognitive Ethology. Hillsdale, NJ: Erlbaum.
Bermúdez, J.L. (1994). "The unity of apperception in the Critique of Pure Reason." European Journal of Philosophy 2: 213-240.
Bermúdez, J.L. (1998). The Paradox of Self-Consciousness. Cambridge, MA: MIT Press.
Bermúdez, J.L. (2000). "Personal and sub-personal: A difference without a distinction." Philosophical Explorations 3: 63-82.
Bermúdez, J.L. (2003). Thinking without Words. Oxford: Oxford University Press.
Bermúdez, J.L. (2005). Philosophy of Psychology. A Contemporary Introduction. New York: Routledge.
Berofsky, B. (2002). "Ifs, cans, and free will: The issues." In R. Kane (ed.) (2002).
Berthoz, A. ([1987] 2002). The Brain's Sense of Movement. Cambridge, MA: Harvard University Press.
Bickerton, D. (1995). Language and Human Behavior. Seattle: University of Washington Press.
Bickle, J. (1998). Psychoneural Reduction: The New Wave. Cambridge, MA: MIT Press.
Bickle, J. (2003). Philosophy and Neuroscience: A Ruthlessly Reductive Account. Dordrecht: Kluwer.
Bickle, J. (forthcoming). "Reducing mind to molecular pathways: Explicating the reductionism implicit in current cellular and molecular neuroscience." Synthese.
Bickle, J. and P. Mandik (2002). "The philosophy of neuroscience." In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/win2002/entries/neuroscience/).
Bickle, J. and R. Ellis (forthcoming). “Phenomenology and cortical microstimulation.” In D.W. Smith and A. Thomasson (eds.), Phenomenology and Philosophy of Mind. Oxford: Oxford University Press.
Bilgrami, A. (1992). Belief and Meaning. Oxford: Oxford University Press.
Bishop, R.C. (2002). "Chaos, indeterminism, and free will." In R. Kane (ed.) (2002).
Bisiach, E. and A. Berti (1995). "Consciousness in dyschiria." In M. Gazzaniga (ed.), The Cognitive Neurosciences. Cambridge, MA: MIT Press.
Bjorklund, D.F. (2004). "Introduction: Special issue on memory development in the new millennium." Developmental Review 24: 343-346.
Blake, R., T.J. Palmeri, R. Marois, and C-Y. Kim (2005). "On the perceptual reality of synaesthetic color." In L.C. Robertson and N. Sagiv (eds.), Synaesthesia: Perspectives from Cognitive Neuroscience. Oxford: Oxford University Press.
Block, N. (1978). "Troubles with functionalism." In C.W. Savage (ed.), Perception and Cognition: Issues in the Foundations of Psychology. Minnesota Studies in the Philosophy of Science, Vol. 9. Minneapolis: University of Minnesota Press.
Block, N. (1983). "Mental pictures and cognitive science." Philosophical Review 92: 499-541.
Block, N. (1986). "Advertisement for a semantics for psychology." In P. French, T. Uehling, H. Wettstein (eds.), Midwest Studies in Philosophy, Vol. 10. Minneapolis: University of Minnesota Press.
Block, N. (1995). "On a confusion about consciousness." Behavioral and Brain Sciences 18: 227-287.
Block, N. (2005). "Two neural correlates of consciousness." Trends in Cognitive Sciences 9: 46-52.
Block, N. and G. Segal (1998). "The philosophy of psychology." In A.C. Grayling (ed.), Philosophy 2. Oxford: Oxford University Press.
Bloom, P. (2000). How Children Learn the Meanings of Words. Cambridge, MA: MIT Press.
Bloom, P. (2004). Descartes' Baby. How the Science of Child Development Explains what Makes Us Human. New York: Basic Books.
Bogdan, R. (1993). "L'histoire de la science cognitive." In L. Sfez (ed.), Dictionnaire critique de la communication. Paris: PUF.
Bogen, J.E. (1993). "The callosal syndromes." In K.H. Heilman and E. Valenstein (eds.), Clinical Neuropsychology. Oxford: Oxford University Press.
Boghossian, P. (2003). "Blind reasoning." Supplement to the Proceedings of the Aristotelian Society 77: 225-248.
Botterill, G. and P. Carruthers (1999). The Philosophy of Psychology. Cambridge: Cambridge University Press.
Bower, G. and J. Forgas (2000). "Affect, memory, and social cognition." In E. Eich, J.F. Kihlstrom, G.H. Bower, J.P. Forgas, and P.M. Niedenthal (eds.), Cognition and Emotion. Oxford: Oxford University Press.
Bowlby, J. (1980). Attachment and Loss, Vol. 3, Loss. London: The Hogarth Press.
Braddon-Mitchell, D. and F. Jackson (1996). Philosophy of Mind and Cognition. Oxford: Blackwell.
Braine, M.D. and D.P. O’Brien (eds.) (1998). Mental Logic. Cambridge, MA: MIT Press.
Brandom, R. (2000). Articulating Reasons. An Introduction to Inferentialism. Cambridge, MA: Harvard University Press.
Bratman, M. (1987). Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Brentano, F. ([1874] 1973). Psychology from an Empirical Standpoint. London: Routledge.
Brewer, B. (1999). Perception and Reason. Oxford: Oxford University Press.
Bringsjord, S. (1995). “Computation, among other things, is beneath us.” Minds and Machines 4: 469-488.
Britten, K. and R. van Wezel (1998). “Electrical microstimulation of cortical area MST biases heading perception in monkeys.” Nature Neuroscience 1: 59-63.
Broad, C.D. (1954). “Emotion and sentiment.” Journal of Aesthetics and Art Criticism 13: 203-214.
Brook, A. (2001). “The unity of consciousness.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/sum2001/entries/consciousness-unity/).
Brothers, L. (2002). “The social brain: A project for integrating primate behavior and neurophysiology in a new domain.” In Cacioppo et al. (2002).
Bruce, V., P.R. Green, and M.A. Georgeson (1996). Visual Perception. Hove: Taylor & Francis.
Burge, T. (1979). “Individualism and the mental.” Midwest Studies in Philosophy 4: 73-121.
Burge, T. (1986). “Individualism and psychology.” Philosophical Review 95: 3-45.
Byrne, A. (1994). “Behaviourism.” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford: Blackwell.
Byrne, R. and A. Whiten (eds.) (1988). Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes and Humans. Oxford: Clarendon Press.
Cacioppo, J.T., G.G. Berntson, R. Adolphs, C.S. Carter, R.J. Davidson, M.K. McClintock, B.S. McEwen, M.J. Meaney, D.L. Schacter, E.M. Sternberg, S.S. Suomi, and S.E. Taylor (eds.) (2002). Foundations in Social Neuroscience. Cambridge, MA: MIT Press.
Cacioppo, J.T., G.G. Berntson, J.F. Sheridan, and M.K. McClintock (2002). “Multilevel integrative analysis of human behavior: Social neuroscience and the complementing nature of social and biological approaches.” In Cacioppo et al. (2002).
Cain, M.J. (2002). Fodor: Language, Mind, and Philosophy. Cambridge: Polity Press.
Calvo Garzón, F. (2006). “Towards a general theory of antirepresentationalism.” British Journal for the Philosophy of Science (in press).
Calvo Garzón, F. (submitted). “The extended mind and the reactive brain: A fourth position.”
Campbell, J. (1994). Past, Space, and Self. Cambridge, MA: MIT Press.
Campbell, J. (1997). “The structure of time in autobiographical memory.” European Journal of Philosophy 5: 105-118.
Campbell, S. (2003). Relational Remembering: Rethinking the Memory Wars. Lanham, MD: Rowman and Littlefield.
Campbell, S. (2004). “Models of memory and memory activities.” In P. DesAutels and M.U. Walker (eds.), Moral Psychology: Feminist Ethics and Political Theory. Lanham, MD: Rowman and Littlefield.
Chalmers, D. (1996). The Conscious Mind. Oxford: Oxford University Press.
Chater, N. and C. Heyes (1994). “Animal concepts: Content and discontent.” Mind and Language 9: 209-246.
Chella, A., M. Frixione, and S. Gaglio (1997). “A cognitive architecture for artificial vision.” Artificial Intelligence 89: 73-111.
Chella, A., M. Frixione, and S. Gaglio (2000). “Understanding dynamic scenes.” Artificial Intelligence 123: 89-132.
Cheney, D.L. and R.M. Seyfarth (1990). How Monkeys See the World. Chicago: University of Chicago Press.
Chisholm, R.M. (1964). “Human freedom and the self.” In G. Watson (ed.) (1980).
Chisholm, R.M. (1976). Person and Object: A Metaphysical Study. La Salle, Ill.: Open Court.
Chisholm, R.M. (1978). “The observability of the self.” In J. Donnelly (ed.), Language, Metaphysics, and Death. New York: Fordham University Press.
Chisholm, R. and T. Feehan (1977). “The intent to deceive.” The Journal of Philosophy 74: 143-159.
Chomsky, N. (1975). Reflections on Language. New York: Pantheon Books.
Christman, J. (1991). “Autonomy and personal history.” Canadian Journal of Philosophy 21: 1-24.
Churchland, P.M. (1981a). “On the alleged backward referral of experiences and its relevance to the mind-body problem.” Philosophy of Science 48: 165-181.
Churchland, P.M. (1981b). “The timing of sensations: Reply to Libet.” Philosophy of Science 48: 492-497.
Churchland, P.M. (1981c). “Eliminative materialism and the propositional attitudes.” Journal of Philosophy 78: 67-90.
Churchland, P.M. (1998). “Activation vectors vs. propositional attitudes: How the brain represents reality.” In P.M. Churchland and P.S. Churchland, On the Contrary. Critical Essays, 1987-1997. Cambridge, MA: MIT Press.
Churchland, P.M. and P.S. Churchland (1996). “McCauley’s demand for a co-level competitor.” In R.N. McCauley (ed.), The Churchlands and their Critics. Oxford: Blackwell.
Churchland, P.S. (1986). Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press.
Churchland, P.S. (1988). “Reduction and the neurobiological basis of consciousness.” In A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science. Oxford: Clarendon Press.
Churchland, P.S. and T.J. Sejnowski (1992). The Computational Brain. Cambridge, MA: MIT Press.
Clark, A. (1997). Being There: Putting Brain, Body, and World Together Again. Cambridge, MA: MIT Press.
Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. Oxford: Oxford University Press.
Clark, A. (2003). Natural-Born Cyborgs. Minds, Technologies and the Future of Human Intelligence. Oxford: Oxford University Press.
Clark, A. (2005). “Intrinsic content, active memory and the extended mind.” Analysis 65: 1-11.
Clark, A. (2006). “Memento’s revenge: The extended mind, extended.” In R. Menary (ed.), The Extended Mind. Aldershot: Ashgate.
Clark, A. and D. Chalmers (1998). “The extended mind.” Analysis 58: 7-19.
Clark, A. and C. Eliasmith (2002). “Philosophical issues in brain theory and connectionism.” In M.A. Arbib (ed.), The Handbook of Brain Theory and Neural Networks, 2nd Ed. Cambridge, MA: MIT Press.
Clarke, R. (2003). Libertarian Accounts of Free Will. Oxford: Oxford University Press.
Cleeremans, A. (ed.) (2003). The Unity of Consciousness: Binding, Integration, Dissociation. Oxford: Oxford University Press.
Clore, G. and A. Ortony (2000). “Cognition in emotion: Always, sometimes, or never?” In R. Lane and L. Nadel (eds.), Cognitive Neuroscience of Emotion. New York: Oxford University Press.
Cohen, J.L. (1992). An Essay on Belief and Acceptance. Oxford: Oxford University Press.
Compton, A.H. (1935). The Freedom of Man. New Haven, CT: Yale University Press.
Copeland, B.J. (2000). “Narrow versus wide mechanism: Including a re-examination of Turing’s views on the mind-machine issue.” The Journal of Philosophy 97: 5-32.
Corballis, M. (1995). “Visual integration in the split-brain.” Neuropsychologia 33: 937-959.
Cordeschi, R. (2002). The Discovery of the Artificial. Behavior, Mind and Machines Before and Beyond Cybernetics. Dordrecht: Kluwer.
Cosmides, L. and J. Tooby (2000). “Consider the source: The evolution of adaptations for decoupling and metarepresentation.” In D. Sperber (ed.), Metarepresentations. A Multidisciplinary Perspective. Oxford: Oxford University Press.
Cotton, J.L. (1980). “Verbal reports on mental processes: Ignoring data for the sake of theory.” Personality and Social Psychology Bulletin 6: 278-281.
Coulson, S. (2005). “Metaphor comprehension and the brain: Empirical studies.” (http://hci.ucsd.edu/coulson/260/coulson1.doc).
Craik, K.J.W. (1943). The Nature of Explanation. Cambridge: Cambridge University Press.
Crane, T. (1988). “The waterfall illusion.” Analysis 48: 142-147.
Craver, C. (2001). “Role functions, mechanisms, and hierarchy.” Philosophy of Science 68: 53-74.
Craver, C.F. (2002). “Interlevel experiments and multilevel mechanisms in the neuroscience of memory.” Philosophy of Science 69: S83-S97.
Craver, C.F. (2005). “Beyond reduction: Mechanisms, multifield integration, and the unity of neuroscience.” Studies in History and Philosophy of Biological and Biomedical Sciences 36: 373-395.
Crick, F. and C. Koch (1990). “Towards a neurobiological theory of consciousness.” Seminars in the Neurosciences 2: 263-275.
Csibra, G. (2005). “Mirror neurons and action observation. Is simulation involved?” (http://www.interdisciplines.org/mirror/papers/4).
Cummins, R. (1983). The Nature of Psychological Explanation. Cambridge, MA: MIT Press.
Cummins, R. (2000). “‘How does it work?’ vs. ‘What are the laws?’ Two conceptions of psychological explanation.” In F.C. Keil and R.A. Wilson (eds.), Explanation and Cognition. Cambridge, MA: MIT Press.
Cutting, J. (1986). Perception with an Eye for Motion. Cambridge, MA: MIT Press.
Cytowic, R.E. (1989). Synesthesia: A Union of the Senses. New York: Springer Verlag.
Cytowic, R.E. (1993). The Man Who Tasted Shapes. London: Abacus.
Cytowic, R.E. ([1995] 1997). “Synaesthesia: Phenomenology and neuropsychology. A review of current knowledge.” Reprinted in Baron-Cohen and Harrison (1997).
Czoty, P.W., D. Morgan, and M.A. Nader (2004). “Characterization of dopamine D1 and D2 receptor function in socially housed cynomolgus monkeys self-administering cocaine.” Psychopharmacology 174: 381-388.
Dainton, B. (2000). Stream of Consciousness: Unity and Continuity in Conscious Experience. London: Routledge.
Damasio, A.R. (1989). “The brain binds entities and events by multiregional activation from convergence zones.” Neural Computation 1: 123-132.
Damasio, A. (1994). Descartes’ Error. New York: Putnam.
Damasio, A. (1999). The Feeling of What Happens. Body and Emotion in the Making of Consciousness. New York: Harcourt Brace.
Damasio, A. (2003). Looking for Spinoza. Joy, Sorrow and the Feeling Brain. New York: Harcourt Brace.
Damasio, A. and P.J. Eslinger (1985). “Severe disturbance of higher cognition after bilateral frontal lobe ablation: Patient E.V.R.” Neurology 35: 1731-1741.
Daprati, E., N. Franck, N. Georgieff, J. Proust, E. Pacherie, J. Dalery, and M. Jeannerod (1997). “Looking for the agent: An investigation into consciousness of action and self-consciousness in schizophrenic patients.” Cognition 65: 71-86.
Darley, J. and D. Batson (1973). “‘From Jerusalem to Jericho’: A study of situational and dispositional variables in helping behavior.” Journal of Personality and Social Psychology 27: 100-108.
Davidson, D. (1970). “Freedom to act.” Reprinted in Davidson (1980).
Davidson, D. (1976). “Hume’s cognitive theory of pride.” Journal of Philosophy 73: 744-757.
Davidson, D. (1980). Essays on Actions and Events. Oxford: Oxford University Press.
Davidson, D. (1984). “Thought and talk.” In Id., Inquiries into Truth and Interpretation. Oxford: Oxford University Press.
Davidson, D. (1985). “Rational animals.” In E. Lepore and B. McLaughlin (eds.), Actions and Events: Perspectives on the Philosophy of Donald Davidson. Oxford: Blackwell.
Davidson, D. (1997). “Seeing through language.” In J. Preston (ed.), Thought and Language. Cambridge: Cambridge University Press.
Davidson, D. (1999). “The emergence of thought.” Erkenntnis 51: 7-17.
Davies, M. (2005). “Cognitive science.” In F. Jackson and M. Smith (eds.), The Oxford Handbook of Contemporary Philosophy. Oxford: Oxford University Press.
Davis, L. (1997). “Cerebral hemispheres.” Philosophical Studies 87: 207-222.
Dayan, P. and L.F. Abbott (2001). Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, MA: MIT Press.
Deacon, T. (1997). The Symbolic Species. The Co-evolution of Language and the Brain. New York: Norton.
DeAngelis, G., B. Cumming, and W. Newsome (1998). “Cortical area MT and the perception of stereoscopic depth.” Nature 394: 677-680.
De Caro, M. (2004). “Is free will really a mystery?” In De Caro and Macarthur (2004).
De Caro, M. and D. Macarthur (eds.) (2004). Naturalism in Question. Cambridge, MA: Harvard University Press.
Dehaene, S. and L. Naccache (2001). “Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework.” Cognition 79: 1-37.
Dehaene, S. and J.-P. Changeux (2004). “Neural mechanisms for access to consciousness.” In M. Gazzaniga (ed.), The Cognitive Neurosciences, 3rd Ed. Cambridge, MA: MIT Press.
DeLancey, C. (2002). Passionate Engines. Oxford: Oxford University Press.
DeLancey, C. (2006). “Ontology and teleofunctions: A defense and revision of the systematic account of teleofunctions.” Synthese 150: 69-98.
Dennett, D.C. (1978). Brainstorms. Cambridge, MA: MIT Press.
Dennett, D.C. ([1981] 1987). “Three kinds of intentional psychology.” In Id., The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D.C. ([1983] 1987). “Intentional systems in cognitive ethology: The ‘Panglossian Paradigm’ defended.” In Id., The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D.C. (1984). Elbow Room. Cambridge, MA: MIT Press.
Dennett, D.C. ([1988] 1997). “Quining qualia.” In A.J. Marcel and E. Bisiach (eds.), Consciousness in Contemporary Science. Oxford: Clarendon Press.
Dennett, D.C. (1991). Consciousness Explained. Boston: Little, Brown and Co.
Dennett, D.C. (1993). “Review of J. Searle, The Rediscovery of the Mind.” The Journal of Philosophy 90: 193-205.
Dennett, D.C. (1995). “Do animals have beliefs?” In H. Roitblat (ed.), Comparative Approaches to Cognitive Science. Cambridge, MA: MIT Press.
Dennett, D.C. (1996). Kinds of Minds. New York: Basic Books.
Dennett, D.C. (1998). Brainchildren. Cambridge, MA: MIT Press.
Dennett, D.C. (2000). “Making tools for thinking.” In D. Sperber (ed.), Metarepresentations. A Multidisciplinary Perspective. Oxford: Oxford University Press.
Dennett, D.C. (2001). “Are we explaining consciousness yet?” Cognition 79: 221-237.
Dennett, D.C. (2003). Freedom Evolves. New York: Viking Press.
Derryberry, D. (1988). “Emotional influences on evaluative judgments: Roles of arousal, attention, and spreading activation.” Motivation and Emotion 12: 23-55.
Descartes, R. ([1637] 1982). Discourse on Method. In E. Haldane and G. Ross (eds.), The Philosophical Works of Descartes, Vol. 1. Cambridge: Cambridge University Press.
Descartes, R. ([1641] 1985). Meditationes de prima philosophia (Meditations on First Philosophy). In J. Cottingham, R. Stoothoff, and D. Murdoch (eds.), The Philosophical Writings of Descartes, Vol. 2. Cambridge: Cambridge University Press.
De Villiers, J.A. (2000). “Language and theory of mind: What are the developmental relationships?” In S. Baron-Cohen, H. Tager-Flusberg, and D. Cohen (eds.), Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience. Oxford: Oxford University Press.
De Villiers, J.A. and P.A. de Villiers (2003). “Language for thought: Coming to understand false beliefs.” In D. Gentner and S. Goldin-Meadow (eds.), Language in Mind. Advances in the Study of Language and Thought. Cambridge, MA: MIT Press.
Di Francesco, M. (2002). Introduzione alla filosofia della mente, 2nd Ed. Rome: Carocci.
Di Francesco, M. (2004). “Mi ritorni in mente. Mente distribuita e unità del soggetto.” Networks 3-4: 115-139 (http://lgxserve.ciseca.uniba.it/lei/ai/networks/04/).
Di Francesco, M. (2005). “Filling the gap, or jumping over it? Emergentism and naturalism.” Epistemologia 28: 95-122.
Dimond, S.J. (1976). “Brain circuits for consciousness.” Brain, Behavior, and Evolution 13: 376-395.
Dixon, M.J., D. Smilek, C. Cudahy, and P.M. Merikle (2000). “Five plus two equals yellow.” Nature 406: 365.
Dixon, M.J., D. Smilek, and P.M. Merikle (2004). “Not all synaesthetes are created equal: Projector versus associator synaesthetes.” Cognitive, Affective and Behavioral Neuroscience 4: 335-343.
Dodds, E.R. (1951). The Greeks and the Irrational. Berkeley: University of California Press.
Doris, J. (1998). “Persons, situations, and virtue ethics.” Nous 32: 504-530.
Doris, J. (2002). Lack of Character: Personality and Moral Behavior. New York: Cambridge University Press.
Dretske, F. (1986). “Misrepresentation.” In R. Bogdan (ed.), Belief. Oxford: Clarendon Press.
Dretske, F. (1988). Explaining Behavior. Cambridge, MA: MIT Press.
Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.
Dreyfus, H.L. (1998). “Response to my critics.” In T.W. Bynum and J.H. Moor (eds.), The Digital Phoenix: How Computers are Changing Philosophy. Oxford: Blackwell.
Dummett, M. (1988). “The origins of analytical philosophy.” Lingua e Stile 23 (Parts I and II).
Dunbar, R. (1993). “The coevolution of neocortical size, group size and language in humans.” Behavioral and Brain Sciences 16: 681-735.
Dunbar, R. (1996). Grooming, Gossip and the Evolution of Language. Cambridge, MA: Harvard University Press.
Dupré, J. (1993). The Disorder of Things. Cambridge, MA: Harvard University Press.
Durkheim, E. ([1895] 1982). The Rules of Sociological Method. New York: Free Press.
Dworkin, G. (1988). The Theory and Practice of Autonomy. Cambridge: Cambridge University Press.
Earman, J. (1986). A Primer on Determinism. Dordrecht: Reidel.
Earman, J. (1992). “Determinism in the physical sciences.” In M.H. Salmon, J. Earman, C. Glymour, J. Lennox, P. Machamer, J. McGuire, J. Norton, W. Salmon, and K. Schaffner (eds.), Introduction to the Philosophy of Science. Englewood Cliffs: Prentice Hall.
Eccles, J. (1994). How the Self Controls Its Brain. Berlin: Springer.
Eddington, A. (1929). The Nature of the Physical World. London: Dent.
Edelman, G. (1987). Neural Darwinism. New York: Basic Books.
Edelman, G.M. (1989). The Remembered Present. A Biological Theory of Consciousness. New York: Basic Books.
Edelman, G.M. (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic Books.
Egan, F. (1995). “Computation and content.” Philosophical Review 104: 181-203.
Ekman, P. (1980). “Biological and cultural contributions to body and facial movements in the expression of emotions.” In A. Oksenberg Rorty (ed.), Explaining Emotions. Berkeley, CA: University of California Press.
Ekstrom, L. (2000). Free Will. Boulder: Westview.
Eliasmith, C. (2003). “Moving beyond metaphors: Understanding the mind for what it is.” Journal of Philosophy 100: 493-520.
Ellenberger, H. (1970). The Discovery of the Unconscious. New York: Basic Books.
Engel, P. (1996). Philosophie et psychologie. Paris: Gallimard.
Engel, S. (1999). Context is Everything: The Nature of Memory. New York: W.H. Freeman.
Enoch, D. (1991). “Delusional jealousy and awareness of reality.” British Journal of Psychiatry 159: 52-56.
Ericsson, K.A. and H. Simon (1984). Protocol Analysis: Verbal Reports as Data. Cambridge, MA: MIT Press.
Erneling, C.E. and D.M. Johnson (2005). The Mind as a Scientific Object: Between Brain and Culture. Oxford: Oxford University Press.
Evans, G. (1982). The Varieties of Reference. Oxford: Clarendon Press.
Evans, J.St.B.T., S.J. Handley, and D.E. Over (2003). “Conditionals and conditional probability.” Journal of Experimental Psychology 29: 321-335.
Evans, J.St.B.T. and D.E. Over (1996). Rationality and Reasoning. Hove: Psychology Press.
Fadiga, L., L. Fogassi, G. Pavesi, and G. Rizzolatti (1995). “Motor facilitation during action observation: A magnetic stimulation study.” Journal of Neurophysiology 73: 2608-2611.
Festinger, L. (1967). A Theory of Cognitive Dissonance. Palo Alto: Stanford University Press.
Fetzer, J.H. (2001). Computers and Cognition: Why Minds are Not Machines. Dordrecht: Kluwer.
Fischer, J.M. (1999). “Recent work on moral responsibility.” Ethics 110: 93-139.
Fischer, J.M. (2002). “Frankfurt-type examples and semi-compatibilism.” In Kane (2002).
Fischer, J.M. and M. Ravizza (1992). “When the will is not free.” In Tomberlin (1992).
Fischer, J.M. and M. Ravizza (eds.) (1993). Perspectives on Moral Responsibility. Ithaca: Cornell University Press.
Fischer, J.M. and M. Ravizza (1998). Responsibility and Control: A Theory of Moral Responsibility. Cambridge: Cambridge University Press.
Fivush, R. (1994). “Constructing narrative, emotion, and self in parent-child conversations about the past.” In U. Neisser and R. Fivush (eds.), The Remembering Self. Cambridge: Cambridge University Press.
Fivush, R. (2001). “Owning experience: Developing subjective perspective in autobiographical narratives.” In C. Moore and K. Lemmon (eds.), The Self in Time: Developmental Perspectives. Hillsdale, NJ: Erlbaum.
Fivush, R. and C. Haden (eds.) (2003). Autobiographical Memory and the Construction of a Narrative Self: Developmental and Cultural Perspectives. Hillsdale, NJ: Erlbaum.
Flanagan, O. (1991). Varieties of Moral Personality: Ethics and Psychological Realism. Cambridge, MA: Harvard University Press.
Flanagan, O. (1992). Consciousness Reconsidered. Cambridge, MA: MIT Press.
Flanagan, O. (1996). Self Expressions: Mind, Morals, and the Meaning of Life. Oxford: Oxford University Press.
Flanagan, O. (2002). The Problem of the Soul. Two Visions of Mind and How to Reconcile Them. New York: Basic Books.
Fleminger, S. (2002). “Remembering delirium.” British Journal of Psychiatry 180: 4-5.
Fodor, J.A. (1968). Psychological Explanation. New York: Random House.
Fodor, J.A. (1975). The Language of Thought. New York: Crowell.
Fodor, J.A. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
Fodor, J.A. (1985). “Banish disContent.” In J. Butterfield (ed.), Language, Mind and Logic. Cambridge: Cambridge University Press.
Fodor, J.A. (1987). Psychosemantics. Cambridge, MA: MIT Press.
Fodor, J.A. (1990). A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Fodor, J.A. (1991). “Too hard for our kind of mind?” London Review of Books, June 27: 12.
Fodor, J.A. (1994a). “Fodor, Jerry A.” In S. Guttenplan (ed.), A Companion to the Philosophy of Mind. Oxford: Blackwell.
Fodor, J.A. (1994b). The Elm and the Expert. Cambridge, MA: MIT Press.
Fodor, J.A. (1997). “Special sciences: Still autonomous after all these years.” Philosophical Perspectives 11 (Mind, Causation, and World): 149-163.
Fodor, J.A. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
Fodor, J.A. (2000). The Mind Doesn’t Work That Way. Cambridge, MA: MIT Press.
Fodor, J.A., T.G. Bever, and M.F. Garrett (1974). The Psychology of Language. New York: McGraw-Hill.
Fodor, J.A. and Z.W. Pylyshyn (1981). “How direct is visual perception? Some reflections on Gibson’s ecological approach.” Cognition 9: 139-196.
Fodor, J.A. and Z.W. Pylyshyn ([1988] 1995). “Connectionism and cognitive architecture: A critical analysis.” Reprinted in C. Macdonald and G. Macdonald (eds.), Connectionism. Debates on Psychological Explanation. Oxford: Blackwell.
Ford, M. (1995). “Two modes of representation and problem solution in syllogistic reasoning.” Cognition 54: 1-71.
Foster, J.K. and M. Jelicic (eds.) (1999). Memory: Systems, Process, or Function? Oxford: Oxford University Press.
Franck, N., C. Farrer, N. Georgieff, M. Marie-Cardine, J. Dalery, T. d’Amato, and M. Jeannerod (2001). “Defective recognition of one’s own actions in patients with schizophrenia.” American Journal of Psychiatry 158: 454-459.
Frankfurt, H. (1969). “Alternate possibilities and moral responsibility.” Journal of Philosophy 66: 829-839.
Frankfurt, H. (1971). “Freedom of the will and the concept of a person.” Journal of Philosophy 68: 5-20.
Frankfurt, H. (1988). The Importance of What We Care About. Cambridge: Cambridge University Press.
Frankfurt, H. (1999). Necessity, Volition, and Love. Cambridge: Cambridge University Press.
Frawley, W. (1997). Vygotsky and Cognitive Science. Language and the Unification of the Social and Computational Mind. Cambridge, MA: Harvard University Press.
Friedrich, J. (1993). “Primary error detection and minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena.” Psychological Review 100: 298-319.
Frisby, J.P. (1979). Seeing. Oxford: Oxford University Press.
Gabbert, F., A. Memon, and K. Allan (2003). “Memory conformity: Can eyewitnesses influence each other’s memories for an event?” Applied Cognitive Psychology 17: 533-544.
Gallagher, S. (2005). How the Body Shapes the Mind. Oxford: Oxford University Press.
Gallese, V. (2003a). “The manifold nature of interpersonal relations: The quest for a common mechanism.” Philosophical Transactions of the Royal Society of London B 358: 517-528.
Gallese, V. (2003b). “A neuroscientific grasp of concepts: From control to representation.” Philosophical Transactions of the Royal Society of London B 358: 1231-1240.
Gallese, V. (2003c). “The roots of empathy: The shared manifold hypothesis and the neural basis of intersubjectivity.” Psychopathology 36: 171-180.
Gallese, V. (2004). “The manifold nature of interpersonal relations: The quest for a common mechanism.” In C. Frith and D. Wolpert (eds.), The Neuroscience of Social Interaction. Oxford: Oxford University Press.
Gallese, V. and A. Goldman (1998). “Mirror neurons and the simulation theory of mindreading.” Trends in Cognitive Sciences 2: 493-501.
Gallistel, C.R. (1990). The Organization of Learning. Cambridge, MA: MIT Press.
Gallup, G.G. (1977). “Self-recognition in primates.” American Psychologist 32: 329-338.
Gandy, R. (1980). “Church’s thesis and principles for mechanisms.” In J. Barwise, H.J. Keisler, and K. Kunen (eds.), The Kleene Symposium. Amsterdam: North Holland.
Garson, J. (2003). “The introduction of information into neurobiology.” Philosophy of Science 70: 926-936.
Gattis, M., H. Bekkering, and A. Wohlschläger (2002). “Goal-directed imitation.” In A.N. Meltzoff and W. Prinz (eds.), The Imitative Mind. Development, Evolution and Brain Bases. Cambridge: Cambridge University Press.
Gazzaniga, M.S. (2000). “Cerebral specialization and interhemispheric communication. Does the corpus callosum enable the human condition?” Brain 123: 1293-1336.
Gazzaniga, M.S. (2005). “Forty-five years of split-brain research and still going strong.” Nature Reviews Neuroscience 6: 653-659.
Gazzaniga, M.S., R.B. Ivry, and G.R. Mangun (eds.) (1998). Cognitive Neuroscience: The Biology of the Mind. New York: Norton.
Gazzaniga, M.S. and J. LeDoux (1978). The Integrated Mind. New York: Plenum Press.
Gelman, R., F. Durgin, and L. Kaufman (1995). “Distinguishing between animates and inanimates: Not by motion alone.” In D. Sperber, D. Premack, and A.J. Premack (eds.), Causal Cognition: A Multidisciplinary Debate. Oxford: Clarendon Press.
Gerard, R.W. (1951). “Some of the problems concerning digital notions in the central nervous system.” In H. von Foerster, M. Mead, and H.L. Teuber (eds.), Cybernetics: Circular Causal and Feedback Mechanisms in Biological and Social Systems. Transactions of the Seventh Conference. New York: Macy Foundation.
Gerrig, R.J. (1989). “Empirical constraints on computational theories of metaphors: Comments on Indurkhya.” Cognitive Science 13: 235-241.
Gerrig, R.J. and A.F. Healy (1983). “Dual processes in metaphor understanding: Comprehension and appreciation.” Journal of Experimental Psychology: Learning, Memory and Cognition 9: 667-675.
Gibbs, R.W. (1980). “Spilling the beans on understanding and memory for idioms in conversation.” Memory and Cognition 8: 449-456.
Gibbs, R.W. (1990). “Comprehending figurative referential description.” Journal of Experimental Psychology: Learning, Memory and Cognition 16: 56-66.
Gibson, J.J. (1972). “A theory of direct visual perception.” In J.J. Royce and W.W. Rozeboom (eds.), The Psychology of Knowing. New York: Gordon & Breach; also in Noë and Thompson (2002).
Gibson, J.J. ([1979] 1986). The Ecological Approach to Visual Perception. Hillsdale, NJ: Erlbaum.
Gigerenzer, G. and K. Hug (1992). “Domain-specific reasoning: Social contracts, cheating, and perspective change.” Cognition 43: 127-171.
Gilbert, M. (1989). On Social Facts. Princeton: Princeton University Press.
Gilbert, M. (2000). Sociality and Responsibility. Lanham, MD: Rowman and Littlefield.
Gill, D. and R. Mayou (2000). “Delirium.” In The New Oxford Textbook of Psychiatry. Oxford: Oxford University Press.
Ginet, C. (1990). On Action. Cambridge: Cambridge University Press.
Giros, B., M. Jaber, S.R. Jones, R.M. Wightman, and M.G. Caron (1996). “Hyperlocomotion and indifference to cocaine and amphetamine in mice lacking the dopamine transporter.” Nature 379: 606-612.
Glennan, S.S. (2002). “Rethinking mechanistic explanation.” Philosophy of Science 69: S342-S353.
Globus, G.G. (1992). “Towards a noncomputational cognitive neuroscience.” Journal of Cognitive Neuroscience 4: 299-310.
Glock, H.J. (2000). “Animals, thoughts, and concepts.” Synthese 123: 35-64.
Glucksberg, S., P. Gildea, and H.B. Bookin (1982). “On understanding nonliteral speech: Can people ignore metaphors?” Journal of Verbal Learning and Verbal Behaviour 21: 85-98.
Gola, E. (2005). Metafora e mente meccanica. Cagliari: CUEC.
Goldman, A. (1970). A Theory of Human Action. Princeton: Princeton University Press.
Goldman, A. (1993). “The psychology of folk psychology.” Behavioral and Brain Sciences 16: 15-28.
Goldman, A. (1995). “Interpretation psychologized.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford: Blackwell.
Goldman, A. and C.S. Sripada (2005). “Simulationist models of face-based emotion recognition.” Cognition 94: 193-213.
Gopnik, A. (1993). “How we know our minds: The illusion of first person knowledge of intentionality.” Behavioral and Brain Sciences 16: 1-14.
Gopnik, A. and A. Meltzoff (1994). “Minds, bodies, and persons: Young children’s understanding of the self and others as reflected in imitation and theory of mind research.” In S. Parker, R. Mitchell, and M. Boccia (eds.), Self-Awareness in Animals and Humans. Cambridge: Cambridge University Press.
Gordon, R.M. (1995). “Folk psychology as simulation.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford: Blackwell.
Gordon, R.M. (1996). “Radical simulationism.” In P. Carruthers and P.K. Smith (eds.), Theories of Theories of Mind. Cambridge: Cambridge University Press.
Graham, G. and R. Kennedy (2004). “Review of Being No One, a book by Thomas Metzinger.” Mind 113: 369-372.
Gray, J.A. (1998). “Creeping up on the hard question of consciousness.” In S.R. Hameroff, A.W. Kaszniak, and A.C. Scott (eds.), Toward a Science of Consciousness II: The Second Tucson Discussions and Debates. Cambridge, MA: MIT Press.
Gray, J.A. (2003). “How are qualia coupled to functions?” Trends in Cognitive Sciences 7: 192-194.
Gray, J.A., S. Chopping, J. Nunn, D. Parslow, L. Gregory, S. Williams, M.J. Brammer, and S. Baron-Cohen (2002). “Implications of synaesthesia for functionalism: Theory and experiments.” Journal of Consciousness Studies 9: 5-31.
Gray, J.A., S.C.R. Williams, J. Nunn, and S. Baron-Cohen (1997). “Possible implications of synaesthesia for the hard question of consciousness.” In Baron-Cohen and Harrison (1997).
Gray, R. (2001a). “Cognitive modules, synaesthesia and the constitution of psychological natural kinds.” Philosophical Psychology 14: 65-82.
Gray, R. (2001b). “Synaesthesia and misrepresentation.” Philosophical Psychology 14: 339-346.
Gregory, R.L. (1970). The Intelligent Eye. New York: McGraw-Hill.
Gregory, R.L. (1980). “Perceptions as hypotheses.” Philosophical Transactions of the Royal Society B 290: 181-197; also in Noë and Thompson (2002).
Grice, P. (1975). “Logic and conversation.” In P. Cole and J.L. Morgan (eds.), Syntax and Semantics: Speech Acts, Vol. 3. New York: Academic Press.
Griffiths, P.E. (1997). What Emotions Really Are. Chicago: University of Chicago Press.
Griffiths, P.E. and K. Stotz (2000). “How the mind grows: A developmental perspective on the biology of cognition.” Synthese 122: 29-51.
Grossenbacher, P.G. and C.T. Lovelace (2001). “Mechanisms of synesthesia: Cognitive and physiological constraints.” Trends in Cognitive Sciences 5: 36-41.
Grünbaum, A. (1984). The Foundations of Psychoanalysis. Berkeley: University of California Press.
Grush, R. (2000). “Self, world, and space: The meaning and mechanisms of ego- and allocentric spatial representation.” Brain and Mind 1: 59-92.
Grush, R. (2002). “Cognitive science.” In P. Machamer and M. Silberstein (eds.), Guide to Philosophy of Science. Oxford: Blackwell.
Grush, R. (2003). “In defense of some ‘Cartesian’ assumptions concerning the brain and its operation.” Biology and Philosophy 18: 53-93.
Haden, C.A., R.A. Haine, and R. Fivush (1997). “Developing narrative structure in parent-child reminiscing across the preschool years.” Developmental Psychology 33: 295-307.
Hahn, L.E. (ed.) (1997). The Philosophy of Roderick M. Chisholm. Chicago, Ill.: Open Court.
Halbwachs, M. ([1950] 1980). The Collective Memory. Ed. M. Douglas. New York: Harper and Row.
Hanson, N.R. (1961). Patterns of Discovery. Cambridge: Cambridge University Press.
Happé, F. (1995). “Understanding minds and metaphors: Insights from the study of figurative language in autism.” Metaphor and Symbolic Activity 10: 275-295.
Hardin, C.L. (1988). Color for Philosophers: Unweaving the Rainbow. Indianapolis: Hackett.
Harley, K. and E. Reese (1999). “Origins of autobiographical memory.” Developmental Psychology 35: 1338-1348.
Harman, G. (1982). “Conceptual role semantics.” Notre Dame Journal of Formal Logic 23: 242-256.
Harman, G. (1986). Change in View: Principles of Reasoning. Cambridge, MA: MIT Press.
Harman, G. (1999). “Moral philosophy meets social psychology: Virtue ethics and the fundamental attribution error.” Proceedings of the Aristotelian Society 99: 315-331.
Harnad, S. (1996). “Computation is just interpretable symbol manipulation; cognition isn’t.” Minds and Machines 4: 379-390.
Harnish, R. (2002). Minds, Brains, Computers: The Foundations of Cognitive Science. Oxford: Blackwell.
Harrison, J.E. (2001). Synaesthesia: The Strangest Thing. Oxford: Oxford University Press.
Harrison, J.E. and S. Baron-Cohen (1997). “Synaesthesia: An introduction.” In Baron-Cohen and Harrison (1997).
Hart, J.T. (1965). “Memory and the feeling of knowing experience.” Journal of Educational Psychology 56: 208-216.
Hartshorne, H. and M. May (1928). Studies in the Nature of Character, Vol. 1: Studies in Deceit. New York: Macmillan.
Hatfield, G. (1995). “Philosophy of psychology as philosophy of science.” In D. Hull, M. Forbes, and R. Burian (eds.), PSA 1994, Vol. 2. East Lansing, MI: Philosophy of Science Association.
Hatfield, G. (2002). “Psychology, philosophy, and cognitive science: Reflections on the history and philosophy of experimental psychology.” Mind and Language 17: 207-232.
Haugeland, J. (1997). “What is mind design?” In J. Haugeland (ed.), Mind Design II. Cambridge, MA: MIT Press.
Haugeland, J. (1998). “Representational genera.” In Id., Having Thought. Cambridge, MA: Harvard University Press.
Hauser, L. (2005). “Behaviorism.” The Internet Encyclopedia of Philosophy (http://www.iep.utm.edu/b/behavior.htm).
Heal, J. (1995). “Simulation and cognitive penetrability.” Mind and Language 11: 44-67.
Heider, F. (1958). The Psychology of Interpersonal Relations. New York: Wiley.
Heider, F. and M. Simmel (1944). “An experimental study of apparent behavior.” American Journal of Psychology 57: 243-259.
Heil, J. (1992). The Nature of True Minds. Cambridge: Cambridge University Press.
Heyes, C.M. (1993). “Anecdotes, training, trapping and triangulating: Do animals attribute mental states?” Animal Behaviour 46: 177-188.
Heyes, C.M. (1998). “Theory of mind in nonhuman primates.” Behavioral and Brain Sciences 21: 101-148.
Hirst, W. and D. Manier (1996). “Remembering as communication: A family recounts its past.” In D.C. Rubin (ed.), Remembering Our Past. Cambridge: Cambridge University Press.
Hirst, W., D. Manier, and I. Apetroaia (1997). “The social construction of the remembered self: Family recounting.” In J. Snodgrass and R. Thompson (eds.), The Self Across Psychology. New York: New York Academy of Sciences.
Hirst, W., D. Manier, and A. Cuc (2003). “The construction of a collective memory.” In B. Kokinov and W. Hirst (eds.), Constructive Memory. Sofia: New Bulgarian University.
Hodgson, D. (2002). “Quantum physics, consciousness, and free will.” In Kane (2002).
Hoerl, C. (1999). “Memory, amnesia, and the past.” Mind and Language 14: 227-251.
Hoerl, C. and T. McCormack (2005). “Joint reminiscing as joint attention to the past.” In N. Eilan, C. Hoerl, T. McCormack, and J. Roessler (eds.), Joint Attention, Communication, and Other Minds: Issues in Philosophy and Psychology. Oxford: Oxford University Press.
Holt, D.L. (1989). “Social psychology and practical reasoning: An empirical challenge to the possibility of practical reasoning.” The Philosophical Forum 20: 311-325.
Holt, D.L. (1993). “Rationality is hard work: An alternative interpretation of the disruptive effects of thinking about reasons.” Philosophical Psychology 6: 251-266.
Holt, R. (1989). Freud Reappraised. New York: Guilford.
Honderich, T. (1988). A Theory of Determinism. Oxford: Oxford University Press.
Hopfield, J.J. (1982). “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79: 2554-2558.
Horgan, T. and J. Tienson (1996). Connectionism and the Philosophy of Psychology. Cambridge, MA: MIT Press.
Horst, S.W. (1996). Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley: University of California Press.
Horst, S.W. (1999). “Computational theory of mind.” In R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
Horst, S.W. (2005). “The computational theory of mind.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/fall2005/entries/computational-mind/).
Hull, C.L. (1943). The Principles of Behavior. New York: Appleton-Century-Crofts.
Hume, D. ([1739-40] 2000). A Treatise of Human Nature. Oxford: Oxford University Press.
Hume, D. ([1748] 2000). An Enquiry Concerning Human Understanding. Oxford: Oxford University Press.
Humphrey, N.K. (1976). “The social function of intellect.” In P. Bateson and R.A. Hinde (eds.), Growing Points in Ethology. Cambridge: Cambridge University Press.
Hurley, S. (1994). “Unity and objectivity.” In C. Peacocke (ed.), Objectivity, Simulation, and the Unity of Consciousness. Oxford: Oxford University Press.
Hurley, S. (1998). Consciousness in Action. Cambridge, MA: Harvard University Press.
Hurley, S. (2000). “Clarifications: Responses to Kobes and Kinsbourne.” Mind and Language 15: 556-561.
Hurley, S. (2003). “Animal action in the space of reasons.” Mind and Language 18: 231-256.
Hurley, S. (forthcoming). “Making sense of animals.” In S. Hurley and M. Nudds (eds.), Rational Animals? Oxford: Oxford University Press.
Hurley, S. and A. Noë (forthcoming). “Can hunter-gatherers hear colour?” In G. Brennan, R. Goodin, and M. Smith (eds.), Common Minds: Essays in Honor of Philip Pettit. Oxford: Oxford University Press.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Iacoboni, M., I. Molnar-Szakacs, V. Gallese, G. Buccino, J.C. Mazziotta, and G. Rizzolatti (2005). “Grasping the intentions of others with one’s own mirror neuron system.” PLoS Biology 3: e79.
Inhelder, B. and J. Piaget ([1955] 1958). The Growth of Logical Thinking from Childhood to Adolescence. London: Routledge.
Isen, A.M. and H. Levin (1972). “Effect of feeling good on helping: Cookies and kindness.” Journal of Personality and Social Psychology 21: 384-388.
Jackson, F. (1986). “What Mary didn’t know.” Journal of Philosophy 83: 291-295.
Jacob, P. (1996). “The dependence of thought on social practice.” Unpublished manuscript.
Jacob, P. and M. Jeannerod (2003). Ways of Seeing: The Scope and Limits of Visual Cognition. Oxford: Oxford University Press.
Jacob, P. and M. Jeannerod (2005). “The motor theory of social cognition: A critique.” Trends in Cognitive Sciences 9: 21-25.
James, W. ([1890] 1984). The Principles of Psychology. Cambridge, MA: Harvard University Press.
James, W. ([1892] 1961). Psychology: The Briefer Course. New York: Harper.
Janowsky, J.S., A.P. Shimamura, and L.R. Squire (1989). “Memory and metamemory: Comparisons between patients with frontal lobe lesions and amnesic patients.” Psychobiology 17: 3-11.
Jellema, T., C.I. Baker, M.W. Oram, and D.I. Perrett (2002). “Cell populations in the banks of the superior temporal sulcus of the macaque and imitation.” In A. Meltzoff (ed.), The Imitative Mind: Development, Evolution and the Brain Bases. Cambridge: Cambridge University Press.
Jervis, G. (1993). Fondamenti di psicologia dinamica. Milano: Feltrinelli.
Johnson, D.M. and C.E. Erneling (eds.) (1997). The Future of the Cognitive Revolution. Oxford: Oxford University Press.
Johnson, M.K. (1988). “Discriminating the origin of information.” In T.F. Oltmanns and B.A. Maher (eds.), Delusional Beliefs. New York: Wiley.
Johnson, M.K., M.A. Foley, and A.G. Suengas (1988). “Phenomenal characteristics of memories for perceived and imagined autobiographical events.” Journal of Experimental Psychology: General 117: 371-376.
Johnson, M.K., S. Hashtroudi, and D.S. Lindsay (1993). “Source monitoring.” Psychological Bulletin 114: 3-28.
Johnson-Laird, P.N. (1983). Mental Models. Cambridge: Cambridge University Press.
Johnson-Laird, P.N. and B. Bara (1984). “Syllogistic inference.” Cognition 16: 1-61.
Johnson-Laird, P.N. and R.M. Byrne (1991). Deduction. Hillsdale, NJ: Erlbaum.
Kahneman, D., P. Slovic, and A. Tversky (eds.) (1982). Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.
Kamtekar, R. (2004). “Situationism and virtue ethics on the content of our character.” Ethics 114: 458-491.
Kane, R. (1996). The Significance of Free Will. Oxford: Oxford University Press.
Kane, R. (ed.) (2002). The Oxford Handbook of Free Will. Oxford: Oxford University Press.
Kane, R. (2005). A Contemporary Introduction to Free Will. Oxford: Oxford University Press.
Kant, I. ([1781/1787] 1958). Critique of Pure Reason. London: Macmillan.
Kant, I. ([1788] 1956). Critique of Practical Reason. Indianapolis: Bobbs-Merrill.
Kapitan, T. (2002). “A master argument for incompatibilism?” In Kane (2002).
Keeley, B.L. (2002). “Making sense of the senses: Individuating modalities in humans and other animals.” Journal of Philosophy 99: 5-28.
Keltner, D., P. Ellsworth, and K. Edwards (1993). “Beyond simple pessimism: Effects of sadness and anger on social perception.” Journal of Personality and Social Psychology 64: 740-752.
Kenny, A. (1963). Action, Emotion, and Will. London: Routledge & Kegan Paul.
Kenny, A. (1988). The Self. Milwaukee: Marquette University Press.
Kihlstrom, J.F. (1987). “The cognitive unconscious.” Science 237: 1445-1452.
Kim, J. (1993). Supervenience and Mind. Cambridge: Cambridge University Press.
Kitcher, P. (1992). Freud’s Dream: A Complete Interdisciplinary Science of Mind. Cambridge, MA: MIT Press.
Klauer, K.C. (1999). “On the normative justification for information gain in Wason’s selection task.” Psychological Review 106: 215-222.
Klein, K.L. (2000). “On the emergence of memory in historical discourse.” Representations 69: 127-150.
Koch, C. (1999). Biophysics of Computation: Information Processing in Single Neurons. Oxford: Oxford University Press.
Koch, C. (2004). The Quest for Consciousness: A Neuroscientific Approach. Denver, CO: Roberts & Co.
Kohler, I. (1961). “Experiments with goggles.” Scientific American 206: 62-86.
Kohler, E., M.A. Umiltà, C. Keysers, V. Gallese, L. Fogassi, and G. Rizzolatti (2001). “Auditory mirror neurons in the ventral premotor cortex of the monkey.” Society for Neuroscience Abstracts 27: 129.9.
Kohler, E., C. Keysers, M.A. Umiltà, L. Fogassi, V. Gallese, and G. Rizzolatti (2002). “Hearing sounds, understanding actions: Action representation in mirror neurons.” Science 297: 846-848.
Koriat, A. (1993). “How do we know that we know? The accessibility model of the feeling of knowing.” Psychological Review 100: 609-639.
Kovach, A. and C. DeLancey (2005). “On emotion and the explanation of behavior.” Nous 39: 106-122.
Kriegel, U. (2004). “Consciousness and self-consciousness.” The Monist 87: 185-209.
Krueger, J. and D. Funder (2004). “Toward a balanced social psychology: Causes, consequences, and cures for the problem-seeking approach to social behavior and cognition.” Behavioral and Brain Sciences 27: 313-367.
Kuhn, T. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Kunda, Z. (1990). “The case for motivated reasoning.” Psychological Bulletin 108: 480-498.
Kunda, Z. (1999). Social Cognition. Cambridge, MA: MIT Press.
Lackner, J.R. (1973). “Resolving ambiguity: Effect of biasing context in the unattended ear.” Cognition 1: 359-372.
Lachter, J. and F. Durgin (1999). “Metacontrast masking functions: A question of speed?” Journal of Experimental Psychology: Human Perception and Performance 25: 936-947.
Lachter, J., F. Durgin, and T. Washington (2000). “Disappearing percepts: Evidence for retention failure in metacontrast masking.” Visual Cognition 7: 269-279.
Landau, B. and L. Gleitman (eds.) (1994). The Acquisition of the Lexicon. Cambridge, MA: MIT Press.
Latané, B. and J. Darley (1968). “Bystander intervention in emergencies: Diffusion of responsibility.” Journal of Personality and Social Psychology 8: 377-383.
Latané, B. and J. Darley (1970). The Unresponsive Bystander: Why Doesn’t He Help? New York: Appleton-Century-Crofts.
Lazarus, R.S. (1982). “Thoughts on the relations between emotion and cognition.” American Psychologist 37: 1019-1024.
Lazarus, R.S. (1991). Emotion and Adaptation. Oxford: Oxford University Press.
Leslie, A.M. (1987). “Pretense and representation: The origins of ‘theory of mind’.” Psychological Review 94: 412-426.
Levy, J. (1977). “Manifestations and implications of shifting hemi-inattention in commissurotomy patients.” Advances in Neurology 18: 83-92.
Levy, J. (1990). “Regulation and generation of perception in the asymmetric brain.” In C. Trevarthen (ed.), Brain Circuits and Functions of the Mind: Essays in Honour of Roger W. Sperry. Cambridge: Cambridge University Press.
Levy, J. and C. Trevarthen (1976). “Metacontrol of hemispheric function in human split-brain patients.” Journal of Experimental Psychology: Human Perception and Performance 2: 299-312.
Levy, J., C. Trevarthen, and R.W. Sperry (1972). “Perception of bilateral chimeric figures following hemispheric deconnexion.” Brain 95: 61-78.
Lewis, D. (1972). “Psychophysical and theoretical identifications.” Australasian Journal of Philosophy 50: 249-258.
Lewis, D. (1980). “Mad pain and Martian pain.” In N. Block (ed.), Readings in Philosophy of Psychology, Vol. 1. Cambridge, MA: Harvard University Press.
Lewis, D. (1981). “Are we free to break the laws?” Theoria 47: 291-298.
Lewis, M. and J. Brooks-Gunn (1979). Social Cognition and the Acquisition of the Self. New York: Plenum Press.
Libet, B. (1981). “The experimental evidence for subjective referral of a sensory experience backwards in time.” Philosophy of Science 48: 182-197.
Libet, B. (1985). “Unconscious cerebral initiative and the role of the conscious will in voluntary action.” The Behavioral and Brain Sciences 8: 529-566.
Libet, B. (2002). “Do we have free will?” In Kane (2002).
Libet, B., A. Freeman, and K. Sutherland (eds.) (1999). The Volitional Brain: Towards a Neuroscience of Free Will. Thorverton: Imprint Academic.
Lipowski, Z.J. (1990). Delirium: Acute Confusional States. New York: Oxford University Press.
Lloyd, D. (2004). Radiant Cool: A Novel Theory of Consciousness. Cambridge, MA: Bradford Books/MIT Press.
Loar, B. (1980). Mind and Meaning. Cambridge: Cambridge University Press.
Locke, J. ([1690] 1975). An Essay Concerning Human Understanding. Oxford: Oxford University Press.
Lockwood, M. (1989). Mind, Brain and the Quantum: The Compound “I”. Oxford: Blackwell.
Loftus, E. (2003). “Our changeable memories: Legal and practical implications.” Nature Reviews Neuroscience 4: 231-234.
Loftus, E. and K. Ketcham (1994). The Myth of Repressed Memory. New York: Griffin.
Logothetis, N.K., D.A. Leopold, and D.L. Sheinberg (2003). “Neural mechanisms of perceptual organization.” In N. Osaka (ed.), Neural Basis of Consciousness. Advances in Consciousness Research, Vol. 49. Amsterdam: John Benjamins.
Looren de Jong, H. (2001). “Introduction: A symposium on explanatory pluralism.” Theory & Psychology 11: 731-735.
Lormand, E. (1985). “Toward a theory of moods.” Philosophical Studies 47: 385-407.
Lowe, E.J. (2000). An Introduction to the Philosophy of Mind. Cambridge: Cambridge University Press.
Lucas, J.R. (1996). “Minds, machines, and Gödel: A retrospect.” In P.J.R. Millican and A. Clark (eds.), Machines and Thought: The Legacy of Alan Turing. Oxford: Clarendon.
Luria, A.R. (1976). Cognitive Development. Its Cultural and Social Foundations. Cambridge, MA: Harvard University Press.
Machamer, P.K., L. Darden, and C. Craver (2000). “Thinking about mechanisms.” Philosophy of Science 67: 1-25.
MacLeod, C.M. and K. Dunbar (1988). “Training and Stroop-like interference: Evidence for a continuum of automaticity.” Journal of Experimental Psychology: Learning, Memory, and Cognition 14: 126-135.
Macmillan, M. (1991). Freud Evaluated. Amsterdam: North-Holland.
MacNamara, J. (1986). A Border Dispute. The Place of Logic in Psychology. Cambridge, MA: MIT Press.
Malcolm, N. (1972). “Thoughtless brutes.” Proceedings and Addresses of the American Philosophical Association 46: 5-20.
Marcel, A. (1993). “Slippage in the unity of consciousness.” In G.R. Bock and J. Marsh (eds.), Experimental and Theoretical Studies of Consciousness. Chichester: John Wiley and Sons.
Marcel, A. (1994). “What is relevant to the unity of consciousness?” In C. Peacocke (ed.), Objectivity, Simulation, and the Unity of Consciousness. Oxford: Oxford University Press.
Marcel, A., R. Tegnér, and I. Nimmo-Smith (2004). “Anosognosia for plegia: Specificity, extension, partiality and disunity of bodily awareness.” Cortex 40: 19-40.
Marconi, D. (1997). Lexical Competence. Cambridge, MA: MIT Press.
Marconi, D. (2001). Filosofia e scienza cognitiva. Roma-Bari: Laterza.
Marconi, D. (forthcoming). “Contro la mente estesa.” Sistemi Intelligenti.
Marinelli, M. and F.J. White (2000). “Enhanced vulnerability to cocaine self-administration is associated with elevated impulse activity of midbrain dopamine neurons.” Journal of Neuroscience 20: 8876-8885.
Marks, C. (1981). Commissurotomy, Consciousness and Unity of Mind. Cambridge, MA: MIT Press.
Marks, J. (1982). “A theory of emotion.” Philosophical Studies 42: 227-242.
Marks, L. ([1975] 1997). “On coloured-hearing synaesthesia: Cross-modal translations of sensory dimensions.” Reprinted in Baron-Cohen and Harrison (1997).
Marr, D. (1982). Vision. New York: Freeman.
Martin, M.G.F. (2001). “Out of the past: Episodic recall as retained acquaintance.” In C. Hoerl and T. McCormack (eds.), Time and Memory: Issues in Philosophy and Psychology. Oxford: Oxford University Press.
Mason, K., C. Sripada, and S. Stich (forthcoming). “The philosophy of psychology.” In D. Moran (ed.), Routledge Companion to Twentieth-Century Philosophy.
Mattingley, J.B., A.N. Rich, and G. Yelland (2001). “Unconscious priming eliminates automatic binding of colour and alphanumeric form in synaesthesia.” Nature 410: 579-582.
Maudlin, T. (1989). “Computation and consciousness.” Journal of Philosophy 86: 407-432.
McCauley, R. (1996). “Explanatory pluralism and the co-evolution of theories in science.” In R. McCauley (ed.), The Churchlands and their Critics. Oxford: Blackwell.
McCormack, T. and C. Hoerl (1999). “Memory and temporal perspective: The role of temporal frameworks in memory development.” Developmental Review 19: 154-182.
McCormack, T. and C. Hoerl (2001). “The child in time: Temporal concepts and self-consciousness in the development of episodic memory.” In C. Moore and K. Lemmon (eds.), The Self in Time: Developmental Perspectives. Hillsdale, NJ: Erlbaum.
McCormack, T. and C. Hoerl (2005). “Children’s reasoning about the causal significance of the temporal order of events.” Developmental Psychology 41: 54-63.
McCulloch, W.S. and W.H. Pitts (1943). “A logical calculus of the ideas immanent in nervous activity.” Bulletin of Mathematical Biophysics 5: 115-133.
McDowell, J. (1992). Mind and World. Cambridge, MA: Harvard University Press.
McDowell, J. (1994). “The content of perceptual experience.” Philosophical Quarterly 44: 190-205; also in Noë and Thompson (2002).
McGilvray, J. (2001). “Chomsky on the creative aspects of language use and its implications for lexical semantic studies.” In P. Bouillon and F. Busa (eds.), The Language of Word Meaning. Cambridge: Cambridge University Press.
McGinn, C. ([1988] 1997). “Consciousness and content.” Reprinted in N.J. Block, O. Flanagan, and G. Güzeldere (eds.), The Nature of Consciousness: Philosophical Debates. Cambridge, MA: MIT Press.
McGinn, C. (1999). The Mysterious Flame: Conscious Minds in a Material World. New York: Basic Books.
McGurk, H. and J. MacDonald (1976). “Hearing lips and seeing voices.” Nature 264: 746-748.
Mackintosh, N.J. (1994). “Intelligence in evolution.” In J. Khalfa (ed.), What is Intelligence? Cambridge: Cambridge University Press.
McNally, R.J. (2003). Remembering Trauma. Cambridge, MA: Harvard University Press.
Medin, D. and C. Aguilar (1999). “Categorization.” In R. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
Meini, C. (2001). La psicologia ingenua. Milano: McGraw-Hill.
Mele, A. (1987). Irrationality: An Essay on Akrasia, Self-Deception, and Self-Control. New York: Oxford University Press.
Mele, A. (1995). Autonomous Agents: From Self-Control to Autonomy. New York: Oxford University Press.
Mele, A. (1997). “Strength of motivation and being in control: Learning from Libet.” American Philosophical Quarterly 34: 319-332.
Mele, A. (1999). “Twisted self-deception.” Philosophical Psychology 12: 117-137.
Mele, A. (2001). Self-Deception Unmasked. Princeton: Princeton University Press.
Mele, A. (2003). “Emotion and desire in self-deception.” In A. Hatzimoysis (ed.), Philosophy and the Emotions. Cambridge: Cambridge University Press.
Mele, A. (forthcoming). “Self-deception and delusions.” In J. Fernandez and T. Bayne (eds.), Delusions, Self-Deception, and Affective Influences on Belief-Formation. New York: Psychology Press.
Mellor, D.H. (1989). “How much of the mind is a computer?” In P. Slezak and W.R. Albury (eds.), Computers, Brains and Minds. Dordrecht: Kluwer, 47-69.
Meltzoff, A.N. (2002). “Elements of a developmental theory of imitation.” In A.N. Meltzoff and W. Prinz (eds.), The Imitative Mind. Development, Evolution and Brain Bases. Cambridge: Cambridge University Press.
Merritt, M. (2000). “Virtue ethics and situationist personality psychology.” Ethical Theory and Moral Practice 3: 365-383.
Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.
Metzinger, T. (2004). “The subjectivity of subjective experience.” Networks 3-4: 33-64 (http://lgxserve.ciseca.uniba.it/lei/ai/networks/04/).
Michaels, C. and C. Carello (1981). Direct Perception. Englewood Cliffs, NJ: Prentice-Hall.
Michotte, A. (1963). The Perception of Causality. New York: Basic Books.
Middleton, D. and S.D. Brown (2005). The Social Psychology of Experience: Studies in Remembering and Forgetting. London: Sage.
Milgram, S. (1969). Obedience to Authority. New York: Harper and Row.
Miller, C. (2003). “Social psychology and virtue ethics.” The Journal of Ethics 7: 365-392.
Millikan, R. (1984). Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.
Mills, C.B., E.H. Boteler, and G.K. Oliver (1999). “Digit synaesthesia: A case study using a Stroop-type test.” Cognitive Neuropsychology 16: 181-191.
Minsky, M. and S. Papert (1969). Perceptrons. Cambridge, MA: MIT Press.
Moor, J. (1982). “Split-brains and atomic persons.” Philosophy of Science 49: 91-106.
Morgan, D., K.A. Grant, H.D. Gage, R.H. Mach, J.R. Kaplan, O. Prioleau, S.H. Nader, N. Buchheimer, R.L. Ehrenkaufer, and M.A. Nader (2002). “Social dominance in monkeys: Dopamine D2 receptors and cocaine self-administration.” Nature Neuroscience 5: 169-174.
Moscovitch, M. and C. Umiltà (1989). “Modularity and neuropsychology: The organisation of attention and memory.” In M. Schwartz (ed.), Modular Processes in Dementia. Cambridge, MA: MIT Press.
Motluk, A. (1994). “The sweet smell of purple.” New Scientist 143: 32-37.
Mullen, M.K. and S. Yi (1995). “The cultural context of talk about the past: Implications for the development of autobiographical memory.” Cognitive Development 10: 407-419.
Murphy, G.L. (2002). The Big Book of Concepts. Cambridge, MA: MIT Press.
Nadel, L. and M. Piattelli-Palmarini (2003). “What is cognitive science?” In L. Nadel (ed.), Encyclopedia of Cognitive Science. London: Macmillan.
Nagel, T. (1974). “What is it like to be a bat?” Philosophical Review 83: 435-451.
Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.
Nahmias, E. (2002). “When consciousness matters: A critical review of Daniel Wegner’s The Illusion of Conscious Will.” Philosophical Psychology 15: 527-541.
Nahmias, E. (in preparation). “Free will and the threat of social psychology.”
Nash, R. (1989). “Cognitive theories of emotion.” Noûs 23: 481-504.
Neisser, U. (1976). Cognition and Reality. San Francisco: Freeman.
Nelkin, D. (forthcoming). “Freedom, responsibility, and the challenge of situationism.” Midwest Studies in Philosophy.
Nelson, K. and R. Fivush (2004). “The emergence of autobiographical memory: A social cultural developmental theory.” Psychological Review 111: 486-511.
Newell, A. and H.A. Simon (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
Newell, A. and H.A. Simon (1976). “Computer science as an empirical enquiry: Symbols and search.” Communications of the ACM 19: 113-126.
Nichols, S. and S. Stich (2003). Mindreading. Oxford: Clarendon Press.
Nisbett, R. and N. Bellows (1977). “Verbal reports about causal influences on social judgments: Private access versus public theories.” Journal of Personality and Social Psychology 35: 613-624.
Nisbett, R. and T. Wilson (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review 84: 231-259.
Nisbett, R. and L. Ross (1980). Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall.
Noë, A. and E. Thompson (eds.) (2002). Vision and Mind. Cambridge, MA: MIT Press.
Norman, J. (2001). “Two visual systems and two theories of perception: An attempt to reconcile the constructivist and ecological approaches.” Behavioral and Brain Sciences 24: 73-137.
Nozick, R. (1981). Philosophical Explanations. Cambridge, MA: Harvard University Press.
Nudds, M. (2003). “The significance of the senses.” Proceedings of the Aristotelian Society 104: 32-51.
Nunn, J.A., L.J. Gregory, M. Brammer, S.C.R. Williams, D.M. Parslow, M.J. Morgan, R.G. Morris, E.T. Bullmore, S. Baron-Cohen, and J.A. Gray (2002). “Functional magnetic resonance imaging of synaesthesia: Activation of V4/V8 by spoken words.” Nature Neuroscience 5: 371-375.
Oaksford, M. and N. Chater (1998). Rationality in an Uncertain World. Hove: Psychology Press.
Oaksford, M. and N. Chater (2003). “Optimal data selection: Revision, review, and reevaluation.” Psychonomic Bulletin and Review 10: 289-318.
O’Brien, G. (1998). “Connectionism, analogicity and mental content.” Acta Analytica 22: 111-131.
O’Connor, T. (2000). Persons and Causes. Oxford: Oxford University Press.
Odgaard, E.C., J.H. Flowers, and H.L. Bradman (1999). “An investigation of the cognitive and perceptual dynamics of a colour-digit synaesthete.” Perception 28: 651-664.
Ofshe, R. and E. Watters (1994). Making Monsters: False Memories, Psychotherapy and Sexual Hysteria. New York: Scribner’s.
Öhman, A., U. Dimberg, and F. Esteves (1989). “Preattentive activation of aversive emotions.” In T. Archer and L.G. Nilsson (eds.), Aversion, Avoidance, and Anxiety. Hillsdale, NJ: Erlbaum.
Öhman, A. and J.J.F. Soares (1993). “On the automatic nature of phobic fear: Conditioned electrodermal responses to masked fear-relevant stimuli.” Journal of Abnormal Psychology 102: 121-132.
O’Leary-Hawthorne, J. (1993). “Belief and behavior.” Mind & Language 8: 461-486.
Olson, E. (1997). The Human Animal: Personal Identity without Psychology. Oxford: Oxford University Press.
Ó Nualláin, S., P. Mc Kevitt, and E. Mac Aogain (eds.) (1997). Two Sciences of Mind: Readings in Cognitive Science and Consciousness. Philadelphia: John Benjamins.
O’Regan, J.K. and A. Noë (2001). “A sensorimotor account of vision and visual consciousness.” Behavioral and Brain Sciences 24: 939-1031.
Origgi, G. and D. Sperber (2000). “Evolution, communication and the proper function of language.” In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind: Modularity, Language and Meta-Cognition. Cambridge: Cambridge University Press.
Ortony, A., G.L. Clore, and A. Collins (1988). The Cognitive Structure of Emotions. Cambridge: Cambridge University Press.
Palmer, S. (1999). Vision Science. Cambridge, MA: MIT Press.
Panksepp, J. (1998). Affective Neuroscience. Oxford: Oxford University Press.
Papineau, D. (2000). “Functionalism.” In Concise Routledge Encyclopedia of Philosophy. London: Routledge.
Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon Press.
Peacocke, C. (1992). A Study of Concepts. Cambridge, MA: MIT Press.
Penrose, R. (1989). The Emperor’s New Mind. Oxford: Oxford University Press.
Penrose, R. (1994). Shadows of the Mind. Oxford: Oxford University Press.
Pereboom, D. (1995). “Conceptual structure and the individuation of content.” Philosophical Perspectives 9: 401-428.
Pereboom, D. (2001). Living without Free Will. Cambridge: Cambridge University Press.
Pereboom, D. and H. Kornblith (1991). “The metaphysics of irreducibility.” Philosophical Studies 61: 131-151.
Perkel, D.H. (1990). “Computational neuroscience: Scope and structure.” In E.L. Schwartz (ed.), Computational Neuroscience. Cambridge, MA: MIT Press.
Perner, J. (1991). Understanding the Representational Mind. Cambridge, MA: MIT Press.
Perrett, D.I., E.T. Rolls, and W. Caan (1982). “Visual neurones responsive to faces in the monkey temporal cortex.” Experimental Brain Research 47: 329-342.
Peterfreund, E. (1978). “Some critical comments on psychoanalytic conceptions of infancy.” International Journal of Psychoanalysis 59: 427-441.
Peters, W. and Y. Wilks (2003). “Data-driven detection of figurative language use in electronic language resources.” Metaphor and Symbol 18.
Pettit, P. (2001). A Theory of Freedom: From the Psychology to the Politics of Agency. Cambridge: Polity Press.
Piaget, J. (1956). Logic and Psychology. Manchester: Manchester University Press.
Piccinini, G. (2004a). “The first computational theory of mind and brain: A close look at McCulloch and Pitts’s ‘Logical calculus of ideas immanent in nervous activity’.” Synthese 141: 175-215.
Piccinini, G. (2004b). “Functionalism, computationalism, and mental contents.” Canadian Journal of Philosophy 34: 375-410.
Piccinini, G. (2004c). “Functionalism, computationalism, and mental states.” Studies in the History and Philosophy of Science 35: 811-833.
Piccinini, G. (2004d). “Computers.” (http://philsci-archive.pitt.edu/archive/00002016/).
Piccinini, G. (2006). “Computation without representation.” Philosophical Studies.
Piccinini, G. (forthcoming). “Computational modeling vs. computational explanation: Is everything a Turing machine, and does it matter to the philosophy of mind?” Australasian Journal of Philosophy.
Piccinini, G. (unpublished). “Symbols, strings, and spikes.”
Pietromonaco, P. and R. Nisbett (1992). “Swimming upstream against the fundamental attribution error: Subjects’ weak generalizations from the Darley and Batson study.” Social Behavior and Personality 10: 1-4.
Pinker, S. (2002). The Blank Slate: The Modern Denial of Human Nature. New York: Penguin.
Polanyi, M. (1966). The Tacit Dimension. New York: Doubleday.
Politzer, G. and I.A. Noveck (1991). “Are conjunction rule violations the result of conversational rule violations?” Journal of Psycholinguistic Research 20: 83-103.
Port, R.F. and T. van Gelder (eds.) (1995). Mind as Motion: Explorations in the Dynamics of Cognition. Cambridge, MA: MIT Press.
Pour-El, M.B. and J.I. Richards (1989). Computability in Analysis and Physics. Berlin: Springer.
Povinelli, D.J., A.M. Landry, L.A. Theall, B.R. Clark, and C.M. Castille (1999). “Development of young children’s understanding that the recent past is causally bound to the present.” Developmental Psychology 35: 1426-1439.
Premack, D. (1988). “Does the chimpanzee have a theory of mind? Revisited.” In Byrne and Whiten (1988).
Premack, D. and A.J. Premack (1995). “Intention as psychological cause.” In D. Sperber, D. Premack, and A.J. Premack (eds.), Causal Cognition: A Multidisciplinary Debate. Oxford: Clarendon Press.
Premack, D. and G. Woodruff (1978). “Does the chimpanzee have a ‘theory of mind’?” Behavioral and Brain Sciences 1: 515-526.
Prinz, J. (2002). Furnishing the Mind. Cambridge, MA: MIT Press.
Puccetti, R. (1981). “The case for mental duality: Evidence from split-brain data and other considerations.” Behavioral and Brain Sciences 4: 93-123.
Pullum, G.K. and B.C. Scholz (2002). “Empirical assessment of stimulus poverty arguments.” The Linguistic Review 19: 9-50.
Putnam, H. (1967). “Psychological predicates.” In W.H. Capitan and D.D. Merrill (eds.), Art, Mind and Religion. Pittsburgh: Pittsburgh University Press.
Putnam, H. (1975). “The meaning of ‘meaning’.” In Id., Philosophical Papers, Vol. II: Mind, Language, and Reality. Cambridge: Cambridge University Press.
Putnam, H. (1988). Representation and Reality. Cambridge, MA: MIT Press.
Putnam, H. (1999). The Threefold Cord. New York: Columbia University Press.
Pylyshyn, Z.W. (1984). Computation and Cognition. Cambridge, MA: MIT Press.
Pynte, J., M. Besson, F. Robichon, and J. Poli (1996). “The time-course of metaphor comprehension: An event-related potential study.” Brain and Language 55: 293-316.
Quine, W.V.O. ([1951] 1953). “Two dogmas of empiricism.” In Id., From a Logical Point of View. Cambridge, MA: Harvard University Press.
Quine, W.V.O. ([1953] 1966). “On mental entities.” In Id., The Ways of Paradox. New York: Random House.
Quine, W.V.O. ([1956] 1966). “Quantifiers and propositional attitudes.” In Id., The Ways of Paradox. New York: Random House.
Quine, W.V.O. (1960). Word and Object. Cambridge, MA: MIT Press.
Quine, W.V.O. (1987). Quiddities: An Intermittently Philosophical Dictionary. Cambridge, MA: Belknap Press of Harvard University Press.
Ramachandran, V.S. and E.M. Hubbard (2001a). “Psychophysical investigations into the neural basis of synaesthesia.” Proceedings of the Royal Society of London B 268: 979-983.
Ramachandran, V.S. and E.M. Hubbard (2001b). “Synaesthesia—A window into perception, thought and language.” Journal of Consciousness Studies 8: 3-34.
Ramachandran, V.S. and E.M. Hubbard (2003). “The phenomenology of synaesthesia.” Journal of Consciousness Studies 10: 49-57.
Ravenscroft, I. (2004). “Folk psychology as a theory.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/spr2004/entries/folkpsych-theory/).
Reese, E. (2002a). “Social factors in the development of autobiographical memory: The state of the art.” Social Development 11: 124-142.
Reese, E. (2002b). “A model of the origins of autobiographical memory.” In J.W. Fagen and H. Hayne (eds.), Progress in Infancy Research, vol. 2. Hillsdale, NJ: Erlbaum.
Reese, E., C.A. Haden, and R. Fivush (1993). “Mother-child conversations about the past: Relationships of style and memory over time.” Cognitive Development 8: 403-430.
Reali, F. and M.H. Christiansen (2003). “Reappraising the poverty of stimulus argument: A corpus analysis approach.” Proceedings of the 28th Boston University Conference on Language Development, online supplement (http://cnl.psych.cornell.edu/people/florencia.html).
Reichenbach, H. (1958). The Philosophy of Space and Time. New York: Dover.
Rey, G. (1988). “Toward a computational account of akrasia and self-deception.” In B. McLaughlin and A. Rorty (eds.), Perspectives on Self-Deception. Berkeley: University of California Press.
Rey, G. (1997). Contemporary Philosophy of Mind: A Contentiously Classical Approach. Oxford: Blackwell.
Rich, A.N. and J.B. Mattingley (2002). “Anomalous perception in synaesthesia: A cognitive neuroscience perspective.” Nature Reviews Neuroscience 3: 43-52.
Rich, E. (1983). Artificial Intelligence. New York: McGraw-Hill.
Ricoeur, P. (1969). “La question du sujet: le défi de la sémiologie.” In Id., Le conflit des interprétations. Paris: Le Seuil.
Rips, L.J. (1994). The Psychology of Proof: Deductive Reasoning in Human Thinking. Cambridge, MA: MIT Press.
Rivlin, R. and K. Gravelle (1984). Deciphering the Senses: The Expanding World of Human Perception. New York: Simon and Schuster.
Rizzolatti, G., R. Camarda, L. Fogassi, M. Gentilucci, G. Luppino, and M. Matelli (1988). “Functional organization of inferior area 6 in the macaque monkey: II. Area F5 and the control of distal movements.” Experimental Brain Research 71: 491-507.
Rizzolatti, G., L. Fadiga, L. Fogassi, and V. Gallese (2002). “From mirror neurons to imitation: Facts and speculations.” In A.N. Meltzoff and W. Prinz (eds.), The Imitative Mind: Development, Evolution and Brain Bases. Cambridge: Cambridge University Press.
Rock, I. (1983). The Logic of Perception. Cambridge, MA: MIT Press.
Rock, I. (1997). Indirect Perception. Cambridge, MA: MIT Press.
Rogers, Y., M. Scaife, and A. Rizzo (2005). “Interdisciplinarity: An emergent or engineered process?” In S.J. Derry, M.A. Gernsbacher, and C.D. Schunn (eds.), Interdisciplinary Collaboration: An Emerging Cognitive Science. Hillsdale, NJ: Erlbaum.
Rohrer, T. (2001). “The cognitive science of metaphor from philosophy to neuroscience.” Theoria et Historia Scientiarum 6: 27-42.
Romo, R., A. Hernández, A. Zainos, and E. Salinas (1998). “Somatosensory discrimination based on microstimulation.” Nature 392: 387-390.
Romo, R., A. Hernández, A. Zainos, C. Brody, and L. Lemus (2000). “Sensing without touching: Psychophysical performance based on cortical microstimulation.” Neuron 26: 273-278.
Rorty, A. (1988). “The deceptive self: Liars, layers, and lairs.” In B. McLaughlin and A. Rorty (eds.), Perspectives on Self-Deception. Berkeley: University of California Press.
Rosenthal, D.M. (1996). “A theory of consciousness.” In N.J. Block, O. Flanagan, and G. Guzeldere (eds.), The Nature of Consciousness: Philosophical Debates. Cambridge, MA: MIT Press.
Ross, L. and R. Nisbett (1991). The Person and the Situation: Perspectives of Social Psychology. New York: McGraw-Hill.
Routley, R. (1981). “Alleged problems in attributing beliefs, and intentionality, to animals.” Inquiry 24: 385-417.
Rowlands, M. (1999). The Body in Mind: Understanding Cognitive Processes. Cambridge: Cambridge University Press.
Rubel, L.A. (1985). “The brain as an analog computer.” Journal of Theoretical Neurobiology 4: 73-81.
Rubin, N. (2003). “Binocular rivalry and perceptual multi-stability.” Trends in Neurosciences 26: 289-291.
Rumelhart, D.E. and J.L. McClelland (1986). Parallel Distributed Processing. Cambridge, MA: MIT Press.
Russell, B. ([1912] 1967). The Problems of Philosophy. Oxford: Oxford University Press.
Russell, B. ([1918] 1986). Mysticism and Logic and Other Essays. London: Unwin Paperbacks.
Russell, B. (1959). My Philosophical Development. London: Allen and Unwin.
Russell, P. (1992). “Strawson’s way of naturalizing responsibility.” Ethics 102: 287-302.
Ryle, G. (1949). The Concept of Mind. Chicago: The University of Chicago Press.
Sabini, J., M. Siepmann, and J. Stein (2001). “The really fundamental attribution error in social psychological research.” Psychological Inquiry 12: 1-15.
Sacks, O. (1998). A Leg to Stand On. New York: Simon & Schuster.
Sagiv, N. and L.C. Robertson (2005). “Synaesthesia and the binding problem.” In L.C. Robertson and N. Sagiv (eds.), Synaesthesia: Perspectives from Cognitive Neuroscience. New York: Oxford University Press.
Sartre, J.-P. (1992). The Transcendence of the Ego. New York: Farrar, Straus and Giroux.
Savage-Rumbaugh, S. (1986). Ape Language: From Conditioned Response to Symbol. New York: Columbia University Press.
Savage-Rumbaugh, S. and K. McDonald (1988). “Deception and social manipulation in symbol-using apes.” In Byrne and Whiten (1988).
Schacter, D.L. (1983). “Feeling of knowing in episodic memory.” Journal of Experimental Psychology: Learning, Memory, and Cognition 9: 39-54.
Schacter, D.L. (1989). “On the relation between memory and consciousness: Dissociable interactions and conscious experience.” In H.L. Roediger III and F.I.M. Craik (eds.), Varieties of Memory and Consciousness: Essays in Honour of Endel Tulving. Hillsdale, NJ: Erlbaum.
Schacter, D.L. (1991). “Unawareness of deficit and unawareness of knowledge in patients with memory disorders.” In G. Prigatano and D.L. Schacter (eds.), Awareness of Deficit After Brain Injury. New York: Oxford University Press.
Schacter, D.L. (1995). “Memory distortion: History and current status.” In D.L. Schacter (ed.), Memory Distortion: How Minds, Brains, and Societies Reconstruct the Past. Cambridge, MA: Harvard University Press.
Schacter, D.L. (1996). Searching for Memory: The Brain, the Mind, and the Past. New York: Basic Books.
Schacter, D.L. and J.R. Worling (1985). “Attribute information and the feeling-of-knowing.” Canadian Journal of Psychology 39: 467-475.
Schechtman, M. (1994). “The truth about memory.” Philosophical Psychology 7: 3-18.
Scheutz, M. (ed.) (2002). Computationalism: New Directions. Cambridge, MA: MIT Press.
Schlosser, G. (1998). “Self-re-production and functionality: A systems-theoretical approach to teleological explanation.” Synthese 116: 303-354.
Schmitt, F.F. (ed.) (2003). Socializing Metaphysics: The Nature of Social Reality. Lanham, MD: Rowman and Littlefield.
Schwartz, B. and H. Schuman (forthcoming). “History, commemoration, and belief: Reformulating the concept of collective memory.” American Sociological Review.
Schwartz, R. (1994). Vision. Oxford: Blackwell.
Seager, W. (1999). Theories of Consciousness: An Introduction and Assessment. London: Routledge.
Searle, J.R. (1980). “Minds, brains, and programs.” Behavioral and Brain Sciences 3: 417-424.
Searle, J.R. (1983). Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
Searle, J.R. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press.
Searle, J.R. (1993). “Metaphor.” In A. Ortony (ed.), Metaphor and Thought. Cambridge: Cambridge University Press.
Sechehaye, M. (1951). Autobiography of a Schizophrenic Girl. New York: Signet.
Segal, G. (1991). “Defence of a reasonable individualism.” Mind 100: 485-493.
Segal, G. (1997). “Synaesthesia: Implications for the modularity of mind.” In Baron-Cohen and Harrison (1997).
Sellars, W. (1956). “Empiricism and the philosophy of mind.” In H. Feigl and M. Scriven (eds.), Minnesota Studies in the Philosophy of Science, vol. 1: The Foundations of Science. Minneapolis: University of Minnesota Press.
Shadlen, M.N. and W.T. Newsome (1998). “The variable discharge of cortical neurons: Implications for connectivity, computation, and information coding.” Journal of Neuroscience 18: 3870-3896.
Shagrir, O. (2001). “Content, computation and externalism.” Mind 110: 369-400.
Shanks, D.R. and M.F. St. John (1994). “Characteristics of dissociable human learning systems.” Behavioral and Brain Sciences 17: 367-447.
Shapiro, S. (2001). “Modeling and normativity.” Agora – Papeles de Filosofia 20: 159-173.
Sharpsteen, D. and L. Kirkpatrick (1997). “Romantic jealousy and adult romantic attachment.” Journal of Personality and Social Psychology 72: 627-640.
Sheets-Johnstone, M. (1999). The Primacy of Movement. Amsterdam: John Benjamins.
Sherry, D.F. and D.L. Schacter (1987). “The evolution of multiple memory systems.” Psychological Review 94: 439-454.
Shevrin, H. (1992). “The Freudian unconscious and the cognitive unconscious: Identical or fraternal twins?” In J. Barron, M. Eagle, and D. Wolitzky (eds.), Interface of Psychoanalysis and Psychology. Washington, DC: American Psychological Association.
Shih, J.C., K. Chen, and M.J. Ridd (1999). “Monoamine oxidase: From genes to behavior.” Annual Review of Neuroscience 22: 197-217.
Shimamura, A.P. and L.R. Squire (1986). “Memory and metamemory: A study of the feeling-of-knowing phenomenon in amnesic patients.” Journal of Experimental Psychology: Learning, Memory, and Cognition 12: 452-460.
Shoemaker, S. (1963). Self-Knowledge and Self-Identity. Ithaca, NY: Cornell University Press.
Shoemaker, S. (1991). “Qualia and consciousness.” Mind 100: 507-524.
Shoemaker, S. (1996). The First-Person Perspective and Other Essays. Cambridge: Cambridge University Press.
Shoemaker, S. (2003). “Consciousness and co-consciousness.” In A. Cleeremans (ed.), The Unity of Consciousness: Binding, Integration and Dissociation. Oxford: Oxford University Press.
Siegelmann, H.T. (1999). Neural Networks and Analog Computation: Beyond the Turing Limit. Boston, MA: Birkhäuser.
Singer, W. (1993). “Synchronization of cortical activity and its putative role in information processing and learning.” Annual Review of Physiology 55: 349-374.
Smart, J.J.C. (1959). “Sensations and brain processes.” The Philosophical Review 68: 141-156.
Smilansky, S. (2001). “From nature to illusion.” Proceedings of the Aristotelian Society 101: 71-95.
Smilek, D. and M.J. Dixon (2002). “Towards a synergistic understanding of synaesthesia: Combining current experimental findings with synaesthetes’ subjective descriptions.” Psyche 8 (http://psyche.cs.monash.edu.au/v8/psyche-8-01smilek.hml).
Smith, A.D. (2002). The Problem of Perception. Cambridge, MA: Harvard University Press.
Smolensky, P. (1995). “On the projectable predicates of connectionist psychology: A case for belief.” In C. Macdonald and G. Macdonald (eds.), Connectionism: Debates on Psychological Explanation. Oxford: Blackwell.
Solomon, R. (1977). The Passions. Garden City, NY: Anchor Books.
Solomon, R. (1993). “The philosophy of emotions.” In M. Lewis and J. Haviland (eds.), Handbook of Emotions. New York: Guilford Press.
Solomon, R. (2003). “Emotions, thoughts, and feelings: What is a ‘cognitive theory’ of the emotions and does it neglect affectivity?” In A. Hatzimoysis (ed.), Philosophy and the Emotions, Royal Institute of Philosophy Supplement 52. Cambridge: Cambridge University Press.
Spelke, E. (1995). “Discussion.” In D. Sperber, D. Premack, and A.J. Premack (eds.), Causal Cognition: A Multidisciplinary Debate. Oxford: Clarendon Press.
Spence, D. (1994). The Rhetorical Voice of Psychoanalysis. Cambridge, MA: Harvard University Press.
Sperber, D. (1994a). “The modularity of thought and the epidemiology of representation.” In L.A. Hirschfeld and S.A. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture. Cambridge: Cambridge University Press.
Sperber, D. (1994b). “Understanding verbal understanding.” In J. Khalfa (ed.), What is Intelligence? Cambridge: Cambridge University Press.
Sperber, D. (1996). Explaining Culture: A Naturalistic Approach. Oxford: Blackwell.
Sperber, D. (2000). “Metarepresentations in an evolutionary perspective.” In Id. (ed.), Metarepresentations: A Multidisciplinary Perspective. Oxford: Oxford University Press.
Sperber, D. (2005). “‘Mirror neurons’ or ‘concept neurons’?” (http://www.interdisciplines.org/mirror/papers/1).
Sperber, D., F. Cara, and V. Girotto (1995). “Relevance theory explains the selection task.” Cognition 57: 31-95.
Sperber, D. and D. Wilson (1986). Relevance: Communication and Cognition. Oxford: Blackwell.
Sperber, D. and D. Wilson (1995). Relevance: Communication and Cognition, 2nd ed. Oxford: Blackwell.
Sperber, D. and D. Wilson (2004). “Relevance theory.” In L.R. Horn and G. Ward (eds.), The Handbook of Pragmatics. Oxford: Blackwell.
Spinner, B. (1981). “Subject’s access to cognitive processes: Demand characteristics and verbal report.” Journal of Theory of Social Psychology 11: 31-52.
Stalnaker, R. (1984). Inquiry. Cambridge, MA: MIT Press.
Stephan, A. (1999). “Are animals capable of concepts?” Erkenntnis 51: 79-92.
Stephens, G.L. and G. Graham (2004). “Reconceiving delusion.” International Review of Psychiatry 16: 236-241.
Sterelny, K. (2003). Thought in a Hostile World. Oxford: Blackwell.
Stich, S. (1978). “Belief and subdoxastic states.” Philosophy of Science 45: 499-518.
Stich, S. (1979). “Do animals have beliefs?” Australasian Journal of Philosophy 57: 15-28.
Stich, S. (1983). From Folk Psychology to Cognitive Science. Cambridge, MA: MIT Press.
Stich, S. (1996). Deconstructing the Mind. Oxford: Oxford University Press.
Stich, S. (1999). “Eliminative materialism.” In R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
Stich, S. and I. Ravenscroft (1994). “What is folk psychology?” Cognition 50: 447-468.
Stich, S. and S. Nichols (1995). “Second thoughts on simulation.” In M. Davies and T. Stone (eds.), Folk Psychology. Oxford: Blackwell.
Strange, D., M.P. Gerrie, and M. Garry (2005). “A few seemingly harmless routes to a false memory.” Cognitive Processing 6.
Strawson, G. (1986). Freedom and Belief. Oxford: Oxford University Press.
Strawson, G. (1994). Mental Reality. Cambridge, MA: MIT Press.
Strawson, G. (2005). “Free will.” In E. Craig (ed.), The Shorter Routledge Encyclopedia of Philosophy. London: Routledge.
Strawson, P.F. (1959). Individuals: An Essay in Descriptive Metaphysics. London: Methuen.
Strawson, P.F. (1962). “Freedom and resentment.” Proceedings of the British Academy 48: 1-25.
Strawson, P.F. (1966). The Bounds of Sense: An Essay on Kant’s “Critique of Pure Reason”. London: Methuen.
Strawson, P.F. (1971). “Identifying reference and truth-values.” In Id., Logico-Linguistic Papers. London: Methuen.
Strawson, P.F. (1998). “Reply to David Pears.” In L.E. Hahn (ed.), The Philosophy of P.F. Strawson. La Salle: Open Court.
Stroud, B. (1996). “The charm of naturalism.” Proceedings of the American Philosophical Association 70: 43-55.
Suddendorf, T. (1999). “The rise of metamind.” In M.C. Corballis and S.E.G. Lea (eds.), The Descent of Mind: Psychological Perspectives on Hominid Evolution. Oxford: Oxford University Press.
Suddendorf, T. and M.C. Corballis (1997). “Mental time travel and the evolution of the human mind.” Genetic, Social, and General Psychology Monographs 123: 133-167.
Suddendorf, T. and A. Whiten (2001). “Mental evolution and development: Evidence for secondary representation in children, great apes, and other animals.” Psychological Bulletin 127: 629-650.
Suengas, A.G. and M.K. Johnson (1988). “Qualitative effects of rehearsal on memories for perceived and imagined complex events.” Journal of Experimental Psychology: General 117: 377-389.
Sutton, J. (2002). “Cognitive conceptions of language and the development of autobiographical memory.” Language and Communication 22: 375-390.
Sutton, J. (2003). “Memory.” In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (http://plato.stanford.edu/archives/spr2003/entries/memory/).
Sutton, J. (2004). “Representation, reduction, and interdisciplinarity in the sciences of memory.” In H. Clapin, P. Staines, and P. Slezak (eds.), Representation in Mind. Amsterdam: Elsevier.
Sutton, J. (2006). “Exograms and interdisciplinarity: History, the extended mind, and the civilizing process.” In R. Menary (ed.), The Extended Mind. Aldershot: Ashgate.
Tager-Flusberg, H. (2000). “Language and understanding minds: Connections in autism.” In S. Baron-Cohen, H. Tager-Flusberg, and D.J. Cohen (eds.), Understanding Other Minds: Perspectives from Developmental Cognitive Neuroscience. Oxford: Oxford University Press.
Taube, M. (1961). Computers and Common Sense: The Myth of Thinking Machines. New York: Columbia University Press.
Taylor, C. ([1977] 1982). “Responsibility for self.” Reprinted in G. Watson (ed.), Free Will. Oxford: Oxford University Press.
Taylor, J.G. (1962). The Behavioural Basis of Perception. New Haven, CT: Yale University Press.
Taylor, J.S. (2005). “Introduction.” In Id., Personal Autonomy: New Essays on Personal Autonomy and Its Role in Contemporary Moral Philosophy. Cambridge: Cambridge University Press.
Taylor, R. (1966). Action and Purpose. Englewood Cliffs, NJ: Prentice-Hall.
Thelen, E. and L. Smith (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.
Thurber, J. (1985). 92 Stories. New York: Harper & Row.
Tomberlin, J. (ed.) (1992). Philosophical Perspectives, 6: Ethics. Atascadero, CA: Ridgeview Publishing Company.
Tononi, G. and G.M. Edelman (1998). “Consciousness and complexity.” Science 282: 1846-1851.
Tooby, J. and L. Cosmides (1992). “The psychological foundations of culture.” In J. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind. Oxford: Oxford University Press.
Trautteur, G. (2005). “Beyond the super-Turing snare: Analog computation and digital virtuality.” In S.B. Cooper, B. Löwe, and L. Torenvliet (eds.), New Computational Paradigms. Berlin: Springer.
Trenholme, R. (1994). “Analog simulation.” Philosophy of Science 61: 115-131.
Trevarthen, C. (1974). “Functional relations of disconnected hemispheres with the brain stem, and with each other: Monkey and man.” In M. Kinsbourne and W. Lynn Smith (eds.), Hemispheric Disconnection and Cerebral Function. Springfield, IL: Charles C. Thomas.
Trope, Y. and A. Liberman (1996). “Social hypothesis testing: Cognitive and motivational mechanisms.” In E.T. Higgins and A. Kruglanski (eds.), Social Psychology: Handbook of Basic Principles. New York: Guilford Press.
Trope, Y., B. Gervey, and N. Liberman (1997). “Wishful thinking from a pragmatic hypothesis-testing perspective.” In M. Myslobodsky (ed.), The Mythomanias: The Nature of Deception and Self-Deception. Mahwah, NJ: Erlbaum.
Tulving, E. (1985). “Memory and consciousness.” Canadian Psychology 26: 1-12.
Tulving, E. (2002). “Episodic memory: From mind to brain.” Annual Review of Psychology 53: 1-25.
Turing, A.M. ([1936] 1965). “On computable numbers, with an application to the Entscheidungsproblem.” Reprinted in M. Davis (ed.), The Undecidable. Hewlett, NY: Raven Press.
Tye, M. (1995). Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. Cambridge, MA: MIT Press.
Tye, M. (2003). Consciousness and Persons. Cambridge, MA: MIT Press.
Umiltà, M.A., E. Kohler, V. Gallese, L. Fogassi, L. Fadiga, C. Keysers, and G. Rizzolatti (2001). “‘I know what you are doing’: A neurophysiological study.” Neuron 32: 91-101.
Unger, P.K. (1979). “I do not exist.” In G. Macdonald (ed.), Perception and Identity. Ithaca: Cornell University Press.
Unger, P.K. (2004). “The mental problems of the many.” In D.W. Zimmerman (ed.), Oxford Studies in Metaphysics, Vol. 1. Oxford: Oxford University Press.
van Cleve, J. (1999). Problems from Kant. Oxford: Oxford University Press.
van Gelder, T.J. (1995). “What might cognition be, if not computation?” The Journal of Philosophy 92: 345-381.
van Gelder, T.J. (1998a). “The dynamical hypothesis in cognitive science.” Behavioral and Brain Sciences 21: 1-14.
van Gelder, T.J. (1998b). “Disentangling dynamics, computation, and cognition.” Behavioral and Brain Sciences 21: 40-47.
van Gelder, T.J. (1999). “Dynamic approaches to cognition.” In R. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
van Gelder, T.J. and R. Port (1995). “It’s about time: An overview of the dynamical approach to cognition.” In R. Port and T. van Gelder (eds.), Mind as Motion. Cambridge, MA: MIT Press.
van Inwagen, P. (1983). An Essay on Free Will. Oxford: Oxford University Press.
van Inwagen, P. (1989). “When is the will free?” In Tomberlin (1992).
van Inwagen, P. (1994). “When is the will not free?” Philosophical Studies 75: 95-114.
van Inwagen, P. (1998). “The mystery of metaphysical freedom.” In P. van Inwagen and D. Zimmerman (eds.), Metaphysics: The Big Questions. Oxford: Blackwell.
van Inwagen, P. (2002). “What do we refer to when we say ‘I’?” In R.M. Gale (ed.), The Blackwell Guide to Metaphysics. Oxford: Blackwell.
Varela, F.J., E. Thompson, and E. Rosch (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
von der Malsburg, C. (1986). “Am I thinking assemblies?” In G. Palm and A. Aertsen (eds.), Proceedings of the 1984 Trieste Meeting on Brain Theory. Heidelberg: Springer Verlag.
von Eckardt, B. (2001). “Multidisciplinarity and cognitive science.” Cognitive Science 21: 453-470.
von Neumann, J. (1958). The Computer and the Brain. New Haven, CT: Yale University Press.
Vuilleumier, P. (2004). “Anosognosia: The neurology of beliefs and uncertainties.” Cortex 40: 9-17.
Vygotskij, L.S. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
Wager, A. (1999). “The extra qualia problem: Synaesthesia and representationism.” Philosophical Psychology 12: 263-281.
Wager, A. (2001). “Synaesthesia misrepresented.” Philosophical Psychology 14: 347-351.
Wallman, J. (1992). Aping Language. Cambridge: Cambridge University Press.
Walter, H. ([1998] 2001). Neurophilosophy of Free Will: From Libertarian Illusions to a Concept of Natural Autonomy. Cambridge, MA: MIT Press.
Walter, H., M. Adenzato, A. Ciaramidaro, I. Enrici, L. Pia, and B. Bara (2004). “Understanding intentions in social interaction: The role of the anterior paracingulate cortex.” Journal of Cognitive Neuroscience 16: 1854-1863.
Ward, J. (2004). “Emotionally mediated synaesthesia.” Cognitive Neuropsychology 21: 761-772.
Wason, P.C. (1968). “Reasoning about a rule.” Quarterly Journal of Experimental Psychology 20: 273-281.
Wason, P.C. and P.J. Brooks (1979). “THOG: The anatomy of a problem.” Psychological Research 41: 79-90.
Watson, G. ([1975] 1982). “Free agency.” In Watson (1982).
Watson, G. (ed.) (1982). Free Will. Oxford: Oxford University Press.
Watson, G. (1987). “Free action and free will.” Mind 96: 145-172.
Weatherford, R. (1991). The Implications of Determinism. London: Routledge.
Wegner, D. (2002). The Illusion of Conscious Will. Cambridge, MA: MIT Press.
Wegner, D.M. and Commentators (2004). Behavioral and Brain Sciences 27: 649-692.
Weihrauch, K. (2000). Computable Analysis: An Introduction. Berlin: Springer.
Weiskrantz, L. (1986). Blindsight: A Case Study and Implications. Oxford: Oxford University Press.
Welzer, H. and H.J. Markowitsch (2005). “Towards a bio-psycho-social model of autobiographical memory.” Memory 13: 63-78.
Wetherick, N.E. and K.J. Gilhooly (1990). “Syllogistic reasoning: Effects of premise order.” In K. Gilhooly, M. Keane, R. Logie, and G. Erdos (eds.), Lines of Thinking, Vol. 1. Chichester: Wiley, 99-108.
Wetherick, N.E. and K.J. Gilhooly (1995). “‘Atmosphere’, matching, and logic in syllogistic reasoning.” Current Psychology 14: 169-178.
Wheelock Jr., A.K. (1977). Perspective, Optics, and Delft Artists Around 1650. New York: Garland.
White, S.L. (2004). “Subjectivity and the agential perspective.” In De Caro and Macarthur (2004).
Whiten, A. (ed.) (1991). Natural Theories of Mind. Oxford: Blackwell.
Whiten, A. (2000). “Chimpanzee cognition and the question of mental re-presentation.” In D. Sperber (ed.), Metarepresentations: A Multidisciplinary Perspective. Oxford: Oxford University Press.
Whiten, A. and R.W. Byrne (1988). “Tactical deception in primates.” Behavioral and Brain Sciences 11: 233-273.
Whiten, A. and R.W. Byrne (1991). “The emergence of metarepresentation in human ontogeny and primate phylogeny.” In A. Whiten (ed.), Natural Theories of Mind: Evolution, Development, and Simulation of Everyday Mind-Reading. Oxford: Blackwell.
Whiten, A. and R.W. Byrne (eds.) (1997). Machiavellian Intelligence II: Extensions and Evaluations. Cambridge: Cambridge University Press.
Whyte, L.L. (1960). The Unconscious Before Freud. New York: Basic Books.
Widerker, D. and M. McKenna (eds.) (2002). Moral Responsibility and Alternative Possibilities. Aldershot: Ashgate Publishing.
Wiener, N. (1948). Cybernetics or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.
Williams, B. (1978). Descartes: The Project of Pure Enquiry. Atlantic Highlands, NJ: Humanities Press.
Wilson, M. (2002). “Six views of embodied cognition.” Psychonomic Bulletin & Review 9: 625-636.
Wilson, R.A. (1999). “Philosophy: Introduction.” In R.A. Wilson and F. Keil (eds.), The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press.
Wilson, R.A. (2004). Boundaries of the Mind: The Individual in the Fragile Sciences. Cambridge: Cambridge University Press.
Wilson, R.A. (2005a). “Collective memory, group minds, and the extended mind thesis.” Cognitive Processing 6.
Wilson, R.A. (2005b). “Introduction: Philosophy of psychology.” In S. Sarkar and J. Pfeiffer (eds.), The Philosophy of Science: An Encyclopedia. New York: Routledge.
Wilson, T.D. (1985). “Strangers to ourselves: The origins and accuracy of beliefs about one’s own mental states.” In J.H. Harvey and G. Weary (eds.), Attribution in Contemporary Psychology. New York: Academic Press.
Wilson, T.D. (2002). Strangers to Ourselves: Discovering the Adaptive Unconscious. Cambridge, MA: Belknap Press.
Wilson, T.D. and J. Schooler (1991). “Thinking too much: Introspection can reduce the quality of preferences and decisions.” Journal of Personality and Social Psychology 60: 181-192.
Wilson, T.D., D. Dunn, L. Kraft, and D. Lisle (1989). “Introspection, attitude change, and attitude-behavior consistency: The disruptive effects of explaining why we feel the way we do.” In L. Berkowitz (ed.), Advances in Experimental Social Psychology, Vol. 22. New York: Academic Press.
Wilson, T.D., D. Kraft, and D. Dunn (1989). “The disruptive effects of explaining attitudes: The moderating effects of knowledge about the attitude object.” Journal of Experimental Social Psychology 25: 379-400.
Winner, E. and H. Gardner (1977). “The comprehension of metaphor in brain-damaged patients.” Brain 100: 510-535.
Winograd, T. (1978). Language as a Cognitive Process: Syntax. Reading, MA: Addison-Wesley.
Wittgenstein, L. (1951). Tractatus Logico-Philosophicus. New York: Humanities Press.
Wittgenstein, L. (1953). Philosophical Investigations. Oxford: Blackwell.
Wolf, S. (1990). Freedom Within Reason. New York: Oxford University Press.
Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media.
Wollen, K.A. and F.T. Ruggiero (1983). “Colored-letter synaesthesia.” Journal of Mental Imagery 7: 83-86.
Woodworth, R.S. and S.B. Sells (1935). “An atmosphere effect in syllogistic reasoning.” Journal of Experimental Psychology 18: 451-460.
Wright, C. (1995). “Intuitionists are not (Turing) machines.” Philosophia Mathematica 3: 86-102.
Young, A.W. and K.M. Leafhead (1996). “Betwixt life and death: Case studies of the Cotard delusion.” In P. Halligan and J. Marshall (eds.), Method in Madness: Case Studies in Cognitive Neuropsychiatry. Hove, Sussex: Psychology Press.
Zajonc, R. (1980). “Feeling and thinking: Preferences need no inferences.” American Psychologist 35: 151-175.
Zajonc, R. (1984). “On the primacy of affect.” American Psychologist 39: 117-123.
Zalla, T. (1996). Unité et multiplicité de la conscience: une étude critique des théories contemporaines à la lumière d’une hypothèse modulariste. PhD Dissertation, École Polytechnique, Paris.
Zalla, T. and A.P. Palma (1995). “Feeling of knowing and phenomenal consciousness.” Behavioral and Brain Sciences 18: 271-272.
Zelazo, P.D. (1996). “An age-related dissociation between knowing rules and using them.” Cognitive Development 11: 37-63.
Zelazo, P.D. (2004). “The development of conscious control in childhood.” Trends in Cognitive Sciences 8: 12-17.
INDEX OF NAMES

Aguilar, C., 108
Aristotle, 203, 264
Arnold, M., 94
Ayer, A.J., 229
Baars, B., 202
Baron-Cohen, S., 66
Bayne, T., 224n20, 267n27
Bechtel, W., 41-44, 48n21
Bellows, N., 176-177
Bermúdez, J., 113
Berthoz, A., 299
Bickle, J., 315
Blake, R., 74
Block, N., 106, 190, 193-194
Bowlby, J., 151
Bradley, B., 246-248
Britten, K., 315
Broad, C.D., 94-95
Brooks, P.J., 128
Burge, T., 14
Bush, V., 45-46
Byrne, R., 299
Cacioppo, J., 309, 311-312, 314
Campbell, J., 82, 84-86
Campbell, S., 89
Cartwright, N., 263
Chalmers, D., 212-213, 217-218
Changeux, J.-P., 194
Cheney, D.L., 110
Chisholm, R., 232, 235-237
Chomsky, N., 7, 132, 134-136, 276
Churchland, P.M., 5, 16-18, 22n75
Churchland, P.S., 16-18, 22n81, 35n23, 44, 45, 47n4
Clark, A., 15-16, 211-213, 217-218, 221-222, 297-298, 307n10
Clore, G., 94
Cosmides, L., 296-297
Coulson, S., 141
Craik, K., 7, 19n20, 44-45
Crick, F., 192
Csibra, G., 292
Cumming, B., 205-206
Cytowic, R., 65
Czoty, P.W., 311
Damasio, A., 192, 310
Darwin, C., 12, 271
Davidson, D., 94, 272-274, 276, 279
Dehaene, S., 194
Dennett, D.C., 12, 222, 231, 238, 279, 297-299
Descartes, R., 5, 54, 148, 229-230, 232, 271
de Villiers, J.A., 304
de Villiers, P.A., 304
Dixon, M.J., 67
Dretske, F., 110, 113, 190
Dunbar, R., 299
Durgin, F., 243
Durkheim, E., 296
Edelman, G., 133
Ekman, P., 101
Engel, S., 87
Eslinger, P.J., 310
Evans, G., 216
Fivush, R., 83
Flanagan, O., 229, 259, 265-266
Fodor, J.A., 5, 10-12, 14, 16-18, 23, 37-38, 105-106, 113, 119, 191
Frege, G., 133, 251
Freud, S., 12, 147-155, 169
Friedrich, J., 162-163
Gallese, V., 287, 289-290, 300-301
Gandy, R., 39
Garry, M., 89
Gelman, R., 243
Gibbs, R.W., 139
Gibson, J.J., 8, 15, 55-56, 58-59, 243, 249
Gilbert, M., 90
Gildea, P., 139
Gilhooly, K.J., 124
Giros, B., 316
Girotto, V., 128-129
Glucksberg, S., 139
Goldman, A., 233, 237, 284-285, 287-289, 292-293
Gordon, R., 284-286, 289-290, 292
Gray, J., 72-73, 76-77
Grice, P., 139, 143n24, 305
Griffiths, P.E., 101
Halbwachs, M., 87, 89
Hanson, N.R., 252
Harding, C.L., 247-248
Harrison, J.E., 66
Hartshorne, H., 175
Hauser, L., 6
Heider, F., 4
Hirst, W., 88
Hoerl, C., 82, 84-86
Hubbard, E.M., 68-69, 74
Hume, D., 157, 243, 252, 255-256, 260
Humphrey, N.K., 299
Hutchins, E., 90
Isen, A.M., 174
Jacob, P., 291
James, W., 5, 198n29, 233, 237
Janet, P., 148
Jeannerod, M., 291
Johnson, M.K., 194
Johnson-Laird, P.N., 45, 123
Jung, C.G., 150
Kahneman, D., 155
Kane, R., 262-263
Kant, I., 232, 264
Kaufman, L., 243
Kelvin, W.T., 45
Kenny, A., 94, 96, 230
Kim, J., 312
Kirkpatrick, L., 163
Kitcher, P., 90-91
Koch, C., 192
Kuhn, T., 252
Lazarus, R., 96-98
Legrenzi, P., 128-129
Leibniz, W., 148
Leslie, A., 198n17, 283, 303
Levin, H., 174
Levy, J., 208-209
Liberman, N., 162
Lichtenberg, G., 234-237, 239
Locke, J., 241
Malcolm, N., 271-272, 275
Marcel, A., 206
Marks, J., 94
Marr, D., 9, 16, 37, 39, 42-44, 56, 59, 62
Mattingley, J.B., 74
May, M., 175
McCauley, R., 17
McCormack, T., 82, 85-86
McCulloch, W., 23
McDowell, J., 59
McGinn, C., 235
Medin, D., 108
Meini, C., 292
Mele, A., 159, 161-162
Metzinger, T., 231
Michotte, A., 243, 247
Milgram, S., 174
Millikan, R., 113
Morgan, D., 310-311
Moscovitch, M., 194
Murphy, G.L., 108
Nader, M., 310-316
Nagel, T., 190
Nelson, K., 83
Newell, A., 37
Newsome, W.T., 315
Nichols, S., 284
Nietzsche, F., 148
Nisbett, R., 13, 21n55, 176-178, 182n10, 182n12, 182n19, 183n22, 183n29, 183n32, 184n33, 184n41, 185n50
Noë, A., 59
O’Connor, T., 263
Origgi, G., 305
Ortony, A., 94
Palma, A., 193
Panksepp, J., 101
Parfit, D., 218, 220, 230-231, 234
Pavlov, I., 6
Penrose, R., 259
Perner, J., 302
Premack, D., 271, 299
Prigogine, I., 263
Prinz, J., 108
Putnam, H., 14, 55, 57-58, 61
Pylyshyn, Z., 11, 37-38
Quine, W.V.O., 10, 238, 265, 279-280
Ramachandran, V., 68-69, 74
Reese, E., 84
Rich, A.N., 74
Rips, L.J., 122, 124
Rizzolatti, G., 301
Romo, R., 315
Rorty, A., 165
Russell, B., 133, 153, 231, 272
Sacks, O., 248-250
Sartre, J.-P., 244
Schacter, D.L., 194, 196
Schank, R., 174
Schuman, H., 87
Schwartz, B., 87
Searle, J., 21n47, 35n11, 119, 281n14
Sejnowski, T., 22n81, 35n23, 44
Sellars, W., 256
Seyfarth, R.M., 110
Shannon, C., 46
Sharpsteen, D., 163
Sherry, D.F., 196
Shih, J.C., 316
Shoemaker, S., 196, 209n2, 235
Simmel, M., 4
Simon, H., 20n26, 37
Skinner, B.F., 6, 169
Solomon, R., 94-95
Spelke, E., 243
Sperber, D., 90, 137, 142n11, 198n17, 305
Stich, S., 5, 14, 19n3, 19n10, 21n55, 21n59, 284
Strawson, G., 235, 267n18
Strawson, P.F., 232, 237, 239, 264, 266n13, 272
Suddendorf, T., 302-303
Tager-Flusberg, H., 304
Thompson, E., 59
Thorndike, E.L., 6
Tolman, E.C., 7
Tooby, J., 296-297
Trevarthen, C., 208-209
Trope, Y., 162, 166n10
Tulving, E., 82, 195
Turing, A., 8, 11, 30, 38-40, 47n4, 47n6, 48n7, 138
Tversky, A., 155
Tye, M., 80n49, 190, 198n4
Umiltà, C., 194, 198n29
Unger, P., 230, 234, 240n6
van Cleve, J., 236-237
van Gelder, T.J., 40-41, 43-44, 48n17, 48n18
van Inwagen, P., 230, 268n44
van Wezel, R., 315
von Helmholtz, H., 8
von Neumann, J., 17, 38-39
Vygotskij, L.S., 297-298, 306
Walter, H., 267n27, 292
Ward, J., 68
Wason, P., 125, 128
Watson, J., 5-6
Weiskrantz, L., 156
Wetherick, N.E., 124
Whiten, A., 299, 302-303
Williams, B., 236
Wilson, D., 142n11, 305
Wilson, R., 87
Wilson, T.D., 13-14, 21n55, 34n1, 178, 180, 184n33, 184n47, 184n48, 184n49
Wittgenstein, L., 106, 133, 157, 231
Woodruff, G., 299
Zalla, T., 193, 223n16
INDEX OF SUBJECTS

active externalism, 8, 15-16, 87, 212
  action-oriented representations, 16, 89
  external scaffolding, 297-298, 304
  see also extended cognition
affordances, 16, 243, 248-249, 252
animals, 7, 23, 84, 95, 97, 109-110, 113, 153, 170, 271-280, 299-303, 305-306
  belief, 271-280, 303
  Machiavellian intelligence, 299
  mindreading, 299
  secondary representation, 303
anti-individualism, 87
artificial intelligence (AI), 9, 15, 37, 131-132, 134, 136
autonomous agency, 169-181
behaviorism, 4-7, 16, 18, 101
  analytic, 6
  eliminative, 5-6
  methodological, 6
categorization, 105-115, 201
  see also concepts
co-evolution, 17-18, 302, 304-306
  explanatory pluralism, 17-18
  of mindreading and language, 304-306
cognitive dissonance, 156
cognitive psychology, 3, 5-7, 9, 12-14, 18, 65, 86, 88, 90, 134, 154, 298
  as an anti-phenomenology, 12
  and cognitive science, 9
  history of, 3-9
  social, 88
cognitive science(s), 3-4, 9, 11, 13, 15-16, 34, 37-45, 47, 53, 55, 57, 58, 63, 81, 86-87, 90-91, 93, 105-106, 131-134, 136-138, 153, 211-213, 295, 297
  classical, 55
  philosophy of, 3, 81
  post-classical, 15
  situated, 211, 213
cognitivism, 7-8, 136
cognitivism (about emotions), 93-102
  explanatory vs. descriptive, 95-99
  strong, 95-97, 101
  weak, 93-97
computation, 8-9, 17, 23-35, 37-47, 56
  analog, 30, 38-39, 44-46
  Church-Turing thesis, 11, 38-39, 45
  digital, 11, 24, 30, 37-41, 44-46
  Turing-computable functions, 39-40
  Turing machines, 8, 11, 38, 40
computational explanation, 23-34
  modeling view of, 26-28
  semantic view of, 24-25
computational functionalism, 8-9, 14, 16-17
  individualism, 14-15, 89
  Marr’s theory of vision, 9, 16, 56, 59, 62
  representational and computational theory of mind (RCTM), 11-17
computational psychology, 9-10, 16-18, 20n34, 63
computational theory of mind (CTM), 11-12, 23-27, 29-30, 32-34
  see also computational functionalism
computationalism, 9-10, 37-47, 55, 58, 60-61
  paradigm of the computer (PoC), 37-39, 41
  see also computational functionalism
concepts, 105-115, 135, 175, 271-273
  abstraction, 108, 110, 112, 114
  directedness, 108-110, 114
  informational atomism, 106-107
  mental, 283-284, 288, 290
  multimodality, 110, 114
  off-line processing, 112-114, 116n18
  see also categorization
connectionism, 16-17, 24, 30, 33, 37-38, 42, 44, 47, 48n16, 89-90
  the Churchlands, 16-18
  computations, 17
  eliminative, 16-18
  and propositional attitudes, 17n73
  representations, 17, 42
consciousness, 4-5, 12-13, 19n4, 24, 62, 147-157, 189-197, 198n4, 201-209, 213-215, 221-222, 223n15, 223n16, 224n20, 224n29, 231-240
  access (a-consciousness), 19n4, 190, 223n15
  binding problem, 191-192, 201
  DICE model, 194
  eliminativism of, 237-239
  and evolution, 195-197
  feeling of knowing, 193
  global workspace models of, 192, 194, 197, 204
  modularity, 191-193
  phenomenal (p-consciousness), 4, 55-56, 58, 62, 71, 74-76, 94, 189-197
  and the unconscious, 12, 147-157
  unity of, 201-209
  see also source monitoring
dynamical systems, 15, 37, 40-41, 43-44, 47
  and Marr’s computational theory, 42-44
  Watt’s regulator, 41
eliminativism, 4-5, 18
  see also eliminative behaviorism, eliminative connectionism, eliminativism of consciousness
emergence, 214, 220, 224n22, 224n30, 225n50
  emergentism, 223n4
emotions, 68, 70-71, 83-86, 93-102, 150, 154, 163-165, 285, 301, 310
  affect program theory, 101
  core relational themes, 96
  face-based, 285
  see also cognitivism
experience, 229-240, 243-253
  rich, 243-253
explanation
  computational, 23-34, 39-44
  dynamicist, 15, 40-44
  folk psychological, 3-4
  functional, 28-29
  Marr’s model of, 9, 42
  mechanistic, 15, 23-34, 41, 43-44
  psychological, 6, 14, 16, 20n34
extended cognition, 212, 215, 222
folk psychology, xv-xvi, 3-22, 71, 172, 176, 178, 211, 284, 288
  see also eliminativism, mindreading, simulation-theory
free will, 169, 181n2, 182n9, 258-270
free will, theories of
  compatibilism, 257-258, 261, 263-264
  hard determinism, 258
  incompatibilism, 257-258, 261
  interactionism, 258, 262-264
  libertarianism, 258, 259-261, 262-263, 267n33
  philosophical isolationism, 258, 264-266
  scientific isolationism, 258, 259-261
  skepticism, 258-261, 264-266, 267n18
functionalism, 71-72, 79n29
  computational, 8-9, 15-17
  functional explanation, 28-29
  see also synaesthesia
generalization
  and concepts, 116n21
  existential, 281n11, 282n35, 290
  about propositional attitudes, 10
global workspace, 192, 194, 197, 204
homunculus fallacy, 6, 10-11
intentional stance, 299
intentionality, 12, 35n11, 105, 119, 253, 272, 280, 281n11
  see also propositional attitudes
introspection, 5, 13, 65, 147, 150, 153-157, 172, 176, 178-180, 184n44, 204, 232-233
  see also self-knowledge
language, 7, 84, 86, 94-95, 107-109, 113, 118, 133-145, 148-149, 155, 211-213, 221-222, 249-253, 271-272, 276-280, 296-298, 302
  acquisition, 7, 221
  and artificial intelligence (AI), 131-132, 134, 136
  and mindreading, 300, 302-304
  comprehension, 131-143
  event-related brain potentials, 140
  figurative language, 138-141
  linguistic determinism, 294
  linguistic relativism, 294
  metaphor, 37-38, 86, 128, 132-134, 138-141, 245
  Natural Language Understanding, 136-137
  of thought (LoT), 10, 12, 16-17, 22n74, 38, 113, 137
  poverty of the stimulus argument, 7-8, 307n10
  universal grammar, 7, 134
  see also co-evolution
manifest image, 256-258, 264-266
mechanistic explanation, 15, 24, 27-34, 41, 43-44, 313-314
  and computational explanation, 28-30
  in neuroscience, 30-32, 314
memory, 81-92, 154-155
  autobiographical, 81-85, 88, 90, 195
  false recollections, 150-151
  social, 81, 86-89
mind
  extended, 81, 87, 211-223, 225n44, 225n46, 225n49
  personal, 213-214, 216-224, 225n20, 226n25, 226n38, 227n43
  social, 295-308
mindreading, 3-5, 13, 299
  intentional stance, 299
  metarepresentation, 298-303, 305
  secondary representation, 302-303
  theory of mind mechanism (ToMM), 283
  theory-theory, 13
  see also folk psychology, simulation-theory
neuroscience
  computational, 16, 18
  mechanistic explanation, 30-34, 313
  social behavior, 309-316
  spikes vs. symbols, 31-32
neurosis, 151
perception, 249-250, 254
personal mind, 211-225
phenomenology, 5, 13, 59-61, 66-80, 154, 204-205, 207, 209, 213, 220, 222, 235-236, 240, 244-245, 248-253
philosophy
  of cognitive science, 3
  of neuroscience, 3
  of psychology, 3, 14
  of science, 3, 17, 82, 252
propositional attitudes, 4-5, 10, 12-13, 18, 22n75, 93-95, 190, 218, 272, 280, 285, 289-300, 302-304
psychoanalysis, 147-158
psychology
  autonomy of, 16-17
  and neuroscience, 16-18, 23-36, 311-318
  see also cognitive, computational, folk, social psychology
rationality, 10, 94, 159, 213, 220-221, 223, 244, 253, 309
rationalization, 14, 156, 171-174, 181n6
realism, 241
  direct, 53-60, 62
  indirect, 53-60, 62
  intentional, 14, 18
reasoning, 10-12, 85, 113-114, 117-131, 155, 157, 170, 178, 180, 190-193, 197, 216-217, 221, 286, 288, 290-291, 293, 303-305
  competence errors, 121-124, 129
  performance errors, 121-123, 125-126, 128
  selection task, 124-125
  THOG problem, 128-129
reduction(ism), 8, 18, 34, 82, 90, 94-95, 196, 264, 310, 312-314
  approximate microreduction, 17
  in the philosophy of emotion, 94-95
reflective equilibrium, 18, 262
representational theory of mind (RTM), 10-12
rich perception, 241-252
  transcendental argument for, 241-242, 247, 250-251
scientific image, 256-258, 263-266, 268n44
self, 14, 83-86, 153-157, 215, 221-222, 229-242, 240n18, 252-253, 287, 300, 303
  physical self-monitoring, 153
  recognition through the mirror, 303
  transcendental argument, 243-244, 249-251
  see also self-awareness, self-denial
self-awareness, 5, 12, 147, 153-155
  complete, 153-154
  non-reflective, 233
self-deception, 159-167
  FTL theory of, 162-164
  straight cases of, 159-161, 163
  twisted cases of, 159, 163
self-denial, 229-242
self-knowledge, 5, 12
  see also introspection
simulation-theory, 45, 132-137, 283-294, 300-301
  embodied simulation, 301
  mirror neurons, 286-287, 289, 300
  moderate, 284-285, 289, 293n21
  radical, 284-286, 289
  we-centric space hypothesis, 287, 289-290
situationism, 173, 182n10, 183n27
social psychology, 4, 13, 81, 87, 152, 169-170, 172-181, 182n10
  attribution theory, 4-5
  fundamental attribution error, 176
  Standard Social Science Model, 296
  see also self-deception, FTL theory of
source monitoring, 189, 194-197, 223n16
split-brain syndrome, 13, 207-209
subpersonal, 3, 19n3, 58-61, 212, 214-218, 220, 222, 225n45, 247-248
synaesthesia, 65-80
  functionalism, 66, 71-73, 76-77
  McCollough after effect, 75
  McGurk effect, 70
  and modularity, 65-66
  vs. non-synaesthetic cross-modal illusions, 69-71, 73, 75-77
  Stroop test, 67, 74, 78n18, 78n19
testimony, 247-249, 250
token-identity theory, 16
type-identity theory, 16
unconscious (the), 12, 14, 21n48, 21n50, 147-158, 169-170, 174, 191, 196, 212, 224n20
  belief, 165-166
  blindsight, 156, 192
  content, 204, 206
  dichotic listening, 156
  emotion, 94, 97
  inference, 8, 55
  post-hypnotic suggestion, 156, 248
  rationalisation, 156
  visual perception, 54
  see also self-deception
unity of consciousness, 189, 191-192, 201-210, 244
  availability thesis, 204-207, 209
  consistency thesis, 202-204
  Dimensional Change Card Sort task, 205-206
  split-brain syndrome, 207
  see also split-brain syndrome, unity of mind, unity of experience
unity of experience, 235-237
unity of mind, 211-225
vision, 8-9, 49n27, 53-64, 65, 67, 113, 202, 246, 248
  computational vision, 56, 58-60
  constructive theories, 55
  ecological optics, 8, 48n21, 55
  higher order invariants, 56
  see also affordances, unconscious inference