Consciousness
Controversies in Science

Animals and Science, Niall Shanks
The Evolution Wars, Michael Ruse
Homosexuality and Science, Vernon A. Rosario
Women’s History as Scientists, Leigh Ann Whaley

Forthcoming:
Cosmologies in Collision, George Gale
Experimenting on Humans, Susan E. Lederer
Extraterrestrial Life, Carol E. Cleland
Consciousness
A Guide to the Debates
Anthony Freeman
Santa Barbara, California • Denver, Colorado • Oxford, England
Copyright © 2003 by Anthony Freeman

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except for the inclusion of brief quotations in a review, without prior permission in writing from the publishers.

Library of Congress Cataloging-in-Publication Data

Freeman, Anthony, 1946–
Consciousness : a guide to the debates / Anthony Freeman.
p. cm.
Includes bibliographical references and index.
ISBN 1-57607-791-8 (alk. paper)
ISBN 1-57607-792-6 (e-book)
1. Consciousness. I. Title.
B808.9.F74 2003
126–dc21 2003011638

This book is also available on the World Wide Web as an e-book. Visit www.abc-clio.com for details.

ABC-CLIO, Inc.
130 Cremona Drive, P.O. Box 1911
Santa Barbara, California 93116-1911

This book is printed on acid-free paper.
Manufactured in the United States of America
For Jacqueline
Contents

Preface
1. The “Impossible” Science
2. The Biological Brain
3. From Light to Sight
4. The Conscious Brain
5. The Mind-Body Problem
6. The (Un)Conscious Computer
7. Embodied Consciousness
8. The Once and Future Self
9. Quantum Physics and Consciousness
10. Decision Time
11. Dreams, Visions, and Art
12. What Is It Like to Be Conscious?
References
Further Reading
Chronology
Glossary
Documents
Index
About the Author
Preface

Something is conscious if there is “something it is like to be” that thing. This surprising but widely accepted definition of consciousness was put forward by philosopher Thomas Nagel in 1974 in an article titled “What Is It Like to Be a Bat?” (see Document 5). This definition has three great virtues. First, it avoids the circularity found in most attempts to define consciousness (for example, to be conscious is to have perceptions; to have perceptions is to be aware; to be aware is to be conscious; etc.). Second, Nagel cleverly manages to imply a sense of personal experience, which for many people is the defining quality of consciousness, without actually claiming that consciousness is first-person by definition. This keeps his definition neutral in a major controversy in consciousness studies concerning its subjective and/or objective nature. And third, by concentrating on the verb “to be conscious” rather than the noun “consciousness,” Nagel has neatly sidestepped another divisive debate as to whether or not consciousness itself actually exists (Nagel 1974).

The importance of not overdefining the topic ahead of investigating it is routinely stressed by John Searle, another philosopher much involved in consciousness studies. He distinguishes between an analytic definition, which is arrived at only at the end of an investigation, and a commonsense definition, which comes at the start and serves to identify the target of the research program. He advises taking as a starting point a simple statement such as this one: Consciousness consists in those states of awareness that begin in the morning when we awake from a dreamless sleep and continue throughout the day until we fall asleep again, or fall into a coma, or die (Searle 1998).

In this book I look at something of the history and the present state of the scientific study of consciousness, thus broadly defined. Chapter 1 is a brief overview of the story of consciousness research from the ancient Greeks to the present day. It provides a map of the territory to be covered in detail in the rest of the book and serves to introduce some of the major landmarks that will feature in our story. In particular, it highlights the central problem of relating objective science to subjective experience. Chapters 2–4 deal with aspects of the brain, which is the part of the body most closely associated with consciousness. In Chapter 2, the focus is on its physical structure and function and the techniques developed over the years for studying them. Chapter 3 considers in detail the visual system and the way by which the physical fact of light falling on the eye becomes transfigured into the conscious experience of seeing color, shape, texture, movement, and all the many elements that make up our visual consciousness. Then Chapter 4 begins to tackle, from the scientific side, the big question of how physical events in the brain relate to conscious events in the mind. The same question is pursued further in Chapter 5, this time from the viewpoint of philosophy.

Attention then turns to a number of specific areas of interest that have emerged as the science of consciousness has developed. The fascination of conscious machines and chess-playing computers is discussed in Chapter 6, and then in Chapter 7 the emphasis shifts in the opposite direction, with a look at those scholars who argue that embodiedness is an essential feature of consciousness. This discussion leads naturally to the question of who we are, and Chapter 8 looks at the central role of memory in our identification of ourselves as conscious beings. Science takes center stage in Chapter 9 in the form of quantum physics, a subject that some believe to be closely entwined with consciousness at the deepest levels of reality. Among the problems that might be solved by quantum theory is the vexed question of how the experience of free choice is compatible with an orderly world that obeys the laws of physics. This paradox is explored in Chapter 10, along with related matters concerning our consciousness of time. In Chapter 11, I look at three areas of conscious experience that might appear to be at the opposite end of the spectrum from scientific study: the strange world of dreams, the heightened states of consciousness associated with certain religious practices and traditions, and the appreciation of artistic beauty. It turns out that none of these aspects of consciousness has escaped the attention of the neuroscientists. Then finally, in Chapter 12, we return to Thomas Nagel’s basic question, What is it like to be conscious? Here we pick up a number of philosophical matters left unresolved in Chapter 5 and consider some responses to the “hard problem” of consciousness: Why and how is there conscious experience in the universe at all?

I am grateful to my colleagues Keith Sutherland and Joseph Goguen at the Journal of Consciousness Studies for their encouragement to write this book and to a number of scholars who have read and commented on parts of the draft manuscript, especially Bernard Baars, Jean Burns, Rodney Cotterill, Stanley Krippner, Jonathan Lowe, Geraint Rees, Henry Stapp, and John G. Taylor.
1
The “Impossible” Science
Science and consciousness make strange bedfellows. We think of science as a series of precise, measured, repeatable experiments, all carefully ordered and recorded. Scientific issues might be hotly debated and the outcomes of research condemned or applauded, but the exercise itself is almost by definition a matter of detached, impartial, and objective study. Conscious experience is quite the reverse. My consciousness is an unending stream of sounds and colors and shapes and moods and feelings. Only with the greatest effort can my perception of these things be ordered and regulated, and they come to me quite unbidden. These experiences are quite at odds with the planned, unruffled procedures of the laboratory, and they are all my own. They are there when I am alone; they are there when I dream; and even in company, my experience is not yours, and yours is not mine. Where science is objective and public, conscious experience is subjective and private. So a “science of consciousness” is a contradiction in terms.

There, in a nutshell, is the case against the “impossible” science of consciousness. It is simple and obvious; who could deny its logic? Yet since the earliest stirrings of scientific study way back in ancient Greece, natural philosophers (as scientists were then called) have been fascinated by the nature of the conscious mind and its relation to the physical world. This book is about the ongoing battle between the logic that says there can be no scientific study of consciousness and the human spirit of inquiry that defies the logic and goes ahead with the study anyway.

Science—for all its boasted success—can still make spectacular mistakes. There are no assured results. The constant process of seeking, searching, asking, and questioning means that researchers will sometimes have to give up a cherished belief, even after it has been held for centuries, in the face of new and apparently incontrovertible
evidence, perhaps to see their new theories overthrown after only a few years. Physics, the most basic of all the natural sciences, provides an excellent example of such turns of fortune. The history of the atomic theory of matter demonstrates two things that are important for the present work. First, it illustrates the seesaw quality of scientific thinking, which should teach us never to dismiss an old idea just because it is out of fashion. And second, it introduces an aspect of today’s science that has an immediate bearing on consciousness and has forced scientists to take it seriously.
Atomic Theory: A Roller-coaster History

The story starts with the Greek philosopher Leucippus of Miletus (c. 440 B.C.E.). He proposed that all matter is composed of tiny particles of identical “stuff” existing in a vacuum of totally empty space and that physical objects get their different qualities according to the number, size, and arrangement of their constituent particles. It was an essential feature of his theory that although these particles might differ from one another in size, each one was in itself indestructible and indivisible (atomos in Greek). In consequence of this last feature, they were named atoms.

The atomic theory of matter was quickly championed by another philosopher, Democritus (460–361 B.C.E.), with whose name it is most often associated, and for a while it flourished. But after a century of dominance it fell into disrepute, because the famous Aristotle (384–322 B.C.E.), pupil of Plato and teacher of Alexander the Great, opposed it. Democritus and Aristotle agreed about a very strange prediction of atomic theory: if it were true, then all objects in a vacuum would fall at an equal rate, a feather as fast as a stone. They also agreed that in practice, heavy objects fall through the air faster than light objects. Where they disagreed was over the reason. Democritus said it was because the light objects were slowed down by collisions with atoms of air and that in a true vacuum (which he of course lacked the technology to produce) all objects would fall equally fast, as predicted by his theory. Aristotle said that would be absurd, and therefore the theory must be wrong. Because the question could not be settled experimentally, Aristotle’s influence carried the day.

Atomism had been knocked down and stayed on the canvas for nearly two millennia, but it was not knocked out. At the end of the sixteenth century, Galileo (1564–1642) proved Democritus right in the matter of equal rates of falling, and by the first decade of the nineteenth century, the English chemist John Dalton (1766–1844) was putting atomic theory right back on its feet. He proposed, on the
basis of careful experiments, that all physical substances were either “elements” or “compounds,” compounds being made up of two or more different elements (for example, water is a compound composed of the two elements hydrogen and oxygen). Furthermore, according to Dalton, his results would be explained if each element consisted of a cluster of identical basic particles (which he called atoms), each element having a basic particle of different weight. After 2,000 years, atomic theory was back in business.

For the next hundred years, this modernized version of atomism underpinned all chemistry and physics. It gave us the “ball and stick” type of molecular model, familiar to generations of students and used by James Watson and Francis Crick in the 1950s to crack the double-helix structure of DNA. Even today, such models still provide a useful description of the physical world at a certain level. But as early as the opening of the twentieth century, it was already clear that Dalton’s “atoms” were not indivisible after all. They were themselves composed of even tinier and more fundamental particles. So a new picture of the atom—like a miniature solar system, with minuscule electrons orbiting a central nucleus—has become another icon of modern physics. It is as instantly recognizable as the ball-and-stick molecule and well known to many who have not the slightest notion what the figure represents.

If that were all that had changed, then even though the chemist’s “atom” had ceased to be the smallest known particle, the essence of atomism—the idea that the physical world is composed of tiny indestructible bits of matter flying around in a void—would still have remained unscathed. But that was not all that changed. Further developments in the early twentieth century not only upset the newly restored atomic theory but also brought the question of consciousness to the heart of physical science.

First of all, Albert Einstein (1879–1955) showed in 1905 that tiny bits of matter are not indestructible after all: they can be converted into huge amounts of energy in accordance with the famous equation E = mc² (which, in English, says that the amount of energy produced by such a conversion equals the amount of material destroyed, multiplied by the speed of light and then multiplied by the speed of light again). Twenty years later came “quantum theory” and Erwin Schrödinger’s “wave equation,” which says that subatomic entities like electrons should not be thought of as “particles” at all. A particle would be a tiny bit of material flying through space, and at any instant it would have a precise position, speed, and direction. But quantum theory (at least one influential version of it) says that none of this is true of electrons and the like. At the subatomic quantum level,
we are told, the physical world is not made up of tiny individual bits and pieces in empty space but is a continuous interweaving whole, like the waves on the surface of the ocean. It is only when it is “observed” that this wavelike and ever-spreading continuum “collapses” and has the appearance of a particle in a fixed position. Atomism—championed by Democritus, scorned by Aristotle, and triumphantly revived by Dalton—seemed due for another spell in exile.

This tale illustrates well the dizzy lifestyle of a scientific hypothesis, but what does it have to do with consciousness? The answer lies in the idea that only when the quantum world is observed does it take definite shape. So long as nobody is looking, say some quantum theorists, the subatomic world is a vast array of possibilities, with nothing fixed or certain. Conscious observation is the event that turns this wealth of possibilities into the single actual world we know. It has to be granted that quantum theory and its interpretation are fraught with problems and disagreements, but this unlikely proposal remains a persistent theme in all the arguments and debates. If it is true, then conscious minds, far from being detached observers of a preexisting universe, are in a continual process of creating the very world they inhabit.

These are contentious issues, and their detailed role in the study of consciousness is discussed further in Chapter 9. For the moment, I note that they throw into question the assumption at the start of this chapter that science and the world it investigates are totally objective. That is sufficient to dent the claim that a science of consciousness is self-contradictory. For the remainder of this chapter, I trace the study of the mind from ancient times, through medieval Europe and the Enlightenment, to the beginnings of modern psychology at the end of the nineteenth century and the battle throughout the twentieth century to establish a scientific study of consciousness.
Body, Mind, and Soul in Ancient and Medieval Times

It is hard, if not impossible, to study pre-Enlightenment thinkers without reading back into their words the categories and concepts of more recent years. Even to ask a question such as, “How did Aristotle think the body and mind were related?” is already to assume that the mind and body are two different things, maybe two completely different kinds of things, that must be related in some way. Yet that way of thinking about human beings and their makeup might be quite alien to ancient ways of thinking. If that is so, then any attempt to answer the question as posed will be a distortion of Aristotle’s teaching. Consequently, today’s scholars have created no simple and uncontroversial
account of how the philosophers of old understood human nature or how they would have tackled what we now call “the mind-body problem.” However, the writings of the ancient Greeks were very influential in the European Middle Ages, out of which the modern world grew, so some understanding of the great thinkers of the past is necessary if we are to appreciate how present-day views evolved.

An immediate example of the problem we confront is a confusion in terminology. The nearest Greek equivalent to our concept of the conscious “mind” is probably the word “psyche,” which has given us the modern English word “psychology” for the scientific study of the mind. However, in English translations of Greek philosophy, psyche is more often rendered as “soul,” which to modern ears has a separate and narrower meaning. Furthermore, Aristotle used the term in ways different from the modern uses of either mind or soul. In many cases, “life” would seem a better translation. Given these difficulties, I shall follow a common practice in philosophy textbooks and use the terms “mind” and “soul” interchangeably, depending on the context.

Aristotle rejected the notion that the distinctive features of an object resulted from the number and nature of the atoms that were alleged to make it up. For him, the essence of an object was not the material it was made of, but the functional structure or “form” imposed on the material. In the case of a living thing, he called this essence or form its “soul.” For Aristotle, this form (or soul or essence) does not refer—as the English word might seem to imply—to the object’s shape or its constituent material but to its power to exercise certain functions. As he once said, if an axe had a soul, that soul would be chopping. In fact, of course, he did not attribute a soul to an article such as an axe but only to living things: to plants a “nutritive” soul that sustained them as living organisms; to animals a “nutritive and sensitive” soul that sustained life and also made it possible for the creatures to respond to sensations and move about unaided; and to humans a
soul that was “nutritive, sensitive, and rational.” This last—restricted to humans alone—allowed thinking to take place, in addition to sustaining life and permitting movement (see Honderich 1995 for a summary of Aristotle’s discussion of the soul). The key thing to note here is that all these uses of the word “soul” related to the organization of the physical body. The soul was not therefore the same as matter or composed of matter, but neither could it exist independently of matter.

This was a crucial point of difference between Aristotle and his teacher Plato (428–347 B.C.E.). Plato believed in the existence of a nonmaterial soul, which was itself the essential person. The soul’s association with a physical body was a temporary and in some ways unfortunate occurrence. This difference between the teachings of the two ancient philosophers posed problems for medieval Christianity when it tried to combine their ideas in a single unified scheme.

The medieval scholar who set himself the task of reconciling current Christian teaching, which was based on the Bible and some second-hand ideas from Plato, with the then newly rediscovered philosophy of Aristotle, was St. Thomas Aquinas (1225–1274). It took him 8 million words (Magee 1988, 60). Prior to Aquinas, St. Augustine (354–430) had been the major force in shaping Western Christian doctrine. Augustine did not read Greek but had adopted a revised version of Platonism that was currently available in Latin translation, including the claim that the human soul was immortal and lived on after the death of the body. By the thirteenth century, that was official church teaching, despite its apparent conflict with the Bible, which spoke of the afterlife in terms of the resurrection of the body rather than the immortality of the soul. Aquinas needed to bring together into one system the contradictory views presented by Plato, Aristotle, and the Bible. The synthesis he achieved formed the backdrop to the modern study of consciousness. Here is how he did it.

Basically, St. Thomas kept to Aristotle’s definition of the soul as the form of a living organism, apart from which it was unable to exist. So when a plant or animal died and ceased to exist physically, its soul also ceased to exist. But humans were different, because their souls were not only “nutritive and sensitive” but also rational, which meant they engaged in thinking. Aquinas noted that intellectual activity was unlike anything that plants or animals did, because it was not in itself a bodily process. Anything done by a plant—taking up water, growing, wilting, producing flowers, and so on—was a process bringing about a change in the plant’s physical state. In the same way, anything done by an animal—feeding, moving, fighting, breeding, and so on—involved a
change in the animal’s body. So it followed that the governing principle of both plants and animals, their soul, had no role apart from the physical organism with which it was associated. But humans were different. They could think. And so far as Aquinas could tell, thinking—that is, things like imagining, deciding, or planning—involved no necessary bodily process or change. So unlike the situation with plants and animals, in humans the governing principle—the rational soul— did have a role over and above that of directing the body and its organic processes. This discovery gave Aquinas the opening he needed. If the rational soul could do things that did not directly bring about changes in the body, then it was not entirely nonsensical (as it would have been in the case of plants and animals) to think of that soul as continuing in existence even after the body had died and been destroyed. This conclusion had a further consequence. According to Aquinas’s way of thinking, it would not be possible for the soul to survive the death of the body if it had come into being along with the body, simply as part of the natural process. A soul produced naturally (like those of plants and animals) would be subject to the natural process of death and decay. So if the human soul really could survive bodily death, as now seemed probable, then it must have been directly created by God outside the natural course of events. But because this rational soul was still first and foremost the life-principle of a human body, rather than a Platonist’s free-floating spirit, it was not immediately clear how exactly it could exist in isolation from the body. This problem, however, turned out to be a blessing. Dealing with it enabled Aquinas not only to reconcile Aristotle with the church’s official teaching but also to resolve the contradiction that still existed in Christian teaching on the afterlife between the bodily resurrection found in the Bible and the immortality of the soul inherited from Plato. The rational soul, said Thomas, must be able to maintain some kind of existence without a body, but it was a very unsatisfactory state
for it to be in. It would not be able to do anything except think. It needed a body to receive information through the senses, express itself, act, communicate with others, and so on. What the rational soul needed, in short, was to be reunited with its body in order to restore the whole person. Here, then, was the explanation, lacking in the Platonist version of Christianity, for the resurrection of the body. It would be the occasion for the reuniting of the body and soul of those who had died, ready to face the final judgment and either eternal bliss or damnation (see Richardson and Bowden 1983 for more on Aquinas’s discussion of the soul).
René Descartes and the Enlightenment
Brilliant as Aquinas’s synthesis was, it was not perfect. His theory of an intellectual element that rendered the soul independent of the body still left awkward questions. For example, was this view really compatible with Aristotle’s notion of the soul as the “form” of the body? And could the rational soul really function—even temporarily—when it was cut off from the bodily senses, which (according to both Aristotle and Aquinas) were its sole means of gaining knowledge? The answer given by René Descartes (1596–1650) to these two questions marks a watershed between the later Middle Ages and the Enlightenment. Quite simply, Descartes dispensed with Aristotle. It had been a mistake, he said, to suppose that the rational soul (that is, the thinking mind) and the physical body were bound together by some kind of necessity. There was a close working relationship between them, certainly, but each existed quite independently of the other. In particular, it was the mind and not the body that constituted the person, the human subject. The “I,” of whom Descartes famously wrote, “I think, therefore I am,” was his mind alone (see Cottingham et al. 1985–1991 and Document 1 in this volume). By proposing a sharp distinction between a nonphysical “thinking
stuff”—minds/souls—and a physical “material stuff,” of which everything else was made, Descartes made it possible for the physical sciences to blossom. Seventeenth-century Europe was a scene of religious turmoil, but the “Cartesian cut” between mind and material, between soul and body, meant that science could keep apart from the bloody religious conflicts of the times, claiming to be concerned only with the physical world. But the same division that allowed science to flourish without interference from religious questions about the soul also had a less fortunate result. It established the idea, stated at the outset of this chapter, that subjective experience—because it relates to the mind—is not a proper subject for scientific inquiry.

Because this attitude has been so powerful and has influenced so strongly the story told in this book, it is worth noting that Descartes himself was aware of a basic problem with his ideas. He had moved decisively away from the Aristotelian view that the mind was dependent for all its knowledge on the bodily senses. On the contrary, said Descartes, the thinking mind is self-sufficient. In 1637 he published an introduction to his philosophical ideas, called the Discourse on Method. It was aimed at a popular market and was written in his native French rather than the Latin still used at that time for scholarly works. In the book he summed up his new approach by saying, “I am a substance the whole nature or essence of which is to think” (Discourse on Method, Part Four). And this thinking mind, which was the essential person, was not tied to any physical location and did not require physical senses like touch and sight to gain information. It “does not need any place or depend on any material thing,” he insisted (Discourse on Method, Part Four).

But there was one difficulty. Even though a body might not be essential for a mind to exist and think, the fact remains that all of us—including Descartes himself—have bodies with which we (that is, our thinking minds) are closely associated. It might be possible in theory for us to function as pure minds, without bodies, but in practice we don’t. Our minds and bodies are not independent. My body is getting dehydrated, and my mind registers that I am thirsty. So my mind decides to have a glass of beer, but my body has to respond by getting up from my desk and going through to the kitchen. Then if I cut myself opening the can, I feel the pain as mine. I don’t merely observe it as damage to something else, my body, but I experience it myself. Descartes was aware of this relationship, and it bothered him. In a famous passage from his sixth Meditation, he admitted that such experiences taught him “that I am not just lodged in my body, like a pilot in a ship, but that I am very closely united to it, and as it were so intermingled with it that I seem to compose with it
one whole.” But he still maintained that mind and body were quite different kinds of “stuff” and never could explain how they might interact. He seems not really to have accepted the reality and consequences of such interaction and muttered about “confused modes of thought” and “apparent” intermingling of mind and body. A philosopher of the next generation, Gottfried Wilhelm Leibniz (1646–1716), summed the matter up when he complained, “So far as we can tell from his writings, Descartes had given up the game on that point” (Weiner 1951, 113).

The unresolved problem posed by mind-body interaction and various philosophical alternatives to Cartesian dualism (such as idealism and various versions of materialism) are explored in Chapter 5. For the present, I simply note again that Descartes’s sharp divide between the physical and mental/spiritual worlds allowed scientific investigation of the former to proceed unhampered by disputes in philosophy and religion, but at this price: the mind was assumed to be—and indeed was—excluded from scientific study. Only toward the end of the nineteenth century, with the beginnings of modern psychology, was this state of affairs challenged, and even then—as we shall see—a fierce battle lay ahead.
The Origins and Development of Psychology

Psychology is the application of scientific methods and standards to the study of the mind. It should be distinguished from two other disciplines: psychiatry, which is concerned with clinical diagnosis and treatment of mental disorders; and neuroscience, which is the study of the physical brain and nervous system. It is obviously a matter of great interest to psychologists to learn what kinds of changes in the brain are associated with particular changes in a person’s mental state. But the primary concern of psychologists is the way the mind works at the mental level, irrespective of any related physical effects in the brain.

The first question facing psychologists is where to get their data. To be an investigating scientist, rather than just another philosopher spinning theories out of thin air, a psychologist must have measurable information about the mind with which to work. A modern brain scientist has a host of techniques for looking at and measuring the physical brain (see Chapter 2), but how does a “mind scientist” set about the task of “looking at and measuring” someone’s mind? Traditionally, there have been two approaches: either ask the person to report directly what they are thinking or feeling (introspectionism) or observe what they do and deduce from their actions what their mental states must be (behaviorism). Neither is without its drawbacks, and in practice most
psychologists use a combination of both, but historically the conflict between the two has been ferocious. Looking back, we can see that nothing did more to hold back the development of a science of consciousness in the twentieth century than the dominance of behaviorism for most of that period. It need not have been like that. The “father” of experimental psychology was Wilhelm Wundt (1832–1920), who, on his appointment as professor of physiology at Leipzig in 1875, at once established the first Institute of Experimental Psychology, where he used introspection to time the thought processes of volunteer experimental subjects. The speed with which experimental subjects reported seeing a flash or hearing a buzz gave Wundt a measurement of how long it took between a sensation arriving at the eye or ear and being registered in consciousness. More complex mental tasks were used to time processes such as memory recall. This work was conducted with the utmost rigor, and the participants underwent intensive training to ensure the reliability of results. All this was an essential part of the attempt to get psychology detached from philosophy and recognized as an experimental science in its own right. The aim was to produce a mental equivalent of chemistry’s periodic table of the elements, setting out an array of basic mental sensations. One science historian of this period records that no subject left Wundt’s establishment without providing at least 10,000 “data points,” each signifying a distinct sensation in the subject’s mental landscape (Boring 1929/1950). Wundt soon attracted graduate students from several U.S. as well as European universities, and his laboratory and techniques became models for other centers that developed in the following years. Two important figures to earn their doctorates under Wundt were Oswald Külpe (1862–1915), who founded what became known as the Würzburg School, and Edward Titchener (1867–1927), who was born in my own hometown of Chichester in the south of England, but whose working life was spent in the United States at Cornell University. Both
pursued their teacher’s quest for an exhaustive table of “atomic units” of sensation, but despite (or possibly because of) the intense care and training put into their efforts, the laboratories on either side of the Atlantic came up with wildly conflicting results. Külpe’s published results indicated fewer than 12,000 “discriminably distinct” sensations, whereas Titchener’s team at Cornell claimed more than 40,000 such data points. Another public dispute arose over alleged “imageless thought,” which Titchener held to be impossible and Külpe’s subjects claimed to have experienced on various occasions. At the time such open disagreement was damaging to introspection’s reputation, although 100 years later, we can see it as just one particular manifestation of a centuries-old and ongoing debate. (For instance, the British philosophers John Locke and George Berkeley argued about the status of “abstract ideas” in very similar terms at the end of the seventeenth century; and here at the opening of the twenty-first, the same essential points are raised by the disputed claim of Robert Forman [1998] that some mystical states constitute a “pure consciousness event” devoid of all contents.)

Another significant landmark in the rise of introspectionism, fifteen years after the founding of Wundt’s institute, was the publication of William James’s Principles of Psychology (1890). While his younger brother Henry explored the human psyche in his novels and short stories, William adopted a strictly scientific approach. And for him that meant just one thing: “Introspective Observation,” he wrote, “is what we have to rely on first and foremost and always. . . . [It means] looking into our own minds and reporting what we there discover. Everyone agrees that we there discover states of consciousness” (James 1890, 185). Introspection was the only show in town. Yet just fourteen years later, this same William James (1842–1910) was claiming: “For twenty years past I have mistrusted ‘consciousness’ as an entity. . . . It seems to me that the hour is ripe for it to be openly and universally discarded” (James 1904/1934, 4). And when Wundt died in 1920, less than fifty years after the founding of his famous institute, introspectionism—as a serious method of psychology—was itself already dead.

What went wrong? The basic reason why introspectionism fell out of favor was that it failed to live up to its promise to deliver scientific respectability to psychology. That failure was due partly to internal weaknesses—the lack of reliable, repeatable experimental results, the lack of a robust theoretical underpinning, and so on—and partly to an external change in the intellectual climate. This change was brought about by
the rise of positivism, an intellectual movement that was the brainchild of the French philosopher Auguste Comte (1798–1857). His work was popularized in the English-speaking world only decades after his death, as a result of its being abridged and translated in the 1890s. It portrayed the history of human thought as an evolution from religion through philosophy to science, and so it gave psychology an added impetus to cut its past ties with philosophy and strengthen those with the burgeoning sciences. In addition, Comte had invented the term “sociology” and favored the external study of humans in their public lives rather than the method of individual introspection. The movement he founded both reflected and encouraged this prejudice. Even so, introspectionism might yet have survived, had there been nothing more attractive to take its place. But waiting in the wings was a movement whose time had come and whose subsequent dominance of psychology was to drive the very terms “introspection” and “consciousness” out of the discipline’s leading reference books until the late 1980s. That movement was behaviorism.
The Dominance and Decline of Behaviorism

The rallying call to abandon introspectionism came in 1913 from John Watson (1878–1958), a professor of psychology at Johns Hopkins University. He had taken up the post five years earlier, having abandoned an early interest in philosophy in favor of the study of animal learning, the subject of his doctoral thesis. Disillusioned by the methods of introspectionism, Watson asked why psychologists should not study humans in exactly the same way as they studied animals. People could not ask an animal what it saw or heard or felt, so they studied its behavior to learn about its mental state. Since its behavior could easily be observed and measured by any number of experimenters whose results could be compared and replicated, this method made animal psychology genuinely scientific. The same could be true of human psychology, if only it would abandon its reliance on individual introspective accounts and concentrate instead on externally observable behavior.

Early in 1913 Watson gave a course of invited lectures before a public audience at Columbia University. Despite the opposition of his colleagues—including his former supervisor—Watson used these lectures to publicize his radical ideas. The first lecture, delivered on February 24, with the title “Psychology As the Behaviorist Views It,” was published later in the year in the Psychological Review (Watson 1913) and became known as the “Behaviorist Manifesto.” By the time a scandalous divorce forced Watson out of academic life eight
years later, he was acknowledged as one of the most influential psychologists in the United States.

While still a doctoral student, Watson had told a friend that his aim was nothing less than to remodel the whole of psychology. Now he was achieving that aim. External behavior was not simply to be a tool for studying internal mental states. Behavior itself was to be the proper object of investigation, and the prediction and control of behavior would be the goal of psychology, just as the prediction and control of chemical reactions was the goal of chemistry. In Watson’s view, prediction and control were the hallmarks of proper science, and what Ivan Pavlov had achieved in manipulating the behavior of dogs, Watson intended to do with humans. He once boasted that if he were given a dozen healthy infants and the complete freedom to bring them up as he pleased, then regardless of their natural abilities, he could guarantee to make any one of them, randomly chosen, into any kind of expert he chose—doctor, artist, lawyer, or even a beggar or a thief. Although he later modified this claim, it was typical of many extreme statements he made to emphasize what he saw as the overriding importance of reflexes and conditioning in determining behavior. The societies described in Aldous Huxley’s Brave New World and George Orwell’s 1984 (first published in 1932 and 1949, respectively) are fictional expressions of this ultimate behaviorist universe.

In the process of making behavior the true object of psychological study, Watson needed to eliminate the role of consciousness because in his opinion it was the barrier that still stood between psychology and other sciences. It was this insistence that was to turn behaviorism from a useful methodology into a dogma that denied the reality of the conscious mind. The stages of Watson’s argument went like this: Scientists observed some external stimulus and the person’s subsequent external behavior. This observation was originally meant to provide information about the unseen mental states in between. But this unseen mental stage added nothing. So why bother with it? Was it not really the case that both animals and humans could be regarded as just highly sophisticated machines, responding to a given input with a corresponding output? The conscious mind had nothing to do with any of this. Consciousness did not really exist.

Hardly surprisingly, Watson made enemies. Foremost among them was William McDougall (1871–1938), a British-born psychologist of high reputation, who held posts at both Oxford and Cambridge Universities before crossing the Atlantic in 1920 to succeed William James as professor of psychology at Harvard University. He was the opposite of Watson in every way, except that they were both
forceful characters with strongly held opinions. Watson had rebelled against the puritanical Baptist faith of his mother and was morally closer to his father, whose marital infidelities had led eventually to the breakup of their family when John was thirteen years old. All this had a bad effect on the boy’s schoolwork, and the younger Watson held a lifelong grudge against his father, despite following his example by leaving his own wife and children to marry one of his graduate students. McDougall, by contrast, was a man of high moral principles and religious convictions, and they carried over into his work in psychology. A strong advocate of free will, psychical research, and life after death, he opposed the extreme version of behaviorism, which denied all these things. How different from Watson, who shortly before his death burned all his unpublished papers, declaring: “When you’re dead, you’re all dead” (quoted in Kensicki 2003).

These two heavyweights came face to face on February 5, 1924, in a public confrontation hosted by the Psychology Club in Washington, D.C. (see Document 3, this volume). One thousand people turned up to watch, and at the end of the debate they voted McDougall the winner. He had not entirely denied any value to behaviorism but had argued on commonsense grounds that it needed to be supplemented by attention to the personal feelings and intentions that inform and direct human actions. The audience agreed with him.

But by now the focus of Watson’s career had changed. The scandal surrounding his remarriage had forced him out of academic life, and he was applying his scientific psychology—“predicting and controlling behavior”—with great success in the advertising world. In the year of the McDougall debate, he became a vice president of the J. Walter Thompson Agency and supervised the accounts of many household-name products, such as Johnson and Johnson’s baby powder and Pond’s Extract.

The combination of the obvious flaws of introspectionism and the allure of a truly scientific methodology for psychology catapulted behaviorism into a dominant position that, under Watson’s successors—most notably B. F. Skinner (1904–1990)—it maintained throughout the middle part of the twentieth century. Its own inherent weakness was, of course, its refusal to acknowledge the one thing that makes up the whole of people’s lives: conscious experience. But so successful was the anticonsciousness propaganda that when, inevitably, the subject had once again to be addressed by psychologists, it could not be raised openly in its traditional form. It had to be smuggled in from an indisputably respectable and scientific source. The required Trojan horse was supplied in the 1940s by an unlikely contender:
tender: the embryonic field of research that was to develop into computer science and artificial intelligence. We have seen that behaviorists treated human beings as machines that respond to a given input (for example, the sound of someone saying, “Dinner is served”) with a corresponding output (for example, walking into the dining room). Even if the role of the conscious mind is eliminated from this little scene, there is still a perfectly respectable and totally scientific question to be explored: How does the human “machine” turn the sound of the message that the meal is on the table into the action of moving to where the table is? What is going on in the ears, brain, legs, and feet to make this happen? By the middle of the twentieth century, neuroscientists had known for some time that the brain and nervous system were made up of millions of nerve cells (or neurons) and that electrical or chemical signals could pass quickly from one to another. Now Warren McCulloch, a psychiatrist at the University of Illinois, wondered whether these neurons in the brain might work in a way similar to the electronic units in the new computers, which had been developed during World War II to help with military calculations and code breaking.Was it possible that the brain processed information just as the computer did? Together with mathematician Walter Pitts, McCulloch set about working on a mathematical model of such a “brain” (McCulloch and Pitts 1943). Meanwhile, the researchers developing the electronic computers were also struck by the possible links between their machines and human minds. The story of the two-way traffic between psychologists and researchers in artificial intelligence that helped to develop the new discipline of cognitive science is told in Chapter 6. Here I simply note that the mathematicians were far less bashful than the psychologists about discussing mental states and consciousness—indeed, they were fascinated by the possibility of building a machine that could think—and this research offered a new way into a science of consciousness. Meanwhile, the psychologists researching brain activity linked to thinking (who were now being called cognitive psychologists) stuck firmly to the notion of “information processing,” producing impressive-looking theoretical flowcharts to track the progress of the brain’s work without broaching the awkward question of how— or even whether—any of this involved consciousness. The setting sun of behaviorism still cast a long shadow.
The Bandwagon Starts to Roll

The computational model of the mind opened the way to cognitive science, which in turn made it respectable again for psychologists to look at more than just the “stimulus input” and “behavioral output” in human activity. To that extent, it did a favor to consciousness research. But not everyone was happy with the new emphasis. Philosopher John Searle, at the University of California at Berkeley, was and remains a scathing critic of the claim that the mind might work like the software on a computer. Such a setup might be able to process information, but it could never understand what it was doing in the way that a human mind does. In a famous analogy, Searle likened the computer model of the mind and brain to a non-Chinese-speaking person, who can give the appearance of conversing in Chinese but who understands nothing of the language (Searle 1980). Briefly, the deception is achieved by placing the person in a room and passing back and forth cards bearing Chinese characters. The person in the room has a set of rules (written in English) saying which “response” card should be passed out as the reply to any given “input” card. That is the equivalent of a computer running according to a particular program. The computer does not understand what it is doing, any more than the person in the “Chinese Room” understands that language (see Document 6, this volume, and the sketch following this discussion).

An equally forceful and sustained attack on the computer model has come from Sir Roger Penrose at the Mathematical Institute at Oxford University. His argument also relates to the impossibility of a computer—however large and complex—understanding what it is doing. He likes to quote the example of the powerful chess-playing computer Deep Thought, which could beat a grand master in a straight game (one in which players just obeyed the rules) but still failed to solve a simple chess problem that a schoolchild could get right, since the solution to the problem required a genuine understanding of the game (Penrose 1994b, 242).

Yet a third assault has come from those like psychologist George Lakoff, whose complaint with the computer model is that it treats the mind as though it were a free-floating abstraction rather than an embodied biological entity. It is ironic that the computational view of the mind, which has indirectly led the way toward a science of consciousness, is in some respects quite close to the dualism of Descartes, which was largely responsible for the split between science and consciousness in the first place.
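Searle’s rule book can be caricatured in a few lines of code. In this Python sketch (purely illustrative; the card names are arbitrary placeholders standing in for Chinese characters), the “room” returns a reply to every input while nothing in the system understands a word:

    # The rule book reduced to a lookup table: input card -> reply card.
    # The card names are placeholders for Chinese characters that
    # neither the program nor its operator understands.
    rule_book = {
        "card-A": "card-K",
        "card-B": "card-F",
        "card-C": "card-Q",
    }

    def chinese_room(input_card):
        # Blind symbol manipulation: match the incoming card and hand
        # back whatever reply the rules dictate.
        return rule_book.get(input_card, "card-Z")  # default reply card

    print(chinese_room("card-B"))  # prints "card-F"

However fluent the exchange might appear from outside the room, Searle’s point is that neither this program nor a vastly larger one of the same kind understands the conversation.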
In this chess problem the white player must play and draw. Human players can easily see that the white player is safe so long as its “pawn cordon” is intact, but the Deep Thought computer took the castle and lost. Roger Penrose uses this as evidence that the computer does not understand chess, even though it can win games. (Imprint Academic)
More significant than any of these particular attacks on cognitive science—or indeed any of the particular defenses and positive results arising from it—is the fact that these matters are being debated at all. After the wilderness years of behaviorism, consciousness is back on the scientific agenda, and back with a central role to play such as it has never known before. From being the area of research that dare not speak its name, it has become a seemingly unstoppable bandwagon attracting high-profile names like Nobel laureates Francis Crick and Gerald Edelman, who have already made their reputations in other fields and now wish to have a crack at the “big one.” It has become the meeting place for so many different areas of research and scholarship that the regular international conferences “Toward a Science of Consciousness” (first held in Tucson, Arizona, in 1994) now offer talks and workshops on almost 100 different topics. And the Journal of Consciousness Studies (also founded in 1994) is specially dedicated to providing a forum where specialists in disciplines that would normally have no contact can have a dialogue with each other.

Perhaps the final proof that the “impossible” science had come into being was the formation, in the wake of the first Tucson conference, of the learned and international Association for the Scientific Study of Consciousness. One of its cofounders recalls the felt need to ensure that their membership really did represent the developing global field of consciousness research. There was a perceived danger at that time of its being dominated by a relatively small group of American—especially Californian—researchers. To prevent such an imbalance, they decided to hold their conferences alternately in North America and Europe, and they have also tried to maintain a balance on their board between scientists from the two continents. However, it is perhaps typical of this story that the founding of the association was marked by a vigorous debate as to its name, especially whether the word “scientific” should be included or not (Baars, personal communication). Those who supported its inclusion believed both that the new society’s emphasis should be on scientific research
and that it should proclaim this intention in its name. Those who were opposed believed that since all serious study must by definition be scientific, the word’s inclusion would be superfluous. More than that, they felt it might be taken to imply that there could be serious nonscientific study, an impression they did not wish to convey. As things have turned out, there is often a predominance of philosophy at their meetings, albeit philosophy dedicated to the cause of scientific research, so perhaps the word’s inclusion is a useful reminder of the association’s primary focus. Be that as it may, the debate on the name was a timely reminder that, even in its apparent hour of triumph, the “impossible” science could still take nothing for granted.
2
The Biological Brain
On October 25, 1906, a group of professors sat down at the Karolinska Institute in Stockholm to determine that year’s winner of the Nobel Prize in physiology or medicine. It was by no means their first meeting, and time was running out. The award ceremony was to take place in less than two months, and a decision had to be made. Before the committee was a shortlist of just two names: two men who had both made their reputations researching the structure and working of the nervous system and the brain but were bitter opponents.

The first name was Camillo Golgi (1843–1926), a professor at the University of Pavia in Italy, where he had succeeded his former teacher, Giulio Bizzozero, in the chair of general pathology. Although most of his working life was spent in Pavia, Golgi had been forced by financial difficulties in the early 1870s to interrupt his academic life and accept the better-paid post of chief medical officer at the hospital for the chronically ill in nearby Abbiategrasso. Denied the research facilities of the university, he turned a small kitchen into a makeshift laboratory and there developed a method of staining and observing nerve cells that made him famous and is still used today.

Other body tissue was already being studied under the microscope, but the soft material of the brain was too densely packed for its structure to be seen by the normal methods of the day. By a process of trial and error, Golgi eventually hit upon the two-stage process that, for reasons that are not fully understood even now, led to a small proportion of the brain’s gray matter—only 5 percent or less—being stained black against a yellow background. The combination of the intense contrast and the comparatively small amount of material affected meant that the structure stood out clearly. Like most staining methods, Golgi’s “black reaction” depended on silver nitrate, a compound already familiar from the photographic techniques developed toward the middle of the nineteenth century.
Italian physician Camillo Golgi was the first to use silver nitrate to stain nerve tissue for study. (National Library of Medicine)
His unique contribution was to harden the nervous tissue in another chemical—potassium bichromate—before impregnating it with the silver that would darken it. Golgi first published his findings in 1873 in a short article titled "On the Structure of the Brain Gray Matter."

The idea that our bodies are made up of a large number of small individual units called cells was already quite well established by 1870, having been first put forward some thirty years earlier. But whether nerve tissue was also cellular was still an open question, since nobody had been able to study it in sufficient detail. On the basis of his own observations, Golgi believed that the brain's gray matter was not composed of individual cells but rather should be thought of as a single network of nervous material extending right across the brain. This became known as the "reticular theory" (from the Latin rete, meaning "net"), and Golgi clung to it for the whole of his life.

The alternative view, that the brain, like other tissue, was made up of millions of separate cells, was known as the "neuron theory." Its chief champion was Golgi's rival for the Nobel Prize, the Spanish neuroscientist Santiago Ramón y Cajal (1852–1934). To add insult to injury, it was on observations made using Golgi's own staining technique that Cajal based his opposition to the reticular theory (Jones 1999). Like Golgi, Cajal trained as a doctor but spent most of his working life as a professional academic. He held professorships successively at Valencia, Barcelona, and Madrid. Cajal was quick to appreciate the importance of Golgi's black reaction as a technique for studying the structure of the brain, but he suspected that the appearance of a continuous network was an illusion produced by imperfections in the staining method. He worked to refine the process and became convinced that there were indeed separate nerve cells—or neurons—with a small gap between each pair, which later came to be called the synapse. Golgi had been nominated for the Nobel Prize in each of the five
years since its first presentation in 1901, and Cajal's name had also been put forward previously. The committee had commissioned one of their number, Emil Holmgren, to draw up a report comparing the strengths and weaknesses of the two candidates. A few years earlier Holmgren had been a decided supporter of Golgi, but by 1906 his massive comparison of the two candidates—fifty typed pages in all—came down in favor of Cajal, chiefly on the grounds that he had "built almost the whole structure of our framework of thinking" about the anatomy of the brain. In addition, Cajal was constantly innovating and improving both observations and interpretations. Golgi, by contrast, was still defending old views that were now largely discredited—not least as a result of Cajal's research. There was no denying, though, that without the singular breakthrough in experimental technique made by Golgi, none of Cajal's advances would have been possible.

Faced with this impasse, the committee that gathered on that late October day voted on a proposal to award the prize jointly to both men. Although this possibility had been considered before, there had been no occasion in the five-year history of the prize for physiology or medicine when a joint award had actually been made. The committee voted—by a majority verdict—to set the precedent, and the two contenders were duly notified of their shared honor.

Neither laureate was exactly overjoyed. Not only was there a professional rivalry between them, but there was also a personal antagonism. Golgi was cold and aloof, whereas Cajal had an impulsive, artistic temperament, and it was rumored they might refuse to be seen together on the same platform. Both did attend the award ceremony on December 11, 1906, although reports in the press noted the obvious bad blood between them, and Golgi's Italian supporters regarded his acceptance lecture on "The Doctrine of the Neuron: Theory and Facts" as a thorough demolition job on his opponent's position (for the presentation speech, see Document 2, this volume). It was perhaps with the magnanimity that comes from having one's own views vindicated that Cajal, writing in his autobiography many years later,
The net-like appearance of the brain's neurons shown by Camillo Golgi's staining method. Clarity is achieved as less than 5 percent of the nerve tissue takes up the stain. (Imprint Academic)
was able to say this in connection with the novelty of a joint award: “The other half was very justly adjudicated to the illustrious professor of Pavia, Camillo Golgi, the originator of the method with which I accomplished my most striking discoveries” (Cajal 1989, 546).
Brain Anatomy and Physiology

I have started this chapter with an account of the Golgi–Cajal controversy because by the end of the eighteenth century the brain had become established as the part of the body most closely linked with the conscious mind, and that has remained the case up to the present time, although there is currently a greater emphasis on the role of the whole body in relation to conscious experience. Consequently, it is hard to discuss the science of consciousness without getting involved in some basic brain anatomy (a description of the brain's structure and its various component parts). Closely linked with anatomy is physiology, the study of how these parts function.

If we look at a complete human brain that has been removed from the skull, most of what we see is the part known as the cerebral cortex, or cortex for short. It is a wrinkled sheet of soft tissue, just a few millimeters thick, made up of several layers of nerve cells or neurons. The outer portion consists of "gray matter," so called from its appearance and often associated in the popular imagination with intellectual ability. The underlying "white matter," so called because of its paler color, consists of the connections of the cell bodies in the overlying gray matter. The whole cortex is divided into two symmetrical "hemispheres," left and right, which are joined by a large bundle of fibers known as the "corpus callosum," which allows communication between the two halves. In evolutionary terms, the cortex is the most recent part of the brain to have developed, and it is proportionally much larger in humans than in any other creature. Underneath it are the evolutionarily more primitive parts of the brain.

The terminology used to indicate the parts of the brain grew up over several centuries and can seem quite bewildering, but some familiarity with the more common terms is helpful (see the Glossary at the end of this book). As a general rule, the older names refer to the physical appearance of the various parts and the newer names to what is thought to be their function. Cortex itself, for instance, is an old name, given because "cortex" is Latin for the bark of a tree, which has a wrinkled look similar to that of the outer layer of the brain. Different areas of the cortex are generally called by newer terms that relate to function. The most significant of them will be introduced below in the discussion of brain
Three views of the human brain, showing the left and right hemispheres and the center of the brain viewed from the left: the frontal, temporal, parietal, and occipital lobes; the prefrontal cortex (PFC); the infero-temporal cortex (IT); the primary visual cortex (V1); the primary motor and primary somatosensory cortices; Broca's and Wernicke's areas; the cerebellum; and, at the center of the brain, the cerebral cortex, corpus callosum, basal ganglia, hypothalamus (amygdala below), brainstem, and thalamus including the ILN and LGN (the hippocampus is also in this region). (Adapted from Cotterill [1998] by kind permission.)
“mapping.” Important features of the older brain regions are marked on the figure on page 25 and are introduced in later sections or chapters as their roles become relevant to our discussion of consciousness. They are the thalamus, including the lateral geniculate nucleus (LGN); the hippocampus (Latin for “seahorse,” so named because early anatomists thought its shape resembled that creature); and the cerebellum. To appreciate how the brain actually works and how its functioning might relate to conscious experience, we need to look in more detail at the structure and functioning of the neurons, or nerve cells, that make it up. There is a “cell body,” containing the nucleus, and from that grow two types of structures that allow communication with other cells. Signals are received through a structure termed a “dendrite” and passed on via one termed the “axon.” Although in general an individual human cell is very small and needs a microscope to be seen, the axon can be quite long—several centimeters—and thus permits physical contact between neurons whose cell bodies are considerable distances apart. A neuron will only have one axon, but the end is branched so that it can send a signal to more than one other cell. Each neuron may have many dendrites, so that it can also receive signals simultaneously from more than one source. There is a small gap between the axon of one cell and the dendrite of its neighbor. As we saw above, it was Cajal’s discovery of this gap, which he named the synapse, that demonstrated the error in Golgi’s reticular theory. In fairness to Golgi, however, it has to be said that the synapse not only acts as a physical break between neurons but also provides the means for them to be “in touch,” as we might say. So although he was technically wrong about their structure, in terms of the practical way that brain cells function, Golgi’s idea of a network was right on the mark. A hundred years after his theory was propounded, “neural nets” are still very much part and parcel of the language of cognitive science and its models of how the mind and the brain work. The way that neurons communicate is by producing, releasing, and taking up certain chemicals known as neurotransmitters. There are several different transmitters, acetylcholine and serotonin being two of the commonest. The stimulus for a neuron to release its transmitter is an electrical signal inside the cell, known as an action potential since it causes the cell to act. In the neuroscientist’s less formal jargon, this process is referred to as a cell’s “firing” and is the electrical activity in the brain that can be measured by the electroencephalograph (EEG), described below in the section on brain-mapping techniques. When a cell fires, it releases its transmitter from the synaptic ter26 • The Biological Brain
at the end of its axon. Across the synaptic gap, the dendrites of the neighboring neuron (the "postsynaptic cell") have receptors that can take up the transmitter. It used to be thought that any given nerve cell could produce only one kind of transmitter, but the situation is now known to be more complicated. Some cells are specific to one transmitter, but many are more flexible. The postsynaptic cell will also probably have dendritic receptors for more than one kind of transmitter and may have sufficient dendrites to receive transmitters from tens of thousands of adjacent neurons simultaneously. With sufficient stimulation ("depolarization"), this postsynaptic cell will itself develop an action potential and fire, releasing neurotransmitters of its own from its axon and passing a signal further along the neural pathway.

Once they have done their work, the dendrites allow the transmitter chemical to fall back into the fluid surrounding the cells. From there it may either be reabsorbed by the cell that produced it or broken down to make waste products called metabolites. However, these important chemicals of the brain, such as serotonin and acetylcholine, are now understood to have a more general role than simply "transmitting messages" between cells, and they may spend quite a lot of time simply floating in the fluid surrounding the neurons and influencing (or modulating) the overall activity level of the brain. For this reason, they tend nowadays to be called "neuromodulators."
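The cycle just described, in which a cell integrates incoming stimulation, lets the charge leak away, and fires once a threshold is crossed, can be caricatured in a few lines of code. What follows is a minimal sketch only: the threshold, leak rate, and input strengths are arbitrary illustrative numbers, and real neurons integrate inputs from thousands of synapses with far richer dynamics.

```python
import random

# A toy "integrate, leak, and fire" neuron. Incoming signals depolarize the
# cell, the charge leaks away over time, and crossing a threshold makes the
# cell fire and reset. All constants are illustrative, not measured values.

THRESHOLD = 1.0   # depolarization level at which the cell fires
LEAK = 0.9        # fraction of accumulated charge retained per time step
RESET = 0.0       # potential immediately after firing

def simulate(inputs):
    """inputs: stimulation arriving via the dendrites at each time step."""
    potential = RESET
    fired_at = []
    for t, stimulus in enumerate(inputs):
        potential = potential * LEAK + stimulus   # integrate and leak
        if potential >= THRESHOLD:                # action potential:
            fired_at.append(t)                    # the cell "fires"...
            potential = RESET                     # ...and resets
    return fired_at

random.seed(1)
weak = [random.uniform(0.0, 0.05) for _ in range(50)]
strong = [random.uniform(0.0, 0.25) for _ in range(50)]
print("weak stimulation fires at steps:  ", simulate(weak))
print("strong stimulation fires at steps:", simulate(strong))
```

The point of the toy model is simply that stronger or more frequent stimulation makes the cell fire more often, which is one sense in which patterns of firing can carry information.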
Brain-Mapping Techniques

As early as the end of the eighteenth century, the brain had been closely linked with the conscious mind, having defeated the rival claims of the heart in that respect. But at that time there was no direct way of studying the nature of the connection between the mind and the physical brain. The earliest attempt was phrenology, developed by Franz Joseph Gall (1758–1828). He became convinced that the characters and dispositions of people depended in a significant way upon their brain and that information about the brain could be reliably inferred from a study of the bumps and depressions on the outside of the head. Gall's ideas had a brief popularity in the Vienna of the 1790s, but his lectures were banned in 1802 as being dangerous to religion. Although long discredited as a system, phrenology's model heads—with areas marked on the surface representing the position of the various mental faculties—hold a continuing fascination and are still sold as desk ornaments today. Its importance for our story is that it was the first serious attempt to find a correlation between the physical properties of the brain (as deduced from the skull surrounding it)
Franz Joseph Gall was convinced that the character and disposition of a person depended significantly upon his or her brain, and such information could be reliably inferred from studying the bumps and depressions outside the head. (National Library of Medicine)
and the mental states of the person (as reported by them or deduced from their behavior).

By the second half of the nineteenth century, physical examination of the brain itself—although limited to postmortem examination—was beginning to provide the first clear evidence of a link between certain mental powers and particular parts of the brain. The first detailed reports in the 1860s and 1870s showed that damage to the left side of the brain disrupted a patient's language ability. If the damage was in Broca's area, part of the left frontal lobe (named after Paul Broca, 1824–1880), speech production was affected; speech perception was impaired if the injury was further back in what became known as Wernicke's area (after Carl Wernicke, 1848–1905, who published the relevant details in 1874, when aged only twenty-six). This research was the start of the theory that we draw on the left side of the brain for our rational, logical, intellectual powers and on the right side for our creative, instinctive, emotional qualities (for more on Broca's and Wernicke's research, see Bechtel et al. 2001). Nobel Prize–winning research by psychoneurologist Roger Sperry (1913–1994) on "split brain" subjects in the 1960s confirmed the left-brain–right-brain distinction (see Document 7, this volume).

This idea caught the public imagination and burgeoned into a veritable industry during the last decades of the twentieth century. At my office, we even had a novelty coffee mug, marked up with left-brain and right-brain sides, to be used according to the mood of the drinker. It often happens that just as an idea catches on with the general public, the experts who first propounded it begin to have second thoughts. Such has been the case here, and the left-brain–right-brain dichotomy is now treated with less enthusiasm by some cognitive scientists (Squire 1998, 65), but it does still have its champions. Whatever the outcome of that particular debate, the work of Broca and Wernicke and their like was an important stage in what is called
the “mapping” of the brain, namely the relating of physical regions of the brain to particular mental processes or cognitive functions. Indeed, researchers using modern scanning methods described below have been able to confirm and further refine the correlation between brain areas and the processing of words first predicted by Broca and Wernicke. Two major developments in the way mental events can be linked to specific sites in the brain occurred in the decade before World War II. One of them capitalized on the brain’s electrical activity; the other was a spinoff from clinical surgery on the brain. Perhaps surprisingly, the brain itself contains no pain-sensitive nerves. Consequently—alarming as it sounds—it is possible to open up the skull under local anesthetic and operate on the cortex while the patient is awake and alert. From the early 1930s onward, the Canadian brain surgeon Wilder Penfield (1891–1976) exploited this fact to talk with his patients (who included his own sister) during surgery (Penfield and Rasmussen 1950; Penfield 1958; see Leitch 1978 about Penfield’s sister). Of particular interest was his treatment of epilepsy. Penfield would stimulate different sites in the cortex with a mild electric shock, and each probe would cause the patient either to have some unbidden thought or experience some sensation.When the reported thought or sensation matched one previously associated by that patient with the onset of a fit, Penfield assumed that he had found an area of the brain likely to be implicated in causing the epilepsy. He would then carefully remove the problem area of brain tissue.As a byproduct of this diagnostic work, he found himself in possession of a large amount of evidence that showed certain areas of the brain’s cortex were associated with specific kinds of thoughts or sensations or memories. Dramatic as these stories of brain surgery may be, the greater contribution to brain-mapping studies has come from the EEG, which was being developed by Hans Berger (1873–1941) in Germany and
Canadian brain surgeon Wilder Penfield. As a byproduct of his diagnostic work, Penfield possessed evidence showing that certain areas of the brain's cortex were associated with specific thoughts, sensations, or memories. (National Library of Medicine)
by Edgar Douglas Adrian (1889–1977) in England at the same time Penfield was carrying out his operations. In the second half of the twentieth century, it became a widely used tool, both for clinical diagnosis and for research into the neural correlates of consciousness (NCCs). This research seeks to discover which areas of the human brain are active during conscious events such as visual awareness. It is a major task of cognitive science, although, as we shall see in later chapters, it still leaves open the more difficult and important question of whether neuronal activity that merely occurs at the same time as a conscious event, or is in some other way associated with it, actually causes that event or is even identical with it. Nonetheless, the EEG has played a significant role in facilitating experimental work in this area.

The great advantage of the EEG over Penfield's methods and also over more recent attention-grabbing scanning techniques is that it is completely noninvasive. The apparatus consists of a cap worn on the head, rather like an old-fashioned bathing cap, fitted with a number of electrodes that pick up electrical signals generated by the firing of the neurons. Nothing is pushed into or through the skull, nor is any external electrical current passed into the head; EEG is simply a matter of passively recording electrical activity in the brain in the course of its normal functioning. There are two other points in favor of this apparatus. First, its measurements relate to the electrical activity of neurons themselves, which offers some advantages over methods in which neuronal activity has to be deduced from some indirect indicator, such as varying rates of blood flow or oxygen uptake in the brain. Second, it has extremely high temporal resolution, meaning it can record changes in electrical potential, as they happen, to an accuracy down to one-hundredth of a second.

The major disadvantage is a correspondingly poor spatial resolution. In other words, EEG tells us very precisely when the neurons are active, but it cannot say whereabouts in the brain the action is happening, because the sixteen or thirty-two electrodes deployed over the outside of the skull record the overall electrical pattern at the surface; they cannot analyze that total into its constituent parts. A typical change in electrical potential recorded by EEG will result from the simultaneous activity of 10 million synapses in an area of cortex measuring about 1 square centimeter. And more than one such group might be active simultaneously. The apparatus is most sensitive to signals coming to it at right angles through the skull, but it will also pick up more tangential signals. The multiple sources of the signal within the brain result in what is called the inverse problem: any given effect at the surface could have been caused by any number of different spatial patterns of firing in the cortex, and no amount of mathematical analysis can determine which of the possible causes was the actual cause in any particular case.
A physician monitors an EEG while a nurse attends to the patient. (National Library of Medicine)
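The logic of the inverse problem can be made concrete with a toy calculation. Suppose, purely for illustration, that three cortical sources each contribute to the voltage at two scalp electrodes through fixed weights. The weights below are invented for the example; the point is only that two quite different patterns of firing can produce identical readings at the surface, so the readings alone cannot decide between them.

```python
import numpy as np

# A toy forward model: entry [i, j] says how strongly cortical source j
# shows up at scalp electrode i. The numbers are invented; real EEG has
# vastly more sources than electrodes, but the shape of the problem is
# the same: more unknowns than measurements.
mixing = np.array([[1.0, 1.0, 1.0],
                   [1.0, 2.0, 3.0]])

sources_a = np.array([1.0, 0.0, 0.5])               # one firing pattern
sources_b = sources_a + np.array([1.0, -2.0, 1.0])  # a very different one

# Both patterns yield exactly the same electrode readings, because their
# difference lies in the "null space" of the forward model.
print(mixing @ sources_a)   # [1.5 2.5]
print(mixing @ sources_b)   # [1.5 2.5]
```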
There is also another problem. As they pass from the soft tissue of the cortex through the harder bone, skin, and hair of the head to the surface of the scalp, the EEG signals are spatially distorted. The effect is analogous to the way light is distorted by frosted glass, blurring the scene viewed through it, and this further confuses attempts to tell where in the brain the signals originated.

A more recent offshoot of the EEG is the magnetoencephalograph (MEG). As its name suggests, rather than recording electrical potentials, it measures the associated changes in magnetic field at the scalp. Its advantage is that magnetic fields do not suffer from spatial blurring on passing through the skull. On the downside, the brain's magnetic fields are exceedingly weak (far weaker than the earth's magnetic field), and only a limited number of electrical sources within the brain generate a magnetic effect that is detectable on the scalp surface. To pick up any reading at all, it is essential to use extraordinarily delicate receptors called superconducting quantum interference devices (SQUIDs), which have to be housed in a specially designed helmet because they can operate only at temperatures close to absolute zero. One beneficial result of the weak signal is that the inverse problem, although still present, is less damaging than with EEG, because there is less competing information.
In addition, there being no loss of clarity as the signals pass through the skull, it is possible to employ layers of sensors at slightly different distances from the scalp to help build up a more accurate three-dimensional picture, just as a surveyor triangulates sightings to fix both direction and distance. Even so, the technical problems are formidable. So, valuable as they are, there are limits to the use of both EEG and MEG.

Then into the rather quaint world of EEG, with its electrode caps and dull black-and-white squiggly traces, there burst two bright new stars whose computer-generated images of the brain brought glamour and color to neuroscience. Enthusiasts such as Michael Posner, a neuropsychologist at the University of Oregon, declared that observers were "seeing the mind" when they looked at these brain images (Posner 1993). The cause of all the excitement was the arrival in the 1980s of positron emission tomography (PET), followed in 1991 by nuclear magnetic resonance (NMR), two body-scanning systems that had been designed and developed in the cause of diagnosing cancers and other diseases. Clinicians and patients were excited too, and soon every local hospital had its scanner up and running. But the new machines also proved to be an absolute gift to brain researchers.

Leaving aside the high-tech displays, the real value of brain-imaging scanners was that they were strong exactly where the EEG was weakest. The new machines were able to provide a detailed picture of slice after slice of the brain, thus enabling a three-dimensional model to be built up. This model allowed surgeons to see the exact position of a tumor, for example, before they made their first incision. It also held out to neuroscientists the possibility of locating neural activity with pinpoint accuracy. But these bright machines also have their drawbacks, and in practice they have not put the humbler EEG out of business, nor can overblown claims that they show the mind "lighting up" be substantiated. Yet without doubt, they have revolutionized the mapping of the brain and transformed certain aspects of consciousness science, so it is important to understand both their value and their limitations.

The first disadvantage, compared to EEG and MEG, is that the procedures can raise certain ethical issues. PET, for instance, involves administering a radioisotope drip to the patient or volunteer for the duration of the scan. In the case of routine research carried out at the Institute of Neurology in London, volunteers are told to expect to be lying in the scanner for nearly two hours while the radioactive tracer is being fed into their body. The ethical committee's recommendation—followed by all the researchers—is not to scan women of childbearing age for research purposes. This precaution says more
about the extreme care taken in matters of safety than it does about any real danger to volunteers; nonetheless, it represents a certain restriction on the application of this particular imaging technique.

It strikes me as ironic that the original name of NMR was changed to magnetic resonance imaging (MRI) to avoid using the word "nuclear," which had made some people nervous because of negative connotations like radioactive fallout. Yet in reality it is the cozy-sounding PET technology that involves the injection (albeit in small amounts) of a radioactive substance. In the case of magnetic resonance imaging, the word "nuclear" refers simply to the nucleus, or center, of hydrogen atoms, which occur abundantly and naturally in the human body. Each nucleus acts as a tiny bar magnet and sets up the resonance that can be captured by the scanner. One possible cause of concern with MRI is the subjection of the patient or research volunteer to very strong and rapidly changing magnetic fields. This procedure is harmless enough for most people, but even they have to be scrupulous in removing every scrap of metal—coins, spectacle frames, or jewelry—before entering the scanner. The utmost care is always taken to protect patients with cardiac pacemakers or any other metallic devices in their bodies, for whom exposure to the apparatus might have extreme consequences. All those who work with a machine of this kind treat it with tremendous respect, calling it simply "the magnet."

A basic limitation on any measuring or recording device is the time it takes to operate. If the shutter of my camera stays open for one-hundredth of a second, I can never photograph an event that is over in half that time. All I will get is a blur. This simple fact is the Achilles' heel of the scanners when it comes to capturing neural activity. It takes a minimum of several seconds—and even minutes in some cases—to collect the signals to create a PET image, but the brain events associated with conscious states involve changes in the course of a few milliseconds. It would be like trying to photograph a flight of birds with a pinhole camera. Time does not matter when imaging something static, like a piece of damaged brain tissue, but it is crucial when the target is the real-time functioning of the brain. But in that case, it is reasonable to ask, how can PET and functional magnetic resonance imaging (fMRI) be of any use at all in consciousness research? (Functional magnetic resonance imaging measures the brain's activity, whereas MRI captures its static anatomy. fMRI operates somewhat faster than PET but still much slower than the rate of neuronal events.) The answer is that functional brain imaging is possible because neither PET nor fMRI actually measures neuronal activity as it happens.
At first glance, that is another grave disadvantage of these newer techniques compared to EEG and MEG—and the reason why talk of directly "seeing the mind" in scanned images is nonsense—but the great compensation is that the time problem is circumvented. Instead of recording the neurons and their impossibly fast events directly, the scanners focus on related changes in the brain's chemistry or physics, which indicate that there has been neuronal activity in the vicinity.

With fMRI, the target is the relative amount of oxygen in the blood that serves the brain, because the magnetic properties of oxygenated hemoglobin and deoxygenated hemoglobin in the blood differ slightly. An increase in neuronal activity is associated with an increase in the oxygen level in the local blood vessels, and this increase is the change detected through the fMRI signal and made visible on the computer-generated image. Since oxygenated hemoglobin levels change much more slowly than the associated neuronal activity, the relatively long time required by fMRI to record the change is no longer a problem.

The measurement in PET scans also relates neuronal activity to changes in the blood supply, but in this case it depends upon the radioactive tracer compound introduced into the body throughout the period of the scan. When the tracer compound (typically water containing a radioactive isotope of oxygen as a "label") undergoes radioactive decay, it produces a subatomic particle called a positron, which is almost immediately annihilated on meeting an electron, producing a pair of gamma rays sent off at 180 degrees to each other, that is, in exactly opposite directions. These gamma rays form the "positron emission" that gives PET its name and that the apparatus actually detects and records. The fact of their arriving simultaneously at detectors on opposite sides of the head forms the basis for figuring out where in the brain they originated. It is known that neuronal activity will be reflected in increased blood flow, and therefore more of the radioactive tracer will be carried to those areas of the brain where nerve cells are active. This increase will show itself in the production of more radiation to be picked up by the PET apparatus. As with fMRI, the change actually measured—in this case, the rate of gamma ray emission—is not directly related to the speed of the associated brain events, so the problem presented by the slowness of the recording apparatus is sidestepped.
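How a slow, blood-based measurement can still localize fast neural events can also be sketched in code. The toy model below uses an invented stimulus train and a gamma-shaped response curve of the general kind found in textbook treatments of the hemodynamic response; it is not the calibration of any real scanner. It shows that although the simulated blood response lags and smears the neural bursts, its peaks still line up with the periods of activity.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # time step in seconds
t = np.arange(0, 60, dt)      # one minute of simulated recording

# Neural activity: two brief bursts, at 10 s and 35 s (invented times).
neural = np.zeros_like(t)
neural[(t >= 10) & (t < 10.5)] = 1.0
neural[(t >= 35) & (t < 35.5)] = 1.0

# A gamma-shaped response with a small undershoot, peaking roughly
# 5 seconds after the neural event (an illustrative shape only).
hrf_t = np.arange(0, 30, dt)
hrf = gamma.pdf(hrf_t, a=6) - 0.35 * gamma.pdf(hrf_t, a=12)
hrf /= hrf.max()

# The simulated blood-oxygen signal is the convolution of the two.
bold = np.convolve(neural, hrf)[: len(t)] * dt

print("neural bursts at ~10 s and ~35 s")
print("simulated signal peaks near t =", round(t[np.argmax(bold)], 1), "s")
```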
It has been said that the great advantage of PET and fMRI over EEG is the spatial accuracy of the former, and the absence of an inverse problem of the kind that plagues EEG is a further plus for these techniques. But the indirect measurement methods just described entail a less precise result than might otherwise be expected. Both scanning techniques measure something originating in the bloodstream—either an oxygen-related magnetic field in the case of fMRI or gamma radiation in the case of PET—and as a result, the location recorded will be the location of the blood vessels supplying the neurons, not of the active neurons themselves. Since the relevant blood vessels take up a space much larger than the neurons do (say 1,000 times bigger), this considerably reduces the precision obtainable. Even so, it is a great improvement upon EEG.

It is now time to consider the application of these brain-mapping methods to the study of consciousness. We turn from technical matters to more practical ones relating to the kinds of research projects for which they are useful and the sort of experiments it is possible to carry out, given the physical limitations imposed by the various pieces of apparatus. I have already pointed out the huge problems associated with MEG, which rule out its use at present for routine research. Even the EEG, which appears to use the simplest piece of equipment, in the form of the cap with electrodes, imposes considerable constraints. I had expected that EEG would be the preferred technique among volunteers, but that is not the case. The cap with the electrodes takes quite a while to put on, involving the use of a jelly to ensure good readings, and can be uncomfortable for subjects to wear. Then there must be no movement of the head during recording, which makes conversation with the experimenter impossible, and even eye blinks interfere with the electrical readings. On top of this, the volunteer has to sit in an electrically isolated room, away from direct contact with the experimenter, all of which can make MEG, during which the subject just sits in a chair, or PET—described below—seem quite attractive by comparison.

PET scanning requires subjects to lie or sit with their head under a large hood containing the sensing equipment. As with EEG, this arrangement leaves the hands and arms free to undertake tasks, and communication with the researcher is easy, but the head has to be kept perfectly still. Typical tasks will be to look at pictures, think about what is being viewed, make simple hand movements, and report responses by means of a switch. To give an idea of the kind of research undertaken using PET, consider one of the earliest projects, conducted at Washington University in St. Louis during the mid-1980s under the direction of Marcus Raichle, a pioneer of the method. The aim was to compare the areas of brain activity associated with various aspects of speech, to see whether the results from the PET apparatus confirmed the involvement of those parts of the cortex already associated with these functions, such as Broca's and Wernicke's areas. The answer was
that these areas were involved but that many other regions appeared to be active as well. In another early experiment, Harvard University psychologist Stephen Kosslyn used PET equipment at Massachusetts General Hospital to show that the section of cortex at the back of the brain, known to be active when we actually look at objects, also becomes active when we just imagine looking at objects. It is a mark of the growing authority of PET results that this phenomenon has now become accepted as a commonplace fact for neuroscientists, yet publication of this work was originally delayed for several years (until 1993), owing to doubts at the time as to whether such surprising results could really be true.

The external constraints imposed by the fMRI equipment are altogether more restrictive than those associated with PET or EEG. The subject lies full-length on a special bed, which moves up and into the tunnel of the scanner, a hollow tube about 3 feet across and 8–10 feet long. This is "the magnet." A special device called a coil is placed over the face, which allows the researcher to access the signal from the brain that will create the computer-generated images. When in use, the scanner is excessively noisy, so the subject—already physically separated from the researcher and support staff—must wear earplugs to make the process bearable. Because the series of experiments might last anywhere from thirty minutes to an hour and a half, this procedure is not for the fainthearted or claustrophobic (although a panic button is supplied for emergencies). During the scanning, a projector screen outside the machine but visible to the subject via a mirror is used to provide images, and the earplugs may be replaced by headphones if aural cues form a part of the tests. The subject has a keypad to report responses in the course of the trials. Valuable as the technique has undoubtedly proved, these practical limitations, taken together with the other shortcomings of fMRI already noted, should make us wary of regarding it as the golden key to unlock the entire science of consciousness.
Single-Cell Investigations

The problems faced by neuroscientists investigating the brain's 50 billion or so neurons can be compared to those encountered by social scientists trying to study the behavior of large populations of people. The task engaging the brain investigator is on a greater scale—there are more cells in a single brain than there are people in the whole world—but the principles are the same. The sociologist who wants to discover, say, the commuting habits of workers in New York has two
basic choices. One is to look at the overall picture. Doing so might mean recording, for different times of the day, traffic-flow rates at key junctions, numbers of passengers per hour using the subway, density of pedestrians on certain streets, and so on. The problem with such a broad approach is its lack of precision. There is no obvious way to distinguish the target population of workers from all the sightseers and other users of the city's thoroughfares. To overcome this problem, our researcher could take the alternative option and start at the other end. The task in this case would be to take detailed information from just a small sample of individuals about their work and travel patterns and then scale up the answers. The danger here is the undue influence that the choice of the sample can have on the final outcome. Just how representative is it? Entire groups of commuters might inadvertently be left out, making the results hopelessly inaccurate.

The equivalents of both these approaches—large-scale observation and "opinion poll" sampling—each with its attendant weaknesses, have been used by neuroscientists. Equipment such as the EEG recorder and brain scanners works on the large scale and allows global pictures of the whole brain's activity to be taken. I now turn to a technique that is much closer to the individual sampling method because it records the activity of single neurons over time. Single-cell recording is done by surgically implanting microelectrodes, one electrode in each of a number of cells. The first great advantage of this method is that it is spatially very precise: the experimenter knows the exact location of the neuron that is being measured. The second great virtue is that it measures the cell's electrical activity directly, giving millisecond accuracy in timing alongside spatial precision. Against these positive features there is one obvious negative: the implanting of electrodes is highly invasive. Even so, electrodes are routinely implanted in human subjects for presurgical epilepsy mapping, and some of these patients have cooperated in psychological experiments concerned with discovering the neural correlates of consciousness. The classic single-cell work, however, has been done using animals of various kinds—nonmammals as well as mammals—including cats and monkeys.

Using animal subjects for tests raises a scientific question and an ethical question, and the two are linked. The scientific question is this: Are the brains of cats and monkeys sufficiently similar to human brains to make the experimental findings relevant to the study of human consciousness? No one can answer that for certain, but the similarity in anatomy and physiology between the animal and human organs implies the answer yes. And clearly the researchers doing this work believe the answer is yes; otherwise they would not
spend years training the animals and carrying out experiments with them. But that leads into the ethical question: If animal and human brains are similar enough to yield useful results, is it not likely that the animal and human states of consciousness are also very similar? Does that not make it unethical to carry out any procedures with cats or monkeys that would be prohibited as unethical if carried out on humans?

Arguments based on evolutionary continuity and similarity of morphology (i.e., the shape and structure of their bodies) between human and nonhuman animals have been put forward by a number of researchers. The most recent is Bernard Baars at the Neurosciences Institute in California, who has concluded: "There are no known differences in brain mechanisms of sensory consciousness between humans and other mammals" (Baars 2001). The practical implications of these moral issues have been addressed by Harry Bradshaw at the Animal Welfare and Human-Animal Interactions Group, based at the University of Cambridge in England (Bradshaw 1998). He accepts that continuity in morphological characteristics does not necessarily mean continuity of consciousness and feelings, since the whole motivation behind the research is the fact that we do not yet know the precise connection between anatomy, physiology, and consciousness. But, he says, we should not use our lack of certainty as an excuse to assume that animals have either no consciousness or very little.

He points out that there is a trade-off between the costs and benefits to humans and the costs and benefits to animals, depending upon where we place the latter on a continuous scale of increasing consciousness. If we assume that animals have very little consciousness, that offers great benefits to humans, because then we can use them for research, or intensively farm them, or whatever else seems advantageous to us. But should our assumption be wrong and the animals concerned have a high degree of consciousness—including conscious states that involve suffering—then the cost to the animals is high. Conversely, if we assume that animals have a high degree of consciousness—and treat them accordingly—then the benefit to the animals will be great, but so will the cost to humans. We would lose many opportunities for the exploitation of animals, not only in scientific research but also for cheap means of food production, and in other spheres, such as transportation.

Bradshaw says that the time has come to abandon the principle, inherited from medieval Christian ethics and very convenient to human beings, that animals (even those quite close to humans in the evolutionary chain) may be treated as unconscious unless it can be proved otherwise. In its place, he wants to apply the "precautionary
Surgical illustration of a monkey’s brain. Brain experiments on cats and monkeys raise scientific and ethical questions. Is a monkey’s brain similar enough to a human brain to make experimental findings relevant to the study of human consciousness? And if so, doesn’t that make it unethical to perform a surgical procedure on a monkey that would be deemed unethical if performed on a human? (National Library of Medicine)
principle.” This concept was developed and applied to environmental law in the late twentieth century. In that context, it states that where there is a threat of actual or potential irreversible damage, lack of full scientific certainty should not be used as a reason for postponing measures to avoid or minimize that damage. Applying this “better safe than sorry” approach to animal welfare, Bradshaw proposes that researchers and others should always assume that animals have consciousness and treat them accordingly. If the assumption turns out to be right, it will be to the animals’ benefit; in the case that the assumption is wrong, at least there will be no cost to them. Either way, the costs are borne by the humans who make the decisions rather than (as at present) by the animals who have no choice in the matter. I personally see some problems with this extension of the precautionary principle and have argued against Bradshaw’s position because the history of morality in religion shows how such an approach can become puritanical and restrictive (Freeman 1998). But this objection does not detract from the seriousness of the issue he has highlighted.
Searching for the Mechanism of Consciousness

It is one thing to show there is an association (or correlation, to use the preferred term among cognitive scientists) between particular conscious states, such as seeing and imagining objects or creating and interpreting speech, and activity in certain parts of the brain. It is quite another to establish that the brain area in question constitutes the causal mechanism whereby the conscious event is brought about. And it is yet another thing again to say at what organic level consciousness is to be discerned.

On this last point, at one extreme there is anesthesiologist Stuart Hameroff, at the University of Arizona at Tucson, who thinks that consciousness originates in unimaginably small structures known as microtubules, which form part of the "internal scaffolding" that supports the cell membrane of the neuron (Hameroff 1994). At the other, biologist Walter Freeman, at the University of California at Berkeley, says that consciousness is not located in any particular part of the brain, nor in the whole of the brain, nor even in the whole body, but must be understood—as philosopher John Dewey (1859–1952) taught—in the context of the whole of society (Searle and Freeman 1998, 721). In between, there seems to be no area or level of the brain that has not been championed by one expert or another as the likely seat of consciousness.

The very success of brain mapping over recent years has led some cognitive scientists to make ambitious claims regarding the neural correlates—and indeed the causal mechanisms—of consciousness, whereas others have reacted with pessimistic warnings about how little has really been proved. Chris Frith, an investigator who with his team spends much of his time using scanners at the Wellcome Department of Cognitive Neurology in London, claims that no serious brain scientist expects brain imaging to do more than help find some neural correlates of consciousness. He is highly critical of those who indulge in untestable speculation about possible mechanisms of consciousness (Frith 2001). Antti Revonsuo, a researcher into consciousness at the University of Turku in Finland and coeditor of the respected journal Consciousness and Cognition, is another example of the dilemma in which such researchers find themselves. Like Frith, he is himself an experimental researcher who makes use of the imaging techniques we have been describing. He speaks confidently of the power of these methods to reveal many interesting correlations between brain activity and consciousness. Yet he is also fierce in his criticism of any claim that such research has literally "discovered consciousness" and insistent that neuroscientists at present have neither
the technology nor the theoretical framework needed for the empirical discovery of consciousness in the brain (Revonsuo 2001).

Among those whom Frith and Revonsuo have in their sights would seem to be Nobel laureate Francis Crick and his colleague Christof Koch. They took the bold step of suggesting that a precise form of neuronal activity (known as gamma or 40-hertz oscillations) might be—at least in the case of visual awareness—the neural correlate of consciousness itself. Since Crick and Koch do not make any sharp distinction between NCCs and neural causal mechanisms of consciousness, their claim appears to be a stronger one than Frith or Revonsuo would find acceptable on the basis of current evidence (Crick and Koch 1990).

We have arrived at the point where two roads of investigation need to be followed. One is to look at the evidence that researchers such as Crick and Koch have accrued from the use of the brain-mapping methods described in this chapter and to see which brain areas can most confidently be associated with various mental processes. The other is to ask in more detail what would justify going further and equating the neural correlate of a conscious state with the causal mechanism of that conscious state. These matters will be pursued in Chapters 3 and 4, respectively. Then we shall turn in Chapter 5 to some philosophical questions relating to consciousness and the brain, what is usually known as the mind-body problem.
3
From Light to Sight
The great bulk of neuroscientific work has been done on the visual system, and it is not hard to understand why. It commended itself originally as the primary sense for humans and as the simplest sensory mode for which to design experiments. Then, as more and more became known about vision—so that large quantities of evidence accrued, a vast experience in the relevant techniques was built up, and fresh questions kept being raised—research in this field took on a momentum of its own.

Not that the subject's interest or importance has always been obvious. Neuroscientist Francis Crick of the Salk Institute, writing in the 1990s, said he could normally bring dinner-party conversation to an embarrassed halt by confessing to nonscientists his interest in how mammals see things (Crick 1994, 23). Most people have a commonsense idea of what happens when we look at any object, and it all seems pretty straightforward. Why should it engage the interest of a Nobel laureate?

Our standard nonscientific account of what happens when we see something—let's say the pencil lying on my desk—goes something like this: Light from the object enters the lens of the eye, and a miniature inverted image is formed on the retina, just as in a child's picture-book demonstration of how a simple camera works. The image on the retina is then somehow conveyed to the brain, which in turn consciously registers the sight of the pencil. Pushed to give a bit more detail, we might think of it rather like cable TV, in which the visual scene is caught by the camera and translated into an electrical signal, which then travels along the cable before being decoded in the TV set and restored to its visual form on the screen. This description is an example of a data-driven or bottom-up theory of perception, and despite its simplicity, it actually sums up pretty accurately the information-processing model of the mind and brain that was developed in the second half of the twentieth century and has until very recently dominated psychology and cognitive science.
The great bulk of neuroscientific work has been done on the visual system. This engraving of visual activity in the brain and eyes is taken from the first European textbook of physiology, René Descartes's De Homine (1662). An apple is seen by the eyes at lower right, and the resulting image passes along nerve fibers into the visual cortex of the brain, where it will be processed. (George Bernard/Science Photo Library)
According to this standard account, physical information about the outside world (the sensory stimulus) is first received by the sense organs. It takes the form of waves of light in the case of the eye, waves of air pressure in the case of the ear, and so on. It is then "translated" into an electrical or chemical signal that passes up the nervous system from the sense organs to the brain. This translation process is known as transduction. In the case of visual information, transduction is a three-stage process. The visual stimulus brings about an electrical change in light-sensitive receptor cells in the eye's retina, which causes a corresponding electrochemical change in adjacent nerve cells, which in turn sets off an electrical pulse called an "action potential" in a third kind of cell, the ganglion cells. Action potentials are the key to the whole system. They transfer information to the brain by causing more nerve cells there to develop more action potentials. (Creating an action potential is commonly referred to as causing a nerve cell to "fire." See Chapter 2 for more about the working of nerve cells—called neurons—and for a physical description of the brain and its parts.)

As visual information passes from one part of the brain to another, various elements—shape, color, or movement—are processed in different places, with new streams of information coming together at different points. In this way, the information goes through a series of stages in the brain, some of which lead to consciousness.
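To fix the stages in mind, the three-stage transduction just described can be caricatured as a chain of functions, with light in at one end and a firing rate out at the other. Only the stage names and their ordering are taken from the account above; every number and type below is an invented placeholder.

```python
from dataclasses import dataclass

@dataclass
class Light:
    intensity: float   # brightness of the stimulus falling on the retina

def receptor_stage(light: Light) -> float:
    """Stage 1: the stimulus causes an electrical change in the
    light-sensitive receptor cells of the retina."""
    return min(light.intensity, 1.0)   # response saturates

def adjacent_cell_stage(signal: float) -> float:
    """Stage 2: a corresponding electrochemical change in the
    adjacent nerve cells."""
    return 0.8 * signal                # illustrative attenuation

def ganglion_stage(signal: float) -> int:
    """Stage 3: ganglion cells develop action potentials; stronger
    input is rendered as a higher firing rate."""
    return int(signal * 100)           # spikes per unit time

def transduce(light: Light) -> int:
    return ganglion_stage(adjacent_cell_stage(receptor_stage(light)))

print(transduce(Light(intensity=0.2)))   # dim light: low firing rate (16)
print(transduce(Light(intensity=0.9)))   # bright light: high rate (72)
```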
The visual information received by the ganglion cells is mostly transferred to the lateral geniculate nucleus (LGN), part of the structure known as the thalamus and a kind of junction box in the brain. From there, it is passed on to an area at the back of the head known as the primary visual cortex (usually abbreviated as V1), so called because it is the first staging post for visual information in the cortex. The neurons in V1 "fire" to send the information they have received further along the network of nerve cells making up the cortex. At this point there is a fork in the cortical pathway (i.e., the physical route through the cortex) taken by the visual information. Broadly speaking, the lower path processes "what" information—what size, what shape, what color—and works relatively slowly and with attention to fine detail. This has been called perception for recognition. Meanwhile, on the upper path, "where" information—giving the position and movement of the stimulus—is processed faster, though with a corresponding loss of precision. This has been called perception for action.

This twin-track approach to processing within the cortex is not the only example of parallel pathways in the brain. I said just now that most of the information from the eyes went to the visual cortex, but not all of it does. There is a secondary system that bypasses the cortex altogether and takes information by a shortcut straight from the eye's retina to a site at the center of the brain. This so-called subcortical system works faster than either cortical route, and it behaves like an exaggerated version of the upper pathway through the cortex. Its function is to deal rapidly with information from the fringe of the visual field, detecting and locating movement and focusing the attention of the eyes in that direction.

This outline of how the brain processes information is already looking much less simple than our first ideas about it, and there is worse to come. In addition to the divisions between (1) the subcortical and cortical routes and (2) the upper and lower pathways through the cortex, neuroscientists have also detected (3) two kinds of ganglion cells—large and small—specializing in taking different kinds of information from the retina to the visual cortex. As with the other pairings, there is a division of labor between these two ganglion systems. The one with the large cells draws information chiefly from the retina cells known as rods and detects information relating to quick movement and changes of brightness. It is insensitive to details such as color and shape, which are analyzed and transmitted in more leisurely fashion by the smaller ganglion cells that draw their information chiefly from the cone cells of the retina.
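The division of labor between the two cortical streams can likewise be mocked up as two functions run over the same stimulus, one fast and coarse, the other slow and detailed. The latency figures and field names below are invented placeholders, chosen only to encode the contrast drawn in the text.

```python
from dataclasses import dataclass

@dataclass
class Sight:
    position: tuple   # location in the visual field
    moving: bool
    shape: str
    color: str

def where_pathway(s: Sight) -> dict:
    """Upper path: fast and coarse; position and movement only
    ("perception for action")."""
    return {"latency_ms": 50, "position": s.position, "moving": s.moving}

def what_pathway(s: Sight) -> dict:
    """Lower path: slower but detailed; shape and color
    ("perception for recognition")."""
    return {"latency_ms": 120, "shape": s.shape, "color": s.color}

pencil = Sight(position=(2, 1), moving=False, shape="pencil", color="yellow")
print(where_pathway(pencil))   # arrives first: enough to orient or act
print(what_pathway(pencil))    # arrives later: enough to recognize
```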
With all these parallel and to some extent competing processes going on, the question arises: Where does it all come together? In many cases (some people would say in all cases), the answer simply is that it does not. The reason lies deep in our evolutionary past. It seems that the faster, coarser system in each of our paired processes is the older and earlier in evolutionary terms. The cortex was the last part of the brain to develop and is proportionally much larger in humans than in other animals, so the subcortical shortcut system must belong to a more primitive stage of human development, a time when to spot something moving and to act immediately—either to eat it or to avoid being eaten by it—was all that mattered. The appreciation of subtle colors could afford to wait another few million years. And even now, if we catch a glimpse of a runaway truck about to hit us, we move too fast to care about the make or the color of the vehicle! In such circumstances the detailed information from the lower cortical pathway never has time to come into consciousness.

But in everyday life, we do experience a single visual image in which the shape, size, color, position, and motion of an object all come together. Not to mention the sound, smell, feel, and taste that might all belong to that one experience. Working out how that unification is achieved is known within consciousness studies as the binding problem, and the lack of an agreed answer to it is just one of the large questions still hanging over the story of perception as I have been telling it here.

There are, nonetheless, powerful advocates of the information-processing approach among the neuroscientists who actually study the physical makeup of the brain and nervous system. Francis Crick, for example, adopts it wholeheartedly. He emphasizes that we don't actually "see" objects with our eyes (that is, with our light-sensitive retina cells) or even with the primary visual cortex; perhaps, he says, we don't actually see with any of the brain areas normally associated with visual processing. On this line of thinking, we should treat the experience of seeing as the end product of the process, probably to be associated with the point at which the incoming information is applied to the practical business of responding, whether by thought, word, or deed. Exactly when and where in the brain it all comes together—and, indeed, whether it can be said to happen in any specific place or at any precise time at all—are matters that are hotly debated among cognitive scientists. As Crick himself admitted in a 1994 interview, "This is the bit where we have to wave our hands about" (Crick and Clark 1994, 11). These last comments highlight one of the great weaknesses of the information-processing account. It offers an excellent working hypothesis for tracing the route of nonconscious information around the brain, but it cannot explain how or why we
ever have the conscious experience of seeing. I return to this key question in the next chapter.
Neuronal Hierarchies and the "Grandmother" Cell

The proposed visual-processing system outlined above was put together out of evidence from many painstaking studies over many years. What follows is one frequently told story of how a combination of hard slog and sheer good luck can bring significant and rewarding advances in research. One of the first questions about perception that many scientists were asking was how the nerve cells that carried, analyzed, and brought together signals from the eyes were able to encode information such as position. For instance, if something was positioned at "two o'clock" in the left eye's visual field, would one particular neuron, or perhaps one particular trail of them, always fire in response to a stimulus at this location?

One day in 1958 at the Harvard Medical School, David Hubel and Torsten Wiesel were using the single-cell recording method to investigate the brain's response to visual stimulation. They were engaged in work that would subsequently bring them a share in a Nobel Prize, but at this stage things had stalled, and results were disappointingly negative. They were recording the electrical activity—or more often the lack of it—in individual neurons in the visual cortex of a cat. The animal was anesthetized and its eye held open in a fixed position while spots of light were shone on different parts of a screen in front of it. If the particular neuron being "questioned" responded to the light by developing an action potential and firing, the position of the stimulus was said to be within the receptive field of that cell. The apparatus was set up so that when a response occurred, the neuron's electrical pulse would register on the recording equipment and also cause a bleep on a loudspeaker in the laboratory.

When this apparatus had been used to test the receptive field of individual ganglion cells, the results showed that each cell was sensitive to just one particular location on the screen. If the light spot was elsewhere, that cell did not fire. This result was interpreted as follows: it demonstrated that information about the location of the light spot on the screen was received by the cells in the eye's retina and then reliably conveyed at least as far as the ganglion cells. The same was found to be true of cells in the LGN, whose firing was in turn triggered by ganglion cells. These cells fired only in response to light at a particular point on the screen, which indicated that they had received and could pass on accurate information about the location of the light stimulus in the visual field. Hubel and Wiesel wanted to know whether the neurons in the primary visual cortex (V1) would behave in the same way, proving that they also held this information about the position of the light spot. Hours of tests had yielded no positive results, and other researchers engaged in similar work elsewhere had no better success. The neurons in V1 refused to react to the spots of light, no matter where they appeared on the screen.

Then, on the day in question, came one of those lucky breaks—like Alexander Fleming's "accidental" discovery of penicillin—that from time to time reward the dedicated scientist's patience and persistence. To appreciate what happened, we need to know a little more about the experimental setup. The spots of light used as stimuli were produced by a projector with carefully positioned dark slides, each slide having a small clear patch that would let through a beam of light. The beam produced by each correctly positioned slide was aimed at a precisely known point on the screen, a different location for each slide. On this particular day, while the recording equipment was switched on, one of the slides was caught briefly partway into the machine, its dark edge cutting right across the projector's beam and causing a sharp straight line to show on the screen. When this happened, in David Hubel's own words, "the cell went off like a machine gun." The shock to the experimenters could hardly have been greater if it had indeed been a machine gun, as the rapid staccato reports rattled out from the loudspeaker attached to the apparatus.

Further investigation confirmed that neurons in V1 responded not to spots of light but to lines. More than that, each neuron was sensitive only to lines in a particular location on the screen and at a particular angle, and a moving line evoked a far greater response than a stationary one. This finding was totally unexpected and challenged both of the views about the brain's workings then prevailing among neuroscientists. On the one hand, there were those who expected the cells in the visual cortex to behave rather like those of the retina and the ganglion cells, which preserved a kind of map of the external visual field. Had they been correct, then certain cells corresponding with certain points on the screen would have fired in response to any light at that given place. It should have made no difference whether the light was round or square or part of a line, moving or still. Yet Hubel and Wiesel had shown that these things did make a difference. The neurons in V1 had not responded to spots of light but had responded to lines and to the angle of those lines. On the other hand, there were those who rejected the whole idea of individual neurons mapping the external world.
They were naturally dismayed when Hubel and Wiesel showed that not only were individual neurons in V1 sensitive to precise external locations—and could therefore map them—but they reacted only to one small class of stimuli at the location to which they were sensitive. Not only did the cells seem able to produce a kind of map of the environment, but it appeared to be quite a detailed map (Hubel 1988, discussed in McCrone 1999).

The following possible explanation of their findings was put forward by Hubel and Wiesel themselves. On the route from the retina via the ganglion cells to the LGN, the information from the eye is passed from cell to cell on a one-to-one basis. Each time an individual cell fires, it provides a signal—which can be thought of as a message that a visual stimulus occurred at a particular location—that causes another individual cell to fire, thereby passing its single piece of information on up the line. That is why the experiment with the cat and the single spots of light on the screen worked for ganglion cells and cells in the LGN. But at the next stage, when the message passes from the LGN to the visual cortex, there is a significant change. A cell in this part of the cortex does not fire if it receives a signal from only a single cell. That is why the researchers failed to get any response from their standard experiment when they tested cells in V1 using single spots of light. The evidence suggested that a neuron in the primary visual cortex will fire only if (1) it receives several signals at the same time, (2) those signals come from LGN cells whose "location information" adds up to a straight line at a particular place on the screen, and (3) that line is at a particular angle. Here is a clue to how the brain encodes complex images. Each cortical cell will respond to a particular location and a particular angle, just as each retina cell responded to a spot of light in a particular position. These cells were named simple cortical cells by Hubel and Wiesel, to distinguish them from a related and more numerous kind of neuron they also discovered in V1 and called a complex cortical cell. Simple and complex cortical cells were similar in that both fired only in response to information about straight lines at a particular angle, but the complex cells were sensitive over a larger receptive field.
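The three firing conditions just listed are easy to mimic in a toy model. The sketch below is my illustration, not Hubel and Wiesel's actual circuitry: each "LGN unit" simply reports light at one location on a small grid, and a "simple cell" fires only when enough co-active units line up at its preferred angle.

    import numpy as np

    # Toy retina: a 7x7 grid of "LGN units," each signaling light at one location.
    def lgn_response(image):
        return image > 0  # a unit is active wherever there is light

    # A "simple cell" tuned to a line through the grid center at a given angle.
    def simple_cell_fires(image, angle_deg, threshold=5):
        active = lgn_response(image)
        ys, xs = np.mgrid[-3:4, -3:4]
        theta = np.deg2rad(angle_deg)
        # Template: grid points lying (approximately) on the preferred line.
        on_line = np.abs(xs * np.sin(theta) - ys * np.cos(theta)) < 0.5
        # Fire only if enough aligned LGN units are active at the same time.
        return np.sum(active & on_line) >= threshold

    spot = np.zeros((7, 7)); spot[3, 3] = 1   # a single spot of light
    bar  = np.zeros((7, 7)); bar[3, :] = 1    # a horizontal line
    print(simple_cell_fires(spot, 0))    # False: one signal is not enough
    print(simple_cell_fires(bar, 0))     # True: aligned signals pass threshold
    print(simple_cell_fires(bar, 90))    # False: wrong orientation

A single spot activates at most one aligned unit and falls below threshold, which is exactly why the original spot stimuli met with silence in V1.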
As a general rule, science advances by an alternating combination of experiment and theory. The experiments using single-cell recordings had been undertaken to investigate receptive fields and to test the theory that individual neurons reacted specifically to a visual stimulus at a particular location. They had shown that that was indeed the case for the ganglion and LGN cells. But Hubel and Wiesel's results for cells in V1 required a further development of the theory to account for the behavior of the neurons in that area of cortex. What they proposed was a hierarchy of neuronal information processing, such that the further along the visual processing pathway signals move, the more detailed is the information extracted by the individual neurons. If the ganglion and LGN cells detect spots of light and the V1 cells react to lines, then—so the argument went—further stages should be found to result in the extraction of more and more detailed features of the visual image, with fewer and fewer neurons responding as the complexity increases. Eventually, at the end of the process, there may be just a single neuron that will fire when—and only when—it receives the information specific to one particular image, the face of your grandmother, for example.

The term "grandmother cell" was coined at the end of 1969 by Jerry Lettvin at the Massachusetts Institute of Technology, and in time it was picked up as a kind of shorthand for this particular way of interpreting the visual system of the brain. (For the history of the term "grandmother cell," see Rose 1996 and references therein.) Its use was certainly on public record by 1973, when Oxford neuroscientist Colin Blakemore—who was skeptical of the idea—referred to it in an article in New Scientist magazine in the following terms: "Surely animals cannot have individual detector cells for every conceivable object they can recognize? This debate has become known as the question of the 'grandmother cell.' Do you really have a certain nerve cell for recognizing the concatenation of features representing your grandmother?" (Blakemore 1973, 675)

Semir Zeki of University College London, another distinguished researcher into vision and a long-time critic of David Hubel, also pounced on the idea of a grandmother cell as ludicrous and as evidence of the absurdity of the hierarchical model. Just imagine: thousands of neurons die every day in the human brain, and if that one special cell should just happen to be lost, I would never see my grandmother again! It should be noted that this is not a necessary consequence of the hierarchical theory of perception. There might be a number of cells capable of playing the role of the grandmother cell, of which only one came into use on any given occasion. If this were so, then the loss of any one particular neuron would not be so catastrophic as Zeki makes out. Zeki himself holds that there are many small areas of cortex working in parallel, each devoted to a different aspect of vision, rather than a serially organized system (Zeki 1993).

As it turns out, subsequent research has failed to find a grandmother cell, or any other single neuron that fires specifically in response to one particular object, so to that extent Blakemore's skepticism and Zeki's scorn may be justified. However, a study done in Japan on the visual cortex in monkeys has found single neurons that selectively react to fairly complex groups of features bearing some resemblance to a face. And research into the brain damage sustained by patients suffering from prosopagnosia (the inability to recognize faces) does point to there being certain regions of the cortex that are essential to face recognition. But that is a long way from supporting the idea of an individual grandmother cell. Another serious objection to the notion of single neurons relating to single images is the sheer number of cells that would be necessary. Not only would we need one cell for each object we might see in the world, we would need one for every possible position from which we might view each object, since every orientation an object takes when rotated in space presents a slightly different visual image. Neurophysiologist Wolf Singer, director of the Max Planck Institute for Brain Research in Frankfurt, reckons that it would take an infinite number of neurons to account for this variety of representations in space.
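A back-of-envelope calculation shows how fast the numbers run away. Every figure below is invented purely for illustration; the point, as Singer argues, is that no realistic neuron budget survives the multiplication.

    # One dedicated cell per object per view: illustrative numbers only.
    objects    = 30_000   # distinct things a person can recognize
    viewpoints = 1_000    # orientations each object can present
    lightings  = 100      # lighting and contrast conditions per view

    cells_needed = objects * viewpoints * lightings
    print(f"{cells_needed:,}")   # 3,000,000,000 dedicated cells
    # Three billion cells before allowing for distance, partial occlusion,
    # or spares, while the whole brain has only on the order of 10**11
    # neurons to cover every task, not just vision.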
So if the answer is not to be found in the grandmother cell, what alternative solution does Singer have to offer? He is disarmingly frank about the state in which neuroscientists find themselves. In a recent lecture he acknowledged that much of their experimental work is still based on the assumption of a bottom-up, data-driven, hierarchical information-processing model of visual awareness, although they know in their hearts that this approach cannot supply all the answers. "We are still following this hypothesis," he told his audience, "when we design our experiments. We stimulate and we follow the path of the activity from the periphery into the brain, to its higher centers in the brain. We are still pursuing this search. Climbing up into the hierarchy, although we know—or we could know by simple reasoning—that this search will not be successful." He accepts that the world appears to us as a coherent whole and that it is only natural to assume there is some convergence center in the brain where all the information taken in by our senses is brought together and interpreted and acted upon. "Unfortunately," he tells us, "neurobiology contradicts this" (Singer 1998).

His evidence for this statement comes from the very experiments that have tried to find this rainbow's end, where it all comes together, and have singularly failed to come up with any pot of gold. These experiments show that the effects of a visual stimulus can be traced from the eyes to the LGN, from there to the primary visual cortex, and then via the upper and lower routes to areas of the visual cortex where features such as position, movement, shape, and color are discriminated. But if we look to see where the next neurons in the chain are situated, it is in the region known as the motor cortex. These neurons have nothing directly to do with vision. They are sending signals down the line to the muscles that initiate movement or speech or some other active response to what has been seen. The brain's signals have moved straight from the widely distributed visual neurons to the equally diverse motor neurons, with no single assembly point for the scattered visual information to come together first.

In the light of this discovery, a momentum is building up across scholarly disciplines—philosophy, neurobiology, biophysics, psychology—for the view that we have been overly obsessed with the experience of perception and not sufficiently concerned with its purpose. As noted above, those parts of the brain's visual system that evolved earliest are the ones that handle the fast-track aspect of perception. They don't linger over beautiful scenes and intricate details but deal quickly with warnings of sudden change in the local environment. Maybe if we want to understand the visual system, we have to think of it differently. Perhaps its primary concern is not the appreciation of a rich visual experience but sheer survival. If that is true, then it is not designed for contemplation; it is geared for action.
Seeing for Acting

A traditional view of brain function, which we have been assuming so far, regards it as a passive system that needs to be stimulated to become active. In the typical case, some external signal—received and passed on by the sensory organs such as the eyes or the taste buds—passes through a series of stages, at each of which the information gleaned by the senses is processed, analyzed, and finally combined in some way to generate the sight or taste or whatever else is represented in conscious experience. Now, as Singer told his conference audience, there is a rival hypothesis, one that assigns to the brain a much more active role and envisages a far more complex set of interactions than the serially organized information processing envisaged hitherto.

The outline description of the visual system given at the beginning of this chapter already drew attention to one serious complication for the serial hierarchical theory: the suggestion that there are three pairs of parallel visual pathways. One of these pairs is believed to consist of two routes, a lower and a higher, by which visual information passes from the primary visual cortex to other parts of the brain. Some of the evidence pointing to this hypothesis of two cortical routes can also be used to support the more action-centered view of the visual system now being considered.

The existence of the divided visual pathway was proposed by Leslie Ungerleider and Mortimer Mishkin at the U.S. National Institute of Mental Health in 1982 (Ungerleider and Mishkin 1982). Their evidence came chiefly from studies carried out with macaque monkeys, which showed that neurons in the upper region known as the posterior parietal (PP) cortex, at the top of the head, are generally insensitive to color variation, whereas cells in the inferior temporal (IT) area, the lowest section of cortex, nearer the ears, do respond to color. It was also found that the ability to choose between a familiar and an unfamiliar object—a task for which the animals had been trained and in which they were proficient—was unaffected by damage to PP but disrupted by damage to IT. An even more noticeable difference between the lower (or ventral) and upper (or dorsal) streams of information occurs in the case of motion detection. There is a strong reaction in the upper region of cortex to signals indicating movement, and unlike the motion-sensitive cells in the primary visual cortex, where a given cell reacts only to movement picked up at a particular location on the retina, these neurons are activated by almost any movement detected over large areas of the retina. Neurons in IT, by contrast, do not show any sensitivity to moving visual stimuli. It is this kind of evidence that has given rise to the idea of a what/where split for the information extracted by the lower and upper streams, respectively: object vision below, spatial vision above.

Ever since the reports of Broca and Wernicke in the mid-nineteenth century, evidence from patients suffering from various cognitive disorders has played a significant role in developing and testing theories of brain function. A case was reported by David Milner and Melvyn Goodale in 1991 that, along with other new data, led them to modify the what/where distinction. In 1995 they published The Visual Brain in Action, a title that puts their theory in a nutshell. Instead of treating the ventral and dorsal streams as two aspects of the brain's perceptual system, they say the upper pathway is not concerned at all with the perception of space or movement in themselves but only with the guidance of action. They have a number of strands of evidence for this claim, not originally available to Ungerleider and Mishkin, and the most interesting of them is the behavior of one of their patients, known by her initials, DF. This patient suffered accidental carbon monoxide poisoning, resulting in permanent damage to her brain, including part of the visual cortex. In consequence, she developed a condition known as agnosia, which prevents the subject from being able to recognize and describe geometric shapes.
For instance, she was unable to say whether a slot in a board in front of her was oriented vertically or horizontally. This loss of ability seemed to be associated with the disruption of the ventral stream as a result of her injury. However, when DF was asked to place her hand or a hand-held card through the slot, she accurately positioned her hand to match the opening. Moreover, video recordings showed that she began to make the correct alignment from the moment her hand started moving, so she must have been relying on visual cues and was not using the feel of the slot to help orient the card or hand.

This raises a fascinating question. She could not have achieved the correct angle unless she could see the aperture and also interpret what she saw. So why could she not describe it, since her ordinary speech and intelligence were not impaired in any way? This question is not easy to answer, partly because DF's permanent damage affects not only the visual cortex but a large area of her brain, making it hard to tell exactly which area of damage is causing any particular effect. However, at least part of the answer seems to be that her upper cortical pathway is relatively intact and the neurons there are indeed receiving the information from the eyes about the slot. But they are solely concerned with preparing the hand for action, which does not require the ability either to consciously see or to describe the contents of the information. It is enough to be able to act on it. Further experimental work with monkeys that have damage to either the upper or lower sections of cortex along the alternative pathways supports the contention that the dorsal visual stream is concerned with guiding action rather than perceiving, whereas the ventral stream is involved in perception for recognition.

Agnosia could be described as an example of perception without consciousness, and it illustrates the fact that although the physiology of perception is related in some way to our consciousness of the outside world, it is very far from being the whole story. It also begins to raise the question of why we have consciousness at all. Most of us would say that we obviously need to be conscious in order—for example—to be able to see what we are doing. Yet a patient like DF appears to make nonsense of this claim. She can do the orientation task perfectly well without being conscious at all of the relevant information about the angle of the slot. Similar problems are raised by another example of nonconscious perception that has attracted a lot of interest in recent decades: the phenomenon known as blindsight.

Blindsight is the name given to a form of blindness that is caused by damage to the brain rather than to the eye. A typical blindsighted patient is Graham Young.
(Like most such patients who volunteer to help in research work, he is referred to in the scientific literature by his initials only, GY, but his identity was revealed in a BBC TV program. See Greenfield 2000.) Following a road accident as a boy, he suffered brain damage and lost all vision on his right side. Note that it is not his right eye that is damaged: both his eyes can see perfectly well to the left; neither can see anything to the right. But being able to see on one side and not the other is not what gives rise to the term "blindsight." That comes from a much more curious finding.

Forget about the left side, on which Graham can see normally, and imagine him concentrating upon his right field of vision, his blind side. In a series of experiments, he was asked to "look" at an area of screen that was entirely in his right field of vision. He could not "see" it at all, but he was told by the experimenter that the screen would display a light. Graham was asked various things about this light—was it on or off, was it moving or still, was it moving vertically or horizontally, and so on—and since he could not see it, he was told to guess the answers. If you or I were to run hundreds of tests like this while blindfolded, just guessing at either/or questions, then on average we should expect to guess right 50 percent of the time. But Graham and other blindsight subjects did much better than that. They "guessed" correctly on far more than the 50 percent of occasions that we would expect from chance. In some tests, the accuracy approached 100 percent.

These results convinced researchers like psychologist Larry Weiskrantz, of Oxford University, that there must be information from the screen that was being picked up by the eyes and transferred to the brain, despite the insistence of the patients that they could see nothing and were only guessing. They must be perceiving something, even though they were not consciously aware of seeing it. They were "blind" in the sense that they honestly could not see, but they were "sighted" inasmuch as they were in possession of information that could only have come through the eyes. Hence Weiskrantz's coining of the term "blindsight" to describe their condition (Weiskrantz 1986). The existence of nonconscious perception in patients with selective damage to certain parts of the brain adds to the evidence that there are multiple routes by which information passes from the sense organs to various parts of the brain.
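How far above chance must such scores be before "lucky guessing" stops being credible? A short calculation settles it; the score used below is invented for illustration, not taken from any actual study.

    from math import comb

    def p_at_least(k, n, p=0.5):
        """Probability of k or more correct answers in n fair either/or guesses."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    # Say a blindsight subject scores 85 out of 100 on either/or questions:
    print(p_at_least(85, 100))   # roughly 10**-13, effectively impossible by luck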
A Grand Illusion?

More evidence that something much stranger is going on, even in normal vision, than the standard information-processing account would allow is provided by the phenomenon known as "change blindness." This term refers to the quite amazing ability of people with ordinary, healthy vision simply not to notice changes that take place under their very noses. The classic experimental work was done by psychologists Daniel Simons at Harvard University and Daniel Levin at Kent State University in Ohio. In the most reported version of their test, a researcher's assistant on a college campus comes up to a stranger, who is the unsuspecting subject of the experiment, and asks for directions. While these are being given, two men carrying a door pass, by prior arrangement, between the speakers. Walking on the side of the door hidden from the subject is a second assistant, who takes the place of the original direction-seeker; the latter walks off, also hidden by the door. In 50 percent of cases, the person giving the directions carried on talking after the door had passed and, when questioned later, said he had not been aware of the substitution, despite the fact that the two assistants looked different, sounded different, and were dressed differently. Seen side by side, they could never be mistaken for each other, but—in the context described—the switch was missed by the experimental subjects as often as it was noticed (Simons and Levin 1998).

In another test, Simons shows a short video clip of a basketball game. He gives the audience the simple task of counting the number of passes made by one of the teams. While they are concentrating on this task, something absurd happens: a gorilla wanders in and out among the players for five seconds out of a minute or so of film. Yet again, in a typical case, nearly half those watching will fail to notice the intruder. When the clip is run again, with instructions to look out for the unexpected participant in the game, everyone sees him. The crucial factor here seems to be the focus of the viewer's attention, and the results appear to confirm a long-held opinion among many psychologists that attention holds the key to which cognitive brain processes become conscious and which remain "in the dark." This variant on the phenomenon of change blindness has consequently been given the name "inattentional blindness."

In a third variant of this type of test, subjects are shown pairs of still pictures, with a brief interval—about long enough to blink—between each image. In each pair, the second version of the picture has a major component of the scene left out, or some other significant change has been made. I have myself been involved in a demonstration of this test during a lecture by Kevin O'Regan. He began his academic career as a theoretical physicist at Cambridge University in England, but his interest soon turned to experimental psychology. At the French National Center for Scientific Research in Paris, where he is now director of the experimental psychology laboratory, he has been one of the pioneers in this field.
At his public demonstration at the "Toward a Science of Consciousness" conference in Tucson in April 2000, I was among the large proportion of people who failed to spot the difference between the two pictures, shown in quick succession, even though we had been specifically asked to watch out for some change. In one example, there was a group of people in the foreground with a mountain behind them. I simply failed to notice the disappearance of the mountain. Even when the projectionist flipped back and forth between the two images, it took quite a time to spot the difference. In a more worrying example, the picture was the view from the driver's seat of a moving automobile. In one case, there was a white line painted clearly down the center of the road ahead; in the other it was missing. I never noticed when it came and went. Would I have noticed if I had really been driving and if it had been another vehicle, or a pedestrian, that was suddenly there in the picture? Inattentional blindness is not a researcher's game. It has profound practical, legal, and ethical implications. It also challenges all our ideas about how and why we see what we do.

It is clear from these examples that human beings are aware of much less than we think we are. Our impression is that we have a rich, seamless, and comprehensive visual experience, in which the local environment is represented in accurate and uniform detail, except perhaps at the very limit of our visual field. But if that were indeed the case, how could I fail to see that a mountain had disappeared from a picture? How could the viewers of the basketball passes not notice the presence of the gorilla among the players? The evidence of phenomena like change blindness suggests that our impression of what we see is mistaken. We do not have a complete picture of the scene before our eyes, but only the few details that attract our attention at a particular moment. The supposition that our belief about what we see is mistaken has been dubbed by philosopher Alva Noë at the University of California at Santa Cruz the "grand illusion" hypothesis. The precise nature of the alleged illusion—and whether it might not itself be an illusion, as suggested by Noë's fellow philosopher Jonathan Cohen along the coast at the University of California at San Diego—has become a veritable industry among consciousness researchers. In the present context, our interest is in a recent proposal of Noë and O'Regan that draws support from inattentional and change blindness for an action-centered approach to perception (Noë et al. 2000; Noë 2002; Cohen 2002; O'Regan 1992; O'Regan and Noë 2001).

Back in 1992, Kevin O'Regan fell in with the general consensus that we experience a uniformly rich and detailed visual world, and he was among the first to say that this richness is an illusion. Ten years later, he has, in collaboration with Alva Noë, changed his mind. It is not that they deny that having a rich uniform visual experience would be an illusion; they deny that human beings have the rich uniform visual experience in the first place. If we look straight ahead and focus on something, the amount of our total visual field that we have detailed experience of is very small. Reading this book, for instance, we need to move our eyes back and forth (make "saccades," to use the technical term) to be able to see in sharp detail the words at either end of the line. If we stare at just one word, we might just be able to focus simultaneously on one word on either side of it, but no more, and then only if it is not a very long one. For most practical purposes, we can say we can see only one word. But we are also visually aware of the whole page, of the whole book, indeed of the whole room in which we sit. So in one sense we can see just one word, and in another sense we can see the whole scene. The feeling that we have simultaneous uniform information about the entire visual field comes from the fact that we can move our eyes and, if need be, our whole head very quickly to focus wherever we wish.

The role of movement in the description just given leads Noë and O'Regan to call their theory the sensorimotor approach to perception, and it depends on a concept they call "perceptual presence." In the example I just gave of reading the book, we experience the whole page as present because we experience ourselves as having access to the whole page, should we choose to, by shifting our focus and attention to any part of it. What we do not experience is comprehensive, uniform, and detailed awareness of the whole page all at once, at this moment. Another example used by Noë is the sight of a cat sitting motionless behind a picket fence. Strictly speaking, we can see only those parts of the cat that show through the gaps, but our perceptual experience is of a whole cat that is present but partly hidden by the slats of the fence. We are not, insists Noë, just combining the visible bits of the animal with our memory of other cats to imagine or deduce the presence of an entire cat behind the fence. We really are perceptually experiencing the presence of the whole animal. As with the page of the book, we experience the whole cat as present because we experience ourselves as having access to the whole cat by the simple expedient of shifting our eyes or head or body to attend to the bits of the animal currently out of view.

This understanding of perception is in stark contrast to the usual approaches, which tend to be variations on the idea that the brain creates a picture or model or—to use the most general term—a representation of the external environment.
Representational theories of mind and perception go hand in hand with the information-processing model of brain activity. They motivate the search for a place or a mechanism in the brain that brings together all the diverse bits of sensory stimulation into a single coherent representation of the environment, which can then either trigger some automatic response or else be brought into consciousness to enable a deliberate action to be formulated and carried out. As indicated above, the mood in consciousness studies is swinging away from that approach in favor of a more action-oriented understanding, which in itself can be seen as a return to an earlier but largely neglected interest in direct perception.

A leading exponent of direct perception was J. J. Gibson (1904–1979), who for many years headed his own department at Cornell University. He felt that other experimental psychologists, studying perception in carefully controlled and artificial laboratory conditions, were in danger of getting distorted and artificial results for their pains. Gibson was much more interested in how normal people, going about their daily routines, actually saw their environment. For instance, in his best-known piece of work, he studied how pilots—in the days when flying was much less automated than nowadays—used their eyes to judge an aircraft's speed and position during takeoff and landing. When one is flying in a straight line, the point toward which the plane is moving appears stationary, and everything else appears to be moving away from it. The whole of the pilot's visual field will be constantly changing, forming what Gibson called an "optic flow pattern." It falls on the eye, together with other visual clues, to create a structured pattern of light named the "optic array." This pattern provides a vast amount of visual information that, according to Gibson, is both unambiguous and directly accessible to the pilot, enabling an accurate and instant judgment to be made concerning altitude, speed, direction, and so on. Gibson compared the mind's automatic access to the information supplied by the optic array with a radio picking up a broadcast signal and making it instantly audible as speech or music. He called this process "resonance." Its most important feature was that he understood it in a holistic way, not as a series of steps creating an internal picture but as a direct, integrated perception of the environment (Gibson 1950).
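The geometry behind Gibson's observation about flight is simple to sketch. The toy flow field below is my illustration, with invented magnitudes: every image point streams radially away from the heading point, faster the farther out it lies, while the heading point itself stays put.

    import numpy as np

    # Toy optic flow for straight-line flight, with the heading point at
    # the center of a 5x5 image grid.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
    flow_x, flow_y = 0.1 * xs, 0.1 * ys   # speed grows with distance from center

    print(flow_x[2, 2], flow_y[2, 2])     # 0.0 0.0: the aim point appears fixed
    print(flow_x[2, 4], flow_y[2, 4])     # points near the edge stream fastest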
I ought not to give the impression that all researchers are carried along on the tide of sensorimotor approaches to perception. Despite the striking experimental results mentioned, a wealth of other empirical evidence fails to support Gibsonian ideas, and many neuroscientists remain highly skeptical. Francis Crick, for instance, is scathing about present-day Gibsonians and "their guru," as he calls Gibson (Crick 1994, 75). Nonetheless, many of the current developments in our understanding of perception do hark back to Gibson's insights, though without embracing his entire scheme. His heirs in what is sometimes called the ecological approach to cognition include the psychologist Eleanor Rosch and the neurobiologists Christine Skarda and Walter Freeman, all at the University of California at Berkeley.

The most important characteristic of Gibson's theory was his belief that the integrated perception of the environment, which resulted from the resonance process, included an element of interpretation. The optic array was not simply a pattern but a meaningful pattern; it not only provided bare information but also offered possibilities for action on the basis of that information. So here already was that proposed link between perceiving and acting that we have seen developing lately. Gibson called these possibilities for action "affordances" because they afforded opportunities of one kind or another. A chair, for example, afforded sitting; an elevator afforded ascending and descending; and so on. An affordance, in Gibson's understanding, is neither an objective fact about the environment nor a subjective idea in the mind. It cuts right across the old objective-subjective divide and can only be rightly understood as "both physical and psychical, yet neither," to use Gibson's own words. Thus a small hole of the right size will afford a safe haven for a mouse, and that is an objective fact. But it only becomes a mouse hole if a mouse perceives it as such and acts upon that perception. For a cat, the same hole—the same objective fact—affords a hunting opportunity, but again it needs to be seen as such and acted upon to make it a reality. Perception of the world as the perception of affordances is therefore a matter of recognizing environmental features as inviting particular behavioral actions. It is the implied interactive process between creature and environment that attracts the label "ecological" to this approach to psychology and makes Gibson a forerunner of those scholars who today emphasize the importance of treating the mind as embodied and embedded in the world (see Gibson 1979; Chapter 7, this volume).
4
The Conscious Brain
Researchers into consciousness are interested in the structure and functioning of the brain because they believe—and have produced much evidence to prove—that there is a significant connection between these biological facts and the conscious states experienced by the mind. The physical state of the brain cells (neurons) associated with a particular conscious mental state or event is known as its neural correlate of consciousness (NCC). The term "physical state" is being used at this stage in the broadest sense and may signify anything from a single cell (or even a subsection of it) to a whole sequence or network of billions of cells in many areas of the brain and even beyond it. Similarly, the phrase "conscious mental state or event" may cover anything from a quite general condition, such as being awake rather than in a dreamless sleep, to some very specific thought or experience, such as seeing the vivid red of a poppy or recalling my grandmother's smile.

The word "correlate" is an ambiguous one, which has been the cause of much misunderstanding and disagreement in the world of consciousness studies. The basic meaning is not disputed. Two items are correlated if, when they change, a given variation in one always corresponds to a matching alteration in the other. For instance, if we measure and weigh a set of solid steel ball bearings, we will invariably find that the larger the diameter of the ball, the greater its weight. Thus if we know the relative sizes of two of the balls, we also know their relative weights, even if we have not weighed them. The weight is a correlate of the size and vice versa. Another example of two correlates is provided by thunder and lightning, which are the audible and visible effects, respectively, of an electrical discharge in the atmosphere. The connection in this case is sometimes masked by the fact that light travels much faster than sound, so unless the thunderclap is right overhead, it is always heard a measurable time after the lightning flash is seen. This in itself gives rise to a secondary correlate, between the time that elapses between the flash and the bang and the distance of the event from the observer. By counting the seconds between seeing the lightning and hearing the thunder, it is quite easy to calculate how many miles away the storm is.
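The familiar rule of thumb behind that calculation can be sketched in a couple of lines: light arrives almost instantly, while sound covers only about a fifth of a mile per second.

    # Distance from the flash-to-bang delay. Sound travels at about
    # 343 m/s, i.e., roughly 0.21 miles per second.
    def storm_distance_miles(seconds_from_flash_to_bang):
        return seconds_from_flash_to_bang * 0.21   # close to seconds / 5

    print(storm_distance_miles(10))   # a 10-second gap puts the storm ~2 miles away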
In the examples given so far, the two correlated items are also causally connected. Now consider an instance in which this is not so. There used to be a piece of folksy health advice that said, "Don't eat pork unless there's an 'r' in the month." The theory underlying this saying is, first, that it is unwise to eat pork in hot weather because of the danger of food poisoning and, second, that the hot months of the year (May–August) have no "r" in their names, whereas the months in the remaining cooler part of the year are all spelled with an "r." There probably is both a correlation and a causal connection between hot weather and pork's being unfit to eat. But there is no causal connection between the spelling of the month and the hot weather. That correlation is purely fortuitous, as is easily demonstrated by the fact that it does not apply in the Southern Hemisphere.

In consciousness research, the jump is often made from observing a correlation between a given state of the brain and a corresponding conscious experience to assuming a causal connection between the two. On other occasions, it is unclear whether a distinction is being made between an observed correlation and a proposed causal mechanism. The confusion and ambiguity arise in part because of a difference in emphasis between philosophers and scientists over the question of causality. In science, causality is essentially a practical matter that is established by experimental manipulation. If we can set conditions so that A is followed by B and we can eliminate other plausible alternatives, then we can conclude that A causes B. This is standard procedure in physics, chemistry, and biology, and also in everyday common sense. But philosophers tend to be much more reluctant than scientists to admit that other plausible alternatives have really been eliminated, which is frustrating for the scientists. Cognitive psychologist Bernard Baars, for instance, makes no bones about speaking of the neural basis of consciousness (personal communication). And other scientists would doubtless smile in agreement when he hints—only half humorously, I think—that the ambiguity surrounding "correlate" is deliberately fostered to keep philosophers in business!
What Counts as a Neural Correlate of Consciousness?

Since the term "neural correlate of consciousness" is currently used with a broad spectrum of meanings, researchers' preconceived theories will influence their attitudes to the practical task of searching for the NCC. Güven Güzeldere, for instance, a professor of philosophy at Duke University and coeditor of a valuable collection of articles on all aspects of consciousness studies, has argued that "there is no neural correlate of consciousness." But that is because he has chosen to think of the NCC as some brain mechanism that is solely dedicated to bringing about consciousness, and he considers it unlikely that such a system exists (Güzeldere 1999). David Chalmers, who is also a philosopher and has recently been appointed director of the Center for Consciousness Studies at the University of Arizona at Tucson, is in full agreement with Güzeldere on this last point. He does not draw the same negative conclusion, however, because he does not preclude the NCC from having other functions within the brain in addition to those connected with consciousness. He does not even think in terms of a single NCC, but rather envisages there being many different NCCs associated with different aspects of consciousness (Chalmers 2000).

Chalmers accepts that in our present state of ignorance, there are bound to be many alternative working definitions and concepts of what constitutes an NCC. He is not saying that his own approach to the topic is the only legitimate one, but he does make the point that the most useful kind of definition will be one that makes coherent theoretical sense in its own right and also provides a worthwhile and practical target for experimental investigation. Practicing what he preaches, Chalmers defines an NCC as a system in the brain whose state "directly" correlates with a state of consciousness. He is here trying to steer a middle course between too loose and too ambitious a definition. A useful definition needs to be tight enough to exclude the label NCC from being attached to any physical state that is associated with a mental state, perhaps by nothing more than sheer coincidence. Yet a working definition also needs to be open enough not to exclude at the outset any reasonable hypotheses that deserve being tested experimentally.

An example of a superficially attractive working definition, but one that most researchers would regard as being drawn too tightly, might be this: a neural brain state N can be regarded as the neural correlate of the conscious mental state C if and only if the physical occurrence of N is both necessary and sufficient for the mental occurrence of C. This may turn out in the end to be a correct definition of the NCC, but it is overly restrictive as a working hypothesis because it rules out in advance the investigation of certain other possibilities. For example, it may be the case that a particular conscious state C1 is found to occur in the presence of either of two different physical states, N1 and N2. In this case, neither N1 nor N2 would be necessary for C1, since either one would be sufficient for C1 to occur. A situation like this, in which a single mental state matches two different physical states, is known by philosophers of mind as a case of "multiple realizability." As we shall see in the next chapter, many find this possibility quite feasible.
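The logic of the objection can be put compactly. The notation below is mine, not Chalmers's; read the arrow as "is sufficient for."

    % The "too tight" definition: N counts as the NCC of C only if
    % N is both sufficient and necessary for C.
    (N \Rightarrow C) \;\land\; (C \Rightarrow N)

    % Multiple realizability: two distinct states each suffice for C_1.
    (N_1 \Rightarrow C_1) \;\land\; (N_2 \Rightarrow C_1)
    % C_1 can then occur via N_2 alone, so N_1 is not necessary (and
    % vice versa); the tight definition disqualifies both candidates.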
Agreeing on a working definition of the NCC is only one of the problems facing the scientific researcher into consciousness. Another arises from the fact that we can distinguish two rather different questions that investigations into the NCC might help us to answer. The first is: What is it in the physical structure and functioning of the brain that enables us to be conscious of anything at all? The focus of interest here is what philosopher David Rosenthal at the City University of New York has dubbed "creature consciousness," by which he means the general background condition of awareness that distinguishes a conscious organism from a hunk of rock or a person in a coma (Rosenthal 1997). The second question relates to the NCCs for specific contents of consciousness: What state do my neurons have to be in for me to be dreaming of a white Christmas? Or seeing a red rose? This last example highlights a further problem: in terms of neural activity, what are the similarities and what are the differences between my brain's largely nonconscious system for processing visual information and my conscious experience of the color, shape, and delicacy of that particular flower? (See Chapter 3 for a preliminary discussion of these issues.)

Furthermore, it is a disputed question whether there is any such thing as just being conscious, without actually being conscious of something in particular. If the answer is negative, then it may be that the distinction between an NCC for background consciousness and an NCC for the contents of consciousness is misplaced. The NCC for being awake, for instance, might just be the sum of all the individual NCCs for the different contents—sights, sounds, thoughts, feelings, and so on—that make up any particular moment of consciousness. However, the very wide variety of candidates put forward by different scholars as holding the key to the NCC might in part be reconciled if the contents of consciousness and the background state of consciousness are actually correlated with different levels of brain function.

In general, there has been greater research interest in looking for the NCCs for specific states of consciousness and their contents than in the idea that there might be some kind of neuronal on/off switch for consciousness itself. The latter has not been entirely neglected, however, and where it has been explored, the focus has tended to be the evolutionarily older or "subcortical" part of the brain, especially the thalamus (see the section on brain anatomy in Chapter 2).
Ever since the days of Paul Broca and Carl Wernicke, the effects of physical damage (called lesions) to selected parts of the brain have been used as clues to the role of those parts in relation to mental functions and to consciousness. It has been found that small lesions in a part of the thalamus known as the intralaminar nucleus (ILN) are associated with a complete loss of consciousness, resulting in a state of coma, and this finding has led some scientists, such as Joseph Bogen, to consider that the ILN might be the key site for the generation of consciousness (Bogen 1995). There has not, however, been much enthusiasm for the notion of a single group of neurons uniquely responsible for consciousness. Francis Crick and Christof Koch, for example, although not denying the significance of the ILN for consciousness, interpret its importance differently. They think consciousness is more likely the result of the existence of a large number of neuronal connections between the ILN and many different areas of the cortex, regions where much of the activity linked to the contents of consciousness actually takes place. They envisage the ILN providing some kind of arousal signal that triggers cortical activity, which is what then leads to states of consciousness (Crick and Koch 1990). Without the trigger, the neurons in the cortex are not aroused, so there is no cortical activity and no consciousness. Hence the correlation between damage to the ILN and states of coma.

Another important contributor to the scientific study of consciousness—one of those championing it before it became fashionable—is cognitive psychologist Bernard Baars, formerly at the Wright Institute and now at the Neurosciences Institute in California (Baars 1988). For many years, he worked with clinical neuropsychologist James Newman, and they have proposed that a wider system of neural architecture, including the thalamus but extending well beyond it, is responsible for activating the cortex in a way that brings about consciousness (Newman 1997). They have called it the extended reticular-thalamic activation system (ERTAS). Newman and Baars insist that a knowledge of brain function constitutes the necessary basis for any general theory of mind and that if there is a real difference between certain cognitive (mental) processes that are conscious and others that are not, then there must be corresponding differences in underlying brain function that will allow the one to be reliably distinguished from the other.

Baars has promoted an investigative technique that he calls contrastive analysis (or, more recently, contrastive phenomenology) as a means to explore the differences between conscious and nonconscious brain functions. In practice, this technique is simply the application of standard experimental method, treating consciousness as a variable. The idea is to find pairs of situations that are as near to identical as possible, except that only one is accompanied by conscious experience. An example would be the difference between undertaking a skilled task (such as driving a car or playing the piano) in a learning situation and carrying out the same task when it has become familiar and routine. In the first case, we are acutely aware of every move we make; in the latter, we can be on "autopilot," as we say, and not be conscious of our actions at all. Another instance of such a pair is the case in which a word is flashed on and off so quickly on a screen that it strikes the eye's retina but never enters consciousness, contrasted with the same word held on the screen long enough to be consciously seen. By recording and comparing the brain's activity in both situations within each pair, it should be possible to find out whether there is any site or pattern of neuronal firing that is consistently found only in the cases where there is conscious awareness.
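In outline, contrastive analysis is just a paired comparison with consciousness as the independent variable. The sketch below uses made-up activity readings to show the bare logic: hold everything else constant and look for a consistent difference.

    import statistics

    # Invented activity readings (e.g., from PET) for one brain area, in
    # matched pairs: word consciously seen vs. same word flashed subliminally.
    conscious    = [3.1, 2.9, 3.4, 3.0, 3.2]
    nonconscious = [1.9, 2.1, 1.8, 2.2, 2.0]

    # Contrastive analysis: everything else is held constant, so a consistent
    # difference is a candidate correlate of consciousness itself.
    diff = statistics.mean(conscious) - statistics.mean(nonconscious)
    print(f"mean difference in activity: {diff:.2f}")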
Baars is alert to the fact that with any particular pair—say the subliminal versus the consciously observed word on the screen—there might be a number of alternative ways to account for a recorded difference in brain behavior. But he can confidently point to certain consistent findings, taken across a whole series of different observations involving quite different kinds of cognitive events, such as the following: areas of the brain's sensory system show more metabolic activity (as measured, for example, by PET) when a task is accompanied by consciousness; paying attention to one thing rather than another involves greater neural firing; and observed sensory stimulation causes greater brain activity than subliminal stimulation. He believes there is sufficient evidence here to make the modest working hypothesis that consciousness is associated with increased brain activity and to look for further confirmation of it.

One obvious place to look is the daily sleep-wake cycle that we all undergo, with its successive periods of dreamless sleep (nonconsciousness), dreaming or rapid eye movement (REM) sleep (altered consciousness), and the fully awake state (normal consciousness). The leading researcher in this area for more than thirty years, J. Allan Hobson of the Harvard Medical School, assigns the differing characteristics of these three conditions to the greater or lesser quantities of certain key chemicals, known as neuromodulators, present in the brain (Hobson 2001). Just a couple of thousand or so neuromodulating cells, whose bodies are located in the brainstem, are able to use their long axons to project very widely. In this way, their neurochemicals are released at synapses all over the forebrain.
It used to be thought that the subcortical systems responsible for controlling the levels of these chemicals acted in too diffuse and general a way to be usefully considered as part of the physical correlate of conscious awareness. Since the late 1980s, however, it has been argued by Newman and Baars, among others, that this diffuse influence does not exclude the possibility that one or more of these generalized systems can also have more specific effects on cognitive processes. By 1997, Newman could write enthusiastically about ERTAS, "not as a diffuse fountain arousing the cortex nonspecifically, but a highly articulated system capable of generating differentiated patterns of activation" (Newman 1997, 63). Central to the Baars-Newman claim concerning the relation of ERTAS to consciousness is the idea that it works by bringing the "spotlight of attention" to bear first on one and then another of the nonconscious cortical processes, thus bringing them briefly into consciousness. Newman (whose untimely death occurred in 1999) suggested a way in which this directing of attention might be achieved physiologically. The key to his idea was the inhibition of the cortex in unconscious areas rather than the direct excitation of conscious regions. He envisaged selective patterns of inhibitory activation being directed to certain parts of the cortex, with the result that other areas would be relatively excited and so come into consciousness.

More recently, Baars has shifted most of his own attention to the cortex, because there is now much more information on it than there was back in the late 1980s when he first talked about ERTAS. The evidence can be taken to suggest that evolutionarily older parts of the brain had already developed structures that were not conscious in more primitive animals but that begin to support what we know as consciousness when we get to mammals. That, says Baars, is the stage at which one really has a thalamocortical system (personal communication).
Neural Correlates for the Contents of Consciousness
These last comments move us toward the topic of the contents of awareness, and we continue this investigation by looking at a series of experiments specifically designed to locate the NCC of the contents of a visual conscious experience. As already explained in Chapter 3, quite a lot is known about which neurons in the visual cortical pathways respond to different kinds of stimuli. But much of this processing of information from the eyes is carried out nonconsciously, so it would be misleading to think of every one of these active cells as part of the NCC for the contents of consciousness. But how can we tell at
what point the hitherto nonconscious signal becomes associated with a conscious experience? During the late 1980s and well into the 1990s, Nikos Logothetis at Baylor College of Medicine (he has since moved to the Max Planck Institute in Tübingen), along with a number of colleagues, devised a way of eliminating certain cells from consideration as NCCs (Logothetis and Schall 1989; Leopold and Logothetis 1996). The method used was single-cell recording in awake monkeys. Monkeys cannot tell us verbally what they are experiencing, but they can be trained to press different bars to indicate which of a limited number of familiar images they can see. If they see a horizontal grating, they press one bar; if they see a vertical grating, they press another. The tool used by Logothetis to distinguish between conscious and nonconscious processing was the phenomenon known as "binocular rivalry." Normally our two eyes pick up almost identical scenes, and by a process that we might call binocular cooperation, the slight difference between the two images is exploited by the brain to produce the stereoscopic effect that allows us to judge distance. However, if the left and right eyes are for some reason presented with two quite different and incompatible images—such as horizontal and vertical gratings—the cooperation turns to rivalry. Instead of a visual experience based on a combination of two stimuli, one image will dominate and make it into consciousness at the expense of the other, which is completely suppressed. Over a period of time, it is common both in humans and in monkeys for the dominance to flip from one eye to the other, so that the subject sees each image singly and alternately, the switch between first one image and then the other happening quite automatically. While the monkey under investigation pressed first one bar and then the other to indicate which of the two images was currently in consciousness, Logothetis took recordings from selected neurons in different parts of its visual cortex. These particular neurons were known from previous tests to fire in response to either stimulus P or stimulus Q. Since both stimuli were being presented to the monkey at the same time, one to each eye, both were being processed simultaneously, although only one was in consciousness at any given moment. The aim of the experiment was to identify neurons that were active but were known to respond to the stimulus not currently being experienced. These neurons could then be eliminated as candidates for the NCC. It turned out that when the bar was pressed indicating that the monkey was seeing stimulus P, cells in the primary visual cortex (V1) were firing in response to both P and Q. But in the inferior temporal (IT) area of the cortex, the only cells to fire were those already known to respond to P. Similarly, when the monkey pressed the other
bar, indicating that it was now seeing Q, the P-sensitive cells in IT stopped firing and those known to react to Q started up. In V1, however, both sets of cells continued to be active. One possible inference is that neurons firing in V1 do not contribute the NCC for visual experience but that neurons firing in IT do. At this point, however, a question arises as to how much should be read into these results. There is again something of a divide between the caution of philosopher David Chalmers and the confidence of cognitive psychologist Bernard Baars. Chalmers comments: "None of this evidence is conclusive (and Logothetis and colleagues are appropriately cautious), but it is at least suggestive" (Chalmers 2000). Baars retorts that there is no such hesitancy among scientists: "I don't know of anybody who argues that Logothetis didn't prove things about visual consciousness. Otherwise Logothetis wouldn't be allowed to put the words 'Neural correlates of subjective visual perception' in his title" (personal communication). Fair enough. But the crucial question is, What exactly was it that he proved? Logothetis himself, writing in Scientific American in 1999 and looking back over more than ten years' research, concluded that visual awareness should not be thought of as the end product of a hierarchy of stages but as involving the whole visual pathway (Logothetis 1999). Even so, does not the firing of V1 cells, representing a response to the nonconscious stimulus, show that V1 is not a major part of the neural correlate for the contents of subjective visual perception? No, says Baars. It only shows that those particular neurons in V1 are not the current correlate; V1 has millions of neurons in it, and they don't all do the same thing at the same time. Another example of the clash between philosophers and scientists is provided by a cluster of problems concerning the entire concept of a neuron—or even a system of neurons—having the same contents as a conscious experience. This in turn necessarily raises a query over the whole notion of an NCC for the contents of consciousness. These matters are addressed in an article by the philosophers Alva Noë and Evan Thompson, which was still awaiting publication at the time of this writing. Back in 1996, Francis Crick had written a provocative article in the scientific journal Nature. He had confidently asserted that consciousness was now an essentially scientific problem, and therefore, "No longer need one spend time attempting [ . . .] to endure the tedium of philosophers perpetually disagreeing with each other" (Crick 1996, 486). Noë and Thompson admit to having been goaded by these dismissive remarks, and they cannot resist an opening jibe at Crick's overly optimistic hope that the
scientists might "with a little luck [ . . .] glimpse the outline of a solution before the end of the [twentieth] century." That hope has not been fulfilled. But their serious point is one that we made earlier: that researchers' theories, including their philosophical presuppositions, fundamentally affect the design and interpretation of their experimental work. Thompson and Noë contend that current neuroscience is wedded to certain problematic conceptions to do with the contents of consciousness and that neuroscientists need the help of philosophers more than ever to lead them out of a blind alley (Noë and Thompson in press). Is this just another interdisciplinary spat? Probably, although we might recall from the previous chapter that Wolf Singer, himself a neuroscientist of the first rank, criticized his own profession in what might be interpreted as a very similar fashion. So where does the difficulty lie? I will consider just one of the arguments by way of illustration. Noë and Thompson draw a distinction between two different ways in which two systems of representation might relate to each other. They might either be matched (that is, with respect to their contents be point-for-point identical), or they might simply be in agreement (that is, not contradict each other in relevant details). To use their own example, a photograph depicting several birds flying across a sunny blue sky would be in agreement, in respect of content, with a verbal report to the effect that several birds are flying across a sunny blue sky. But their content cannot be said to match, in the strong sense of being point-for-point identical, because the verbal report leaves entirely open details on which the photograph would be precise. The picture, for instance, will depict a particular shade of blue sky, a definite number of birds in specific spatial relation to each other, and so on. There is no reason, of course, why a verbal report should not be made as detailed as one likes, but in the example described here there is a lack of symmetry between these two sets of contents. According to Noë and Thompson, a similar lack of symmetry will be a feature of the representational contents of the neurons in the visual cortex when compared to the contents of the conscious experience of seeing. The former will be more general (like the verbal report), and the latter more specific (like the photo). To take a very simple example, the content of the receptive field that causes a particular neuron to fire may be a vertical line. Period. But the visual experience of seeing a vertical line will necessarily include more detail than that, since it will at the very least represent the line as against a background of a certain kind and as occupying a certain spatial relation to the observer.
In view of these considerations, Noë and Thompson conclude that recent studies of the neural basis of binocular rivalry provide no evidence to support the claim of a content match between neural activity and perceptual experience (as opposed to the weaker claim of content agreement in this or that respect). So far, so good, but they then go on to say that in the absence of point-for-point matching, there is “no evidence for the existence of a content NCC for visual perception.” That may be too strong a claim because it follows only if an NCC for contents requires exact content matching, and the authors say they are not sure whether Logothetis himself intends his results to bear the weight of such an interpretation. We are back here where we began this chapter, unable to interpret the experimental work on NCCs without an agreed definition of the NCC, yet unable to agree on a definition until we have the experimental data to support one or other of the rival theories.
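The elimination step at the heart of the rivalry experiments is simple enough to state in code, whatever one concludes about its interpretation. The sketch below is a schematic reconstruction, not Logothetis's actual analysis: the neuron labels, stimulus preferences, and firing records are all invented for illustration.

```python
# Schematic reconstruction of the elimination logic from the binocular
# rivalry experiments. Neuron names, preferences, and firing records
# are invented for illustration.

# Each neuron's preferred stimulus ("P" or "Q") is known from prior tests.
preferences = {"v1_a": "P", "v1_b": "Q", "it_a": "P", "it_b": "Q"}

# Each observation pairs the monkey's report (the stimulus currently in
# consciousness) with the set of neurons firing at that moment.
observations = [
    ("P", {"v1_a", "v1_b", "it_a"}),  # seeing P: both V1 cells fire; only it_a in IT
    ("Q", {"v1_a", "v1_b", "it_b"}),  # seeing Q: both V1 cells fire; only it_b in IT
]

def surviving_candidates(preferences, observations):
    """Eliminate any neuron caught firing while its preferred stimulus
    was *not* the one in consciousness; the survivors remain candidates
    for the NCC of the contents of consciousness."""
    candidates = set(preferences)
    for percept, firing in observations:
        for neuron in firing:
            if preferences[neuron] != percept:
                candidates.discard(neuron)
    return candidates

print(sorted(surviving_candidates(preferences, observations)))
# ['it_a', 'it_b'] -- the V1 cells are eliminated; the IT cells survive
```

Note what the procedure does and does not establish: it rules cells out, but the survivors are only candidates, which is precisely the point on which Chalmers, Baars, and Noë and Thompson part company.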
Neurons in Concert
The name of biologist Francis Crick, winner of the Nobel Prize for his discovery with James Watson of the structure of deoxyribonucleic acid (DNA), has been invoked a number of times already in this book. Now I will look in more detail at his particular contribution to the science of consciousness from his base at the Salk Institute, undertaken in close cooperation with his junior colleague Christof Koch at the California Institute of Technology (Caltech). They have been bolder than most neuroscientists in putting forward what they believe to be the neural correlate of consciousness, in the strong sense of its being the causal mechanism that brings it about. Crick laid his ideas before the general public in 1994, in a book called The Astonishing Hypothesis. The title refers to his view that our entire conscious life is nothing more than a biological brain process. The book's short opening paragraph has become justly famous (or infamous) as a memorable encapsulation of the Crick-Koch doctrine and is widely quoted:

The Astonishing Hypothesis is that "You," your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules. As Lewis Carroll's Alice might have phrased it: "You're nothing but a pack of neurons." This hypothesis is so alien to the ideas of most people alive today that it can truly be called astonishing. (Crick 1994, 3)
At the time Crick's book was first published, many people reacted by saying that in a scientific and materialist age, this claim was not the slightest bit astonishing. It was indeed just what we should expect. Crick disagreed. The book's subtitle—The Scientific Search for the Soul—showed that one of his targets was the religious belief that still clings to a kind of Cartesian body-soul dualism. But that was not all. He also said that many people who claim to be materialists and deny they believe in life after bodily death do in fact carry on thinking in the old dualistic ways. He insisted that such crypto-dualists in the scientific community would indeed find the hypothesis astonishing, when it was set out in all its starkness. Crick's specialty is the visual system (see Chapter 3), and he is well aware of all the evidence to support the idea that lots of nerve cells, in quite different parts of the brain, react to different aspects of a visual stimulus. Crucially, he accepts that the notion of a grandmother cell—a single neuron that corresponds to the visual awareness of one complete object in all its complexity—cannot be taken seriously. So he is committed to some explanation that involves the distribution of the correlate of visual experience across a large number of spatially separated neurons. He is also insistent, of course, that we must not think in terms of my brain's bringing the whole visual scene together, as if on a screen, for "me" to see. For him and those who think like him, there is no "me," apart from the behavior of my brain cells and their associated molecules. So whatever the neurons are doing when "I" see something must account not only for the thing seen but also for the act of seeing, for the experience of visual awareness itself. Crick and Koch make the assumption that there is one basic mechanism underlying the wide variety of conscious states that we experience, and that this mechanism involves—on each occasion—a comparatively small number of neurons (Crick and Koch 1990). The question is: What is it about these particular cells that makes them come together to produce awareness? Brain-mapping evidence shows that activity associated with a particular conscious experience is typically spread over many parts of the brain, so spatial proximity is not what binds the active neurons together. A partial explanation may have to do with permanent connections between distant neurons. They undoubtedly exist, some being inherited and others being strengthened by repetition (according to at least one theory, this process is the basis of learning, as described in Chapter 8), but there are not enough neurons in the brain for this theory to account for all possible conscious events. A different line of approach is to look for a group of neurons active at the same time, rather than the same place.
Crick's idea was not new. A number of researchers in the 1980s had observed that groups of neurons—not necessarily close to each other—were sometimes linked by firing in a synchronized rhythm. Back in 1981, Christoph von der Malsburg had proposed that some such correlated firing might be the clue to the so-called binding problem. The problem is how to account for the fact that, if I snap shut the stiff red folder in front of me, I am conscious that the red I see and the snap I hear and the hardness I feel are all related to the one folder. How are the three modes of perception—sight, sound, and touch—bound together to produce a single experience of a single folder? By the time Crick was writing, these synchronized firings, which von der Malsburg had only hypothesized, had actually been reported (see von der Malsburg 1981; Freeman 1988; Gray and Singer 1989; Eckhorn et al. 1988). The latter two sets of authors had indicated that the observed oscillations might be the mechanism used by the brain to achieve binding between physically separated neurons. In 1990, Crick and Koch—in Crick's own words—"took this idea one stage further by suggesting that this synchronized firing on, or near, the beat of a gamma oscillation (in the 35- to 75-hertz range) might be the neural correlate of visual awareness" (Crick 1994, 245). This theory, which is usually referred to as the "40-hertz oscillation," has become irrevocably tied to the name of Francis Crick, and many neuroscientists now accept that it is in some way significant, even if they do not regard it as the causal mechanism of consciousness. A second Nobel laureate from another field, who like Francis Crick swept into consciousness studies in the 1980s, was the immunologist Gerald Edelman. Science writer John McCrone has given a vivid account of how Edelman's high-profile and aggressive entry into the domain of the neuroscientists won him their envy and their enmity in equal measure (McCrone 1999, 166–167). He did not help matters by telling the world—via an interview in the New Yorker magazine—that mainstream cognitive scientists were so misguided, in their information-processing and computational approaches, as to be "not even wrong" (Edelman and Levy 1994, quoted in McCrone 1999). But having succeeded in raising the millions of dollars to fund his own Neurosciences Institute in southern California, he could afford not to be polite to his new colleagues. Crick's own reaction to Edelman was typical of the neuroscience community, which both had joined late in their careers and of which both are now established members. He damned Edelman with faint praise, saying that what his theories lacked in clarity he made up for in enthusiasm (Crick 1994, 284). Another critic declared that Edelman's claims for the importance of biology in
consciousness research would carry more conviction if he had taken the trouble to get his neurological facts right. The same writer—John C. Marshall—went on to complain that Edelman's descriptions of experimental results "are as sloppy as his neurology" (Marshall 1992). These comments—which may or may not have been deserved but were certainly made—appeared in a review of the book Bright Air, Brilliant Fire, published in 1992, the last in a series of four books in which Edelman worked out his ideas and laid them before the public. This last of the set is a short and nontechnical paperback, making it the best introduction to his theories for the general reader. Edelman's previous success in immunology had involved a bold reversal in the way the body's immune system was understood. The old idea had been that each time a new and potentially dangerous foreign molecule entered the body, the defensive system measured the shape and size of the intruder to be able to produce the right shape and size of "antibodies" to bind with it and neutralize it, preventing its doing any harm. In other words, the immune system had to get information from the invading molecule itself—as Edelman puts it, it had to be "instructed"—before it could start producing the appropriate antibodies to deal with it. Edelman picked up on a quite different approach, which he did not invent but of which he became a major champion. This new idea was that the body's immune system already had a huge variety of antibodies ready and waiting for any intruder before it appeared. On this view, when a strange molecule (perhaps on a virus or bacterium) does enter the body, it is confronted by a vast array of defensive cells with different antibodies. It will then bind with those that make the best fit, and the act of binding will trigger a mechanism in the cell that causes it to divide repeatedly, producing a large number of identical copies (clones). They will be immediately available to mop up any more of the invading molecules that may be in the vicinity. The significant difference between the two models of the immune system is that the first one involved instruction, whereas the second is a case of selection. As with evolution, the selected member of the group (or "population") reproduces to a disproportionate degree, thereby altering the composition of the cell population. There are now relatively more of the selected item. The mechanism in the immunology case is different from that in Darwinian evolution, and the time scale is vastly shorter, but there is one crucial factor they have in common: there is selection, but there is no selector, and there is certainly no instructor. The interaction between the intruder and the immune system happens automatically and naturally, and only
subsequently do we recognize it as an act of selection, in virtue of what happens afterward. Nothing succeeds like success, so having triumphed with this quasi-Darwinian approach to immunology, Edelman proceeded to apply exactly the same principle to the functioning of the brain, including consciousness (Edelman 1987). In place of an invading molecule on a virus, we have sensory input, let us say through the eyes. In place of the immune system, we have the brain's visual system. And in place of a vast array of antibodies, we have a vast array of groups of neurons. The focus on groups of neurons and their relation to each other, rather than on individual cells, is one of the distinctive features of Edelman's model of consciousness. He favors groups partly because individual cells are very limited in what they can do: they can either fire or not fire; and when they do fire, each neuron can either excite or inhibit other neurons but not both. Groups of neurons are much more flexible and able to accomplish the more complex role that Edelman's theory requires of his unit of selection. The other reason for focusing on groups is that each individual neuron is connected to so many others that Edelman cannot see how a single neuron could function as an isolated unit of selection. Returning to our comparison between immunology and consciousness, just as in the first case the variety of antibodies was already there in place before ever a foreign
Gerald Edelman points to a poppit-bead model of the gamma globulin molecule—the key to immunity—during a press conference at Rockefeller University, October 1972. Having triumphed with his quasi-Darwinian approach to immunology, Edelman proceeded to apply exactly the same principle to the functioning of the brain, including consciousness. (Bettman/CORBIS)
molecule showed up, so in Edelman's theory of neuronal groups, the groups of neurons are already in place—some of them inherited at birth and others developed over the lifetime of the individual brain—ready and waiting before there is any sensory stimulus. When the stimulus does occur, what happens is exactly comparable to the case of the immune system. Instead of the visual system (for instance) having to be "instructed" by information from the eyes—as the information-processing model of Crick and the mainstream cognitive scientists maintained—the new stimulus just "selects" the existing neuronal group or groups that offer the best fit. Those groups will respond—that is, become active; their member neurons will fire—and that response will be what is picked up by the various measuring and scanning devices used in experimental neuroscience. The picture is not yet complete, however. Edelman's theory of neural group selection is more memorably referred to as neural Darwinism, and in all versions of Darwinism, the selected member of a given population needs a mechanism for strengthening its influence relative to the rest of the population. In evolution, the genes associated with features that best fit an organism for its environment come to predominate by differential reproduction because, relative to the population as a whole, the individuals carrying those features will on average fare better, live longer, and have more offspring to whom they will pass on their genes. In the immune system, the rapid cloning of the selected antibodies ensures their increased significance. In the case of consciousness, Edelman claims that the selected neural groups become dominant by a process that he calls reentry. Scientists using a serial model of processing, in which information from the visual stimulus (for example) moves from one area of visual cortex to another in turn, interpret reentry as feedback. But Edelman insists that is a misinterpretation. The very term "feedback" presupposes a general direction of forward movement, against which every so often some signal may be sent back from a later point on the route to an earlier one. But Edelman rejects the whole notion of a one-way route. He envisages a situation in which all relevant parts of the brain (which he calls "maps") are involved simultaneously, with signals going to and fro between them in all directions. In the present case, these areas are chiefly in the visual cortex, but he also includes the subcortical LGN as being in a reciprocal signaling relationship with V1, the primary visual cortex (see Chapter 3). Reentrant signaling is regarded by Edelman as "perhaps the most important" of all the proposals in his theory (Edelman 1992, 85). Here's why. Like all other theorists concerned with perception,
he has to cope with the binding problem. He does not believe in serial information processing, so he certainly will not be tempted by the notion of a single "grandmother cell" that corresponds to the brain's final composition of some particular image. And although he is dealing with already linked groups of neurons, these groups are still separated across different maps. Each of these maps is physically part of a different area of the cortex and relates to different aspects of the world—shape, color, or movement—as we saw in Chapter 3. Reentrant signals passing from selected groups of cells in one map to those in another link the whole thing. Each time signals pass between selected groups, the synaptic links between them are strengthened, while the links between groups of neurons that are never selected get weaker and weaker. This use-it-or-lose-it character of neural pathways is well known and generally accepted among neuroscientists (see Chapter 8). It is not itself one of Edelman's speculations. But he does harness it to his cause by claiming that it provides the mechanism whereby his hypothesized selected groups become dominant. The term "map" is used because the standard understanding among neuroscientists is that the areas of cortex so named really do act like maps, at least in this respect: the sensory cells in the retina (for example) connect to the selected groups of neurons in each map in such a way that neighboring locations on the retina are also neighboring locations on each map. Thus a reentrant signal passing from map A to map B will automatically link up with the selected group of neurons in map B that corresponds to the same location on the retina (and in the world) as the selected group in map A that sent the signal. In turn, Edelman says, this process can lead to new selective properties emerging through successive reentry to and fro across maps over time. It is as if we had two maps of North America, one showing annual rainfall and the other showing population density, and could combine the two to deduce that desert regions with negligible rainfall were also very sparsely populated. This type of neural coordination has been simulated on a computer and is called by Edelman the reentrant cortical integration (RCI) model of the cortex. He regards it as a very important property of the brain for a number of reasons. For a start, it fits his picture of the brain functioning in a Darwinian way by "selection without a selector." Second, the selected neuronal groups and their reentrant circuits are the means by which the perceptual system—prior to any conscious thought—can begin to categorize and quickly respond to familiar stimuli, just as the immune system can "recognize" and quickly deal with a previously encountered virus. And
third and perhaps even more important, it provides the mechanism for coupling the sensory system (seeing, hearing, feeling, etc.) to the motor system (the muscular action that enables movement and speech) by an extension of the reentry scheme, which Edelman dubs "global mapping." Quite some time has been spent on his basic model of perception, but so far none of this neural activity is claimed by Edelman actually to correlate with consciousness. His purpose so far has been merely to show a possible way in which the brain, by quite natural, automatic, and nonconscious means, can recognize and retain perceptual categories. For those perceptual categories to be transformed into what he calls "primary consciousness," that is to say, into simple sensations and perceptual experiences, all the processes that we have described so far need also to be linked up with memory. Memory for Edelman is not some kind of mental filing cabinet or storehouse, containing stacks of old information, but an active process that links the newer cortex to those evolutionarily more primitive parts of the brain, such as the hippocampus. Since that is the subject of a later chapter, I shall content myself here with noting Edelman's claims and, again, the frosty reception they initially got from the neuroscience establishment of which he had yet to become a member. This time Oxford University's Susan Greenfield can stand as the representative skeptic. Although it is perfectly sensible, she says, to interpret our present experience in the outside world in terms of past experiences, Edelman's postulated link between one subcortical area responsible for internal associations and another for ongoing perceptions is "a colossal simplification of what we know of the physiology of the brain" (Greenfield 1995, 126). Greenfield's own bid for the neural correlate of consciousness is what she calls "neural gestalts" associated with a "stimulus epicenter" (Greenfield 1995, 97) or—more poetically—"fountains in the brain" (Greenfield 1995, 140). Like Crick and Edelman, she thinks that consciousness depends upon groups of neurons acting in concert in some way, but she regards consciousness not as an on/off switch but more like a dimmer dial for a room light. With an appropriate stimulus, chemicals—the neuromodulators mentioned earlier—are released into the forebrain by neurons rooted in the brainstem. These are the "fountains," and they arouse the cortical neurons to activity. They are not permanent collections of cells, "hardwired" together, but are more like the clouds in the sky formed by water droplets that come together for a season and then drift apart again. The greater the number of neurons involved on any occasion, the
greater will be the degree of consciousness. This dynamic and flexible approach offers a way to explain levels of consciousness that seem to be less than those of a healthy awake adult (consider a drowsy or drunk partygoer, for example) but that one has every expectation will return to normality quite quickly. It does not in itself make a direct contribution to the question of the neural correlates for the contents of consciousness. Our final example of the search for the NCC is a view that embraces the whole brain and indeed more than the brain. Walter Freeman taught neurobiology at the University of California at Berkeley for forty years but before that had studied mathematics, physics, and electronics, as well as taking in a medical degree on the way. Such a broad background makes for a wide vision, and Freeman—although he is a great expert on the details of neurons—takes the view that what the individual neuron does can only be understood in a much wider context. This wider context is not just a transitory group of cells, such as Greenfield proposes, nor a selected group like Edelman's, but "the larger framework of its relation to the behavior of the owner of the neuron" (Freeman and Burns 1996, 172). The owner does not have to be human. Freeman's classic experimental work was undertaken on rabbits, and what work it was. While the rabbit sniffed first one odor and then another—six seconds per sniff—Freeman recorded the EEG trace given off by the part of the brain that responds to smelling and tasting. Using sixty-four channels, each six-second trial would yield something like 1 million numbers that had subsequently to be analyzed and interpreted (Freeman and Burns 1996, 173). It was a heroic piece of research that took twelve years, and it yielded what was at the time an amazing result: olfaction (that is, the perception of smell) occurred only when the animal positively sniffed or licked, not when the odor was just allowed to drift into its nostrils. This result was an early confirmation of the crucial link between action and perception (see Chapter 7) and a warning shot across the bows of those who saw cause and effect in too linear a way. Another important finding was that the same odor presented a second time gave a different neuronal response. In other words, if the EEG is indeed correlated in some way with the experience of olfaction, it indicated that a remembered smell is experienced differently from a novel smell. This is another indication that the brain is not just passively receiving signals but is active in perception. Walter Freeman has concluded that conscious perception and the neuroactivity associated with it are all part of a nonlinear dynamic system, which extends beyond the brain or even the
individual creature or person. For him, consciousness is far more than the perception or awareness of an individual organism: “First, last, and always,” he says, “the self is a social being. Its consciousness is social” (Freeman and Burns 1996, 180).
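Before leaving this chapter, it is worth making one of its recurring ideas concrete. Several of the proposals surveyed here, from von der Malsburg's correlated firing to the Crick-Koch 40-hertz hypothesis, turn on the claim that spatially separated cells can be functionally bound by firing in step. The toy sketch below illustrates that claim with invented spike times; real analyses use cross-correlograms or coherence measures over many trials, so this is only a cartoon of the idea.

```python
# Toy illustration of binding-by-synchrony: two cells locked to a common
# gamma-band rhythm score as "bound"; a cell on a different rhythm does not.
# All spike times (in seconds) are invented.

def synchrony_score(spikes_a, spikes_b, window=0.002):
    """Fraction of spikes in train A that fall within `window` seconds
    of some spike in train B."""
    if not spikes_a:
        return 0.0
    near = sum(1 for a in spikes_a
               if any(abs(a - b) <= window for b in spikes_b))
    return near / len(spikes_a)

# "color" and "shape" cells fire at 40 hertz (a 25-millisecond period)
# in near-lockstep; the "distractor" follows a different rhythm.
color = [0.025 * i for i in range(40)]
shape = [0.025 * i + 0.001 for i in range(40)]
distractor = [0.010 + 0.0303 * i for i in range(40)]

print(synchrony_score(color, shape))       # 1.0: candidates for binding
print(synchrony_score(color, distractor))  # much lower: not locked together
```

On this picture, the red of the folder and the snap of its closing would be carried by cell groups whose firing keeps to a common beat, which is one way to read the 40-hertz proposal without yet deciding whether synchrony merely accompanies binding or actually causes it.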
5
The Mind-Body Problem
Why do we have a mind-body problem when we don't have a digestion-body problem? That is the characteristically blunt way in which philosopher of mind John Searle at the University of California at Berkeley opens up the subject of this chapter (Searle 1984, 14). The form of his question already points to his own belief that mental activity is a biological process carried out in the brain, just as digestion is a biological process carried out in the stomach. And the answer to his question—the reason that we do have a mind-body problem when we don't have a digestion-body problem—is that most philosophers and many other people think he is wrong to believe that. It is not hard to see why people are skeptical of Searle's assertion. Just try to imagine your digestion existing outside your body. It can't be done. Now try to imagine your mind existing outside your body. That is much easier. Indeed, many people say that not only can they imagine out-of-body experiences but that they have actually had them. Of course, the fact that we can imagine something—or even that some people claim to have experienced that thing—does not make it true. But the fact that we can even imagine the mind existing independently of the body is enough to explain why there is a "mind-body problem." We could state the basic problem like this: My body and my mind are so intimately connected that I think of both of them as being "me," and yet at the same time I can imagine each of them being quite separate from the other. So am I my mind? Or am I my body? Or am I both? And what is the connection between them? That is the mind-body problem.
Substance Dualism: Descartes versus Ryle
The simplest place to start tracking the history of the mind-body problem is with those scientists and philosophers who accept at face
René Descartes's view of the mind-body relationship is called dualism because it treats the mind and the body (including the brain) as two distinct things. It is also called Cartesian dualism, because of its close association with Descartes. (Courtesy of Thoemmes Press)
value our intuition that our bodies are one thing and our minds—or our souls as older writers often called them—are something else entirely. This is the idea associated most closely with the seventeenth-century philosopher and mathematician René Descartes (1596–1650), whom we met in Chapter 1. He said that the whole universe is divided into just two kinds of "stuff"—mental and material—and that minds are examples of the one and bodies are made of the other. This view of the mind-body relationship is called dualism because it treats the mind and the body (including the brain) as two distinct things. It is sometimes known as "substance dualism" because it says that these two things are two totally different kinds of "stuff" or substance. It is also called "Cartesian dualism" because of its close association with Descartes (see Cottingham et al. 1985–1991; Document 1, this volume). Descartes gave to each of the two substances that made up his system a name that described its essential characteristic. These Latin expressions are widely used, even in nontechnical books, and it is worth the trouble to learn them. The material or physical substance of which the body was made he called res extensa (literally, "something that extends," or "stuff that takes up space"). The mental or nonphysical substance of which the mind was made he called res cogitans ("stuff that thinks"). It is important to realize just how totally different these two substances are. In particular, although it is of the very essence of res extensa that it occupies a physical space, it is of the very essence of res cogitans that it does not exist in space at all; it bears no relationship to space. According to Descartes, anything we might say about the location of the mind would be misleading. It is equally wrong to say that the mind is in some particular place or that it is nowhere or that it is everywhere. So on this reckoning, we must not think of the mind or soul as a kind of ghostly aura, something that is associated more or less with the physical space taken up by the body but that is
able to float free of it. In particular, we cannot—if we are being true to Descartes—speak of the mind or soul "leaving" or "returning to" the body, because that would imply that res cogitans has a physical location, and it does not. It is easy to see why dualism is attractive as an explanation of the relationship between the mind and the body. First, it accords with our everyday sense of our minds and bodies as being separate, albeit closely related, things. And second, a point not mentioned so far, it allows for the possibility of our minds or souls—our essential selves in Descartes's view—outliving the death of our mortal bodies. It does, however, suffer from a number of difficulties, of which the most obvious and persistent is what is known as the interaction problem. Although we commonly think of our bodies and minds as distinct, we also think of them as causally related. That is to say, they constantly interact with each other. Our bodies act as instruments putting into effect the decisions of our minds, and our minds respond to the information transmitted to them by our bodies through the five senses. But if our minds and bodies are totally unlike each other, as Descartes maintains, how is such causal interaction possible? This objection was put to Descartes by Pierre Gassendi (1592–1655), who said that the only way a physical object (like my arm, for instance) could be made to move was if it was touched by another moving physical object. But Descartes had insisted as a matter of first principle that the mind did not exist in physical space, and therefore—if he was right—as a matter of simple logic, my mind could never cause my arm to move (see Wilkinson 2000, 37–38). This situation, in which the conscious mind has no causal power over the body, reduces consciousness to an "epiphenomenon." This term comes from the Greek and means "a surface appearance," as opposed to some substantial interacting entity. A commonly cited example of an epiphenomenon is the spray on the surface of the ocean. At first sight, it may look as though it is the power that creates and controls the waves over whose crests it dances, but it is actually just a byproduct of the interaction between wind and water that is the true cause of the waves. The threat of epiphenomenalism hangs over any theory of consciousness that denies mental states are physical entities yet treats the material world as a causally closed system obeying the laws of physics. Descartes was never able to give a convincing answer to this problem because he did not accept its assumption that physical objects can only be moved by contact with other physical objects. It is very difficult to explain, he said, but our experience simply is that we
do move our limbs in response to the decisions of our minds. As a theist, he accepted that God—who is nonphysical—was the source of all the motion in the universe, and therefore it must be possible for physical movement to be brought about nonphysically. And that was that. On various occasions when he did try to explain the matter from a practical point of view, he could give the impression of slipping away from his basic assertion that the mind is completely nonphysical and nonspatial. His language appeared to demand that it be thought of as some kind of very refined, very rarefied, but nonetheless physical substance. At times he spoke of the interaction between mind and body happening at just one specific place, the pineal gland in the brain. Again, the best way to understand him is perhaps to remember that he was a theist, who believed that God was omnipresent without being spatial or having a location. This belief provided a basis on which to imagine the human mind/soul having a similar property. Elsewhere, when discussing the nature of sensations, he referred to the mind's being intermingled or mixed up with the body in what he called "a substantial union." And this uniting of the body and soul was, he said, no chance occurrence, but the essence of what it was to be a human being. This statement appears to be a long way from his earlier assertion that "I am a substance the whole nature or essence of which is to think." An essence, moreover, of which he had said that it did not depend "on any material thing." Part of the difficulty in interpreting Descartes may arise from our tendency to substitute the word "mind" for his term "soul." As explained in Chapter 1, the word "soul" is a slippery one, and sometimes "self" is a better interpretation than "mind." So we might do better to think of Cartesian dualism as a self-body rather than a mind-body relation. Despite the interaction problem, dualism reigned supreme for 300 years. As recently as fifty years ago, at the time John Searle was a student at Oxford, Gilbert Ryle (1900–1976), then that university's towering philosophical figure, could still refer to Descartes's theory as "the official doctrine" of philosophy of mind. But Ryle was about to change all that. In his highly influential book titled The Concept of Mind (1949), Ryle undertook what he himself called a "hatchet" job on substance dualism, or the "dogma of the Ghost in the Machine," as he called it (Ryle 1949, 15–16). "It is," he wrote, "entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake" (Ryle 1949, 134–135). Ryle was so impressed with the significance of Descartes's big mistake that he coined a new term to describe it. He named it a "category mistake." Imagine two friends having a fierce argument as to
whether the north wind is red or green. They are wasting their time. They are both wrong because color is not a category that applies to a movement of clean air. To talk about the color of the wind is to make a category mistake. In the same way, according to Ryle, to talk about the mind as a "thing"—as he claimed Descartes had—is a category mistake. And because the mind is not a thing, it makes no sense to imagine its being made of some substance, either physical or nonphysical. Furthermore, Ryle said, just as it is a mistake to talk about the mind as a whole as some thing, so it is wrong to discuss individual mental states such as thoughts and beliefs as if they were things. Ryle was not taking account here of the subtlety of Descartes's views. As we saw above, it was his self—rather than just his mind—that Descartes characterized as "a thinking thing," and he would have agreed with Ryle that a thought is not a thing. (He would have called it a mode.) But Ryle was not concerned with subtlety as he wielded his philosophical hatchet. Here is an example of his approach. According to Ryle, Descartes would have said—and most of us would no doubt agree with him—that it is one thing (a mental thing) to believe that the ice on the lake is dangerously thin, and quite another thing (a physical thing) to avoid skating on the ice or to warn a friend not to skate. But Ryle denied this difference. Descartes would have said—and again I guess that most of us would agree with him—that refraining from skating was the result of a prior belief that the ice was thin. But Ryle disagreed. He said that keeping off the ice and warning others to do the same was not the result of believing the ice was thin but itself constituted the belief that the ice was thin. In his opinion, believing the ice to be dangerously thin was no more and no less than having a "disposition" to avoid going on it and to advise others not to either. There was no ghostly mental state that constituted the belief and then gave rise to behavior based on it. The behavior itself constituted the belief; the belief was nothing other than the disposition to behave in that way. Ryle used disposition as a technical word to indicate a tendency to behave in a particular way, and he applied it to feelings and moods as well as to thoughts. Just as keeping off the ice was not the result of some mental state labeled "believing the ice was thin" but itself constituted the belief that the ice was thin, so—according to Ryle—a person does not become lethargic or burst into tears as a result of a prior mental state labeled "mood of depression." The lethargy and the sobbing themselves constitute what it is to be depressed. The mood would not exist without the disposition to the behavior, and without the actual behavior there would be no reason to infer the disposition.
Therefore to propose the existence of mental states as some additional nonphysical thing, over and above the physical behavior that gives them expression, is both unnecessary and misleading. The great virtue of Ryle's philosophical behaviorism was that it solved—or rather it dissolved—the interaction problem that had proved dualism's Achilles' heel. By insisting that mental states and physical behavior were not two different things but one and the same thing, he removed at a stroke the need to explain how one caused the other. Mental states did not cause physical behavior; they were constituted by physical behavior. Because it both solved this aspect of the mind-body problem and also chimed in harmoniously with the contemporary fashion for behaviorism in psychology (see Chapter 1), Ryle's theory found a ready welcome. But it did not immediately open up the way to the scientific study of consciousness. By giving conscious mental states a physical basis in behavior, the theory looks as if it ought to have removed the barrier to the scientific study of consciousness that dualism had erected. But there was a new difficulty. Ryle had taken away with one hand what he had given with the other. True, he had disposed of the interaction problem, but by denying that mental states existed other than as dispositions to behave in certain ways, he had introduced another obstacle to their scientific investigation. How could science possibly investigate something that did not exist? It could study the resultant behavior, but that was not quite the same thing. This basic shortcoming in Ryle's theory also showed up in the way it failed to deal satisfactorily with two important aspects of mental activity: one was the experience of sensations, such as pain, and the other was what we call private thoughts. Consider the case of pain first. If my mental state at a given moment can be described as a decision to raise my right arm, then it might fairly be claimed that the raising of my right arm is the physical expression of that decision, that the behavior constitutes the mental state. But now suppose that my mental state at another moment can best be described as having a toothache. I might clutch my jaw and cry out and phone the dentist. But equally I might put a brave face on things and carry on with whatever I was doing. How could these two quite different patterns of behavior—especially the latter—both be claimed to express my mental sensation of pain? To answer this objection, Ryle fell back on his technical notion of a disposition. Strictly speaking, he told his readers, a mental state is constituted by a disposition to behave in a certain way, and it remains the case even if one's actual behavior is different from that. Playing the stoic and ignoring the sensation of toothache does not alter the fact that the painful sensation still disposes me to clutch my jaw and cry out and phone the dentist, even if my actual behavior expresses the rival mental state, a determination to hide my agony.

The Thinker by Auguste Rodin. English philosopher Gilbert Ryle had Rodin's Thinker in mind when he asked the question, "What is The Thinker doing?" (Library of Congress)

This answer leads to further problems, relating to the whole question of pretense and playacting. Just as one can hide a genuine pain, so it is quite possible to behave as if one did have a sensation of pain, even when one feels perfectly well. Professional actors do it all the time. To this the behaviorist can offer two responses. First, playacting is the exception that proves the rule. It only works and has the desired effect because overwhelmingly it is the case that behavior does accurately portray mental states, which in turn are constituted by the disposition of a person to behave in that particular way. Second, even pretend behavior can and must be identified with some mental state—a desire to deceive, perhaps, or to entertain—so
Ryle's theory is not disproved. Even so, the problems of sensations and pretense remained a difficulty for behaviorism. Ryle spent the last twenty years of his life wrestling with the even more intractable problem of private thoughts, which he summed up in the question, "What is The Thinker doing?" He had in mind Auguste Rodin's famous sculpture, depicting a hunched figure with his chin pressed down on his hand and his brow furrowed in concentration, but the question applies equally to anyone thinking private thoughts (see Lyons 2001, 72–76). Descartes would have said that The Thinker was engaged in a private activity in the nonphysical area of his life. The thoroughgoing psychological behaviorist B. F. Skinner—a contemporary of Ryle—claimed simply, "Human thought is human behavior," so for him the answer would have depended on the observable behavior related to the time of thinking. If the period ended with the writing out of a new poem or a new mathematical theorem, then it could be said that the thinker was composing a poem or formulating a theorem. If there was no such outcome, then all The Thinker had been doing was activating his muscles so as to crouch head-on-hand with furrowed brow. Nothing more. But Ryle was not satisfied with either of these answers. He could not accept Descartes's ghostly mind, but neither could he deny altogether the reality of private thought. For him, The Thinker was definitely doing something, and it was something private, and it was something in the one world in which our whole life is lived. But just what it was, Ryle could never tell, and his hoped-for final book on the subject was never written. It is tempting—but would be quite wrong—to think of Ryle as taking the physical and mental worlds of Descartes and simply cutting out the nonphysical side. What he has done instead is to take the physical and mental worlds and cut out the dividing line between them. He says that it makes no sense to speak as if there could be two worlds, one a ghostly shadow of the other. There is only one world, and all our life is lived in it. Theories like his, which do not deny the reality of the mental realm but claim that it can be entirely accounted for—without remainder—by a physical description of the relevant organism, are grouped under the general name of "reductive physicalism."
Reductive Physicalism
One straightforward version of reductive physicalism is mind-brain identity theory. We start the story of mind-brain identity theory with Ullin Place (1925–2000), a British psychologist and philosopher
brought up in the behaviorist tradition. Like Ryle, he wanted to find a way to affirm the real existence of private mental states—thoughts, feelings, emotions—without being drawn back into dualism. His solution, which drew on ideas put forward by E. G. Boring in the 1930s, was to propose that all mental states were in fact nothing more or less than physical states of the brain. Place was already thinking along these lines when in 1951 he moved from Ryle's postwar Oxford University to the University of Adelaide in Australia. There he found a responsive colleague in the philosopher Jack Smart, who had gone there the previous year, also following a spell at Oxford, to take up a senior professorship in philosophy. By 1956, having been able to test out his ideas on the sympathetic ear of Smart, Place felt confident enough to publish an article titled "Is Consciousness a Brain Process?" This event is widely taken to mark the birth of modern consciousness studies as a respectable scientific and philosophical research area. Place was careful to point out that he could not claim to answer a firm "yes" to the question that formed the article's title. Rather, he argued that accepting the reality of inner processes—such as private thoughts and feelings—did not entail dualism, and the thesis that consciousness is a process in the brain cannot be dismissed on logical grounds. Once this hypothesis had been shown by the philosophers to be logically possible, he said, it would be up to the scientists to show experimentally whether it was also true (Place 1956). Place's main purpose was to show just what was—and what was not—entailed by the proposition "Consciousness is a brain process." He distinguished three senses in which a statement in the form of "X is Y" might be construed.
• First, Y might describe some feature of X, as in "The hat is yellow," or "The cat is on the mat." In this case, there is no claim that X and Y are identical.
• Second, Y might define X, as in "A bachelor is an unmarried man." In this case X and Y are absolutely identical, so that any statement that is true of X will also be true of Y and vice versa.
Neither of these was the sense intended by Place when he said, "Consciousness is a brain process."
• He had in mind a third interpretation of "X is Y," which he called "The 'Is' of Composition," in which X and Y are two different descriptions for the same thing. ("The 'Is' of
Composition" is not the current philosophical term for such statements, but for simplicity I retain Place's original usage.) An example would be, "The lady in the red dress is my wife." Unlike the first case, this statement does involve an identity claim, insofar as the same person is identified as being both my wife and the lady in the red dress. But unlike the second case, the identification here is not absolute. That is to say, because the terms are descriptions rather than definitions, they are not completely interchangeable. For instance, it could sometimes be true to say, "My wife is wearing a blue dress," but it would always be false to say, "The lady in the red dress is wearing a blue dress." This distinction does not make the original statement (identifying the lady in the red dress as my wife) untrue, but it does—to use the technical philosophical term—make the identity contingent. In other words, it does happen to be the case, on this occasion, that the lady in the red dress is my wife, but it could have been otherwise.
It was very important for Place to establish the correct interpretation of "is" in his proposal that consciousness is a brain process. On the one hand, if people thought he simply meant it as a description of where consciousness could be located in the body (equivalent to "The cat is on the mat"), then it would not be a strong enough claim to rule out some version of dualism. The conscious mind could be thought of as located in the brain and also as existing independently of the body, rather as water can be located in the domestic plumbing system but exists independently of it. And Place wanted to outlaw dualism in all its forms. On the other hand, if people understood his "is" as a definition of consciousness (equivalent to "A bachelor is an unmarried man"), then they could easily show it to be false. For instance, the statement "I am not conscious of my brain process" makes sense and is likely true. But the statement "I am not conscious of my consciousness" is neither true nor sensible. Only when the "is" is taken as the "is" of composition (in Place's sense of the term) does his hypothesis become plausible. Support for the identity theory came from two other philosophers: Place's colleague Jack Smart at Adelaide University and David Armstrong, a native Australian who, having like the other two spent some time studying at Oxford University, held university posts successively at Melbourne and Sydney Universities. Smart appealed to a deep-seated belief among scientists and philosophers that simple solutions are preferable to complex ones. This principle was first clearly stated by the fourteenth-century English philosopher William of
He said that when we are faced with competing explanations, the one that requires the fewest assumptions is the most likely to be right. Ever since then, the appeal to simplicity has been known as “Occam’s Razor” because it favors theories from which any unnecessary complications have been shaved off, leaving a smooth and elegant account of the matter in hand. Smart used Occam’s Razor to argue that even if one ignored the problems raised by dualism, mind-brain identity theory was inherently superior because it cut the number of basic elements in the universe from two to one. Place had taken great pains to explain that the identification “consciousness is a brain process” is of the same kind as “the lady in the red dress is my wife.” And we saw that it is a feature of this kind of identification that the terms on either side of the “is” are not interchangeable in all circumstances. However, not everyone was convinced that what Place proposed really counted as identity. A much-honored principle known to philosophers as Leibniz’s Law states that for any X and Y to be identical, whatever is true of X must be true of Y. But in the example given, there are circumstances in which it can be true that “my wife” is wearing a blue dress, whereas it can never be true that “the lady in the red dress” is wearing a blue dress. Therefore—opponents argued—“my wife” and “the lady in the red dress” are not identical in the sense required by Leibniz’s Law. Neither, therefore, by Place’s own admission, are conscious sensations and brain states identical in the required sense. Smart took up this challenge in an article titled “Sensations and Brain Processes,” first published in 1959. Central to his argument is a distinction, first clearly set down by the philosopher Gottlob Frege (1848–1925), between the sense (Sinn) of an expression and what the expression refers to (Bedeutung). Smart demonstrates this distinction using the same illustration as Frege himself. The terms “morning star” and “evening star” have different senses—one is a bright wandering star that is seen early in the day, and the other is a bright wandering star that is seen late in the day—but both these heavenly bodies, spoken of since antiquity, are known by modern astronomers to be in fact the planet Venus. In other words, although the terms “morning star” and “evening star” do not have the same sense, they do refer to the same thing. The expressions are not interchangeable—it would be nonsense to say, “This morning I saw the evening star”—but nonetheless they do refer to the identical heavenly body, the planet Venus (Smart 1959). Returning to the example of my wife in the red dress, it is clear that “my wife” and “the lady in the red dress” both refer to the identical
woman, even though the two phrases have different senses and are not interchangeable. In exactly the same way, argued Smart, a conscious sensation (such as a feeling of pain) and the brain process associated with such a sensation (say, “brain process of sort P”) are not interchangeable. But they can still refer to the same thing (known as the “referent”). Smart’s defense works so long as we accept that Leibniz’s Law applies to the referents and not to the senses of the expressions for which identity is claimed. Armstrong’s contribution to the debate was to bring together the internal and external aspects of physicalism. Place had appealed to mind-brain identity only to explain those private aspects of mental life that did not seem to be explicable in terms of externally observable behavior. But this distinction left an awkward kind of double explanation that cried out for Occam’s Razor to simplify it. This Armstrong did. One of the analogies used by Ryle to show what he understood by “dispositions” was the brittleness of glass. We say that glass is brittle because of the way it behaves when dropped or knocked against a hard surface (and in a similar way, we say a person is vain if they spend hours admiring themselves in the mirror). But Armstrong pointed out that the behavior of the glass is due to its inner molecular structure, and in a similar way the behavior of the vain person must be due to the inner physical structure of his or her brain. Armstrong was thus able to conclude that mental states were both identical with physical brain states and also the cause of behavior (Armstrong 1968). There was, however, a further problem with the identity theory. It seemed to require that anybody in any place at any time who experienced a particular mental sensation—say the redness of a tomato—must be in a brain state (or undergoing a brain process) identical to that of anyone else undergoing that same experience. This possibility seemed unlikely on empirical grounds, and despite a half century of effort, neuroscientists at the start of the twenty-first century have yet to identify a single brain state or brain process that is found every time a specific mental state—such as seeing a red tomato or thinking “2 + 2 = 4”—is induced. It is also unlikely on theoretical grounds. Edelman, for instance, has pointed out that the particular state of a particular brain is the result not only of the evolution of the species but also of the life history of the individual, and no two are exactly the same, any more than two faces or two fingerprints are exactly the same. That is not the end of identity theory, however. Just as Place was able to point to differing meanings of the word “is” in his formulation
of “consciousness is a brain process,” so also there are more and less demanding senses of the term “the same as” or “identical with.” The stronger version, called by philosophers “type identity,” would indeed require that a given mental state—say the thought that “2 + 2 = 4”—be identified with exactly the same physical brain state whenever it occurred. This sense is undoubtedly what was intended by the early mind-brain identity theorists like Ullin Place, who was still defending it against the weaker version to the end of his life. In general, however, it is the weaker version, known as “token identity,” that is more likely to be argued for today. On this understanding, it is possible for the same mental state to be identified on two different occasions with two different physical brain states, without those physical states being identical with each other. Identity theory still has its staunch supporters, such as the theoretical psychologist Nicholas Humphrey of the London School of Economics (see Humphrey 2000 for a particularly clear contemporary argument for identity theory), and some version of it often appears to lie behind the work and pronouncements of neuroscientists like Francis Crick, but the various problems arising from it have led many philosophers of mind to seek other alternatives to Cartesian dualism. We turn first to the most extreme form of physicalism, known as “eliminative materialism,” and then to a cluster of semidualistic theories grouped for convenience under the heading of “nonreductive physicalism.” The influential theory known as “functionalism” is discussed in the next chapter.
Eliminative Materialism
Mind-brain identity theory was an example of reductive materialism. It assumed that mental states really existed but said that they were the same thing as certain physical brain states (just as the morning star is the same thing as the planet Venus). By contrast, eliminative materialism claims that only physical brain states exist. They are not identical with mental states and processes; they replace them. With this approach, all talk of the mind and of mental events is simply to be eliminated from psychology and replaced by descriptions of brain states and processes. This rather drastic solution to the mind-body problem is most often associated with the husband-and-wife team of Paul and Patricia Churchland, but the idea can be traced back to the Austrian philosopher Paul Feyerabend (1924–1994). Eliminative materialism (often called simply “eliminativism”) strikes many people as a silly idea, but it is not. It may be mistaken,
but it is not ridiculous. To see why, it is important to understand what its supporters are actually proposing to eliminate. They are not denying the reality of conscious experience. They are not saying that we should somehow replace our personal awareness of the world by a detached scientific description of it. That would indeed be ridiculous. What they are saying is that mental events—things like thoughts and beliefs and ideas—are not directly given experiences, as most of us imagine them to be. Rather, the concept of mental events forms part of a theory we use to interpret and apply our experience. And if mental states and processes are not things that actually exist but are only part of a humanly created explanatory theory, then it is perfectly sensible to consider replacing them by a better theory, a rigorously scientific one (Churchland 1981; Churchland 1986; Churchland and Churchland 1991; Feyerabend 1963). An example from another area of science will help to illustrate the idea. There was a time when certain medical conditions—epilepsy, for instance—were thought to be caused by demon possession. Today epilepsy is said to be caused by organic disorders in the brain. These are two different theories, rival theories, to account for epilepsy. If you accept the scientific explanation of physical brain disorder, then you do not need to retain the concept of demon possession. It can be eliminated from the description of the disease. It would of course be possible to salvage the theory of demons by saying that the organic disorder in the brain is itself the result of demonic activity. But medical science does not do that. It is content to account for the physical disordering in purely physical—that is to say, chemical and biological—terms. The demons are eliminated. In an exactly parallel way, say the Churchlands and other eliminativists, commonsense psychology (sometimes called “folk psychology”) has traditionally explained human behavior in terms of mental states and processes like hopes, fears, desires, and so on. But just as folk medicine’s belief that illness was caused by demons has given way to scientific explanations in terms of bodily malfunctioning, so folk psychology’s belief that behavior is caused by mental states and processes should give way to a scientific explanation in terms of physical brain states and processes. Mind-brain identity theory accepted that a description of brain states and processes gave a complete account of behavior, but it still kept the mental description alongside the physical one while insisting that they were identical. That, claim the eliminativists, is like accepting the modern scientific account of disease but keeping the demonic explanation as well. Whether we can take eliminativism seriously depends crucially
on whether we can accept the claim that mental entities are part of an explanatory theory. Most of us instinctively treat the mind and the mental events associated with it as given facts, as things in need of explanation. In terms of our analogy with illness, mental events are parallel with the disease. But the Churchlands insist that mental events are parallel with the demons, not the epilepsy. This claim becomes less strange if we stop thinking in terms of a clear-cut distinction between bare facts (like epilepsy) and explanatory theories (like demon possession). Instead, consider that the “fact” and the “theory” might be much more closely entwined, so that a theory itself influences the nature of the facts that it seeks to explain. This idea that what we think of as bare facts are actually “theory-laden” had been popularized in the 1960s by philosophers such as W. V. Quine (1908–2000) and Wilfrid Sellars (1912–1989) (Quine 1961; Sellars 1963). The suggestion here is that it is a mistake to think of a single, clearly defined fact—say epilepsy—on the one hand, and two rival explanations—say demon possession and brain malfunction—on the other. Rather, we should think of two different things altogether: epilepsy-thought-of-as-demon-possession and epilepsy-thought-of-as-brain-malfunction. Since the theory explaining the illness is now part of the definition of the illness, it is easy to see how denying the explanation could be interpreted as a denial of the illness itself. If, for me, suffering an epileptic fit is identical with being possessed by a demon, then if you say there is no such thing as demon possession, I hear you saying my epileptic fits don’t really exist. But you are not saying that. You are saying that the illness I attribute to demon possession is better understood as a consequence of a physical brain disorder. By accepting this way of thinking, I will not be denying the reality of my illness. I will be changing an outdated and false understanding of it for a truer scientific understanding. According to the Churchlands, it is exactly the same when we come to our conscious experience of the world about us. Because we have always experienced it “mentally,” interpreting it in terms of beliefs and hopes and fears, the claim that mental events do not exist sounds like a denial of our experience. But that is not what is intended. Rather, the eliminativists are saying that we should shift from experiencing the world mentally to experiencing it scientifically. Doing so, they claim, will be the equivalent of shifting from treating illness as demon possession to treating it as a disordering of the body. It will be a shift from a false theory to a truer one. Even when properly understood, the eliminative proposals have certain problems. For one thing, it is often unclear how far they see
eliminativism as a long-term goal, and how far they regard neuroscience as already so advanced that we can move to a nonmental experiencing of the world immediately. For another, the same philosophical ideas (deriving in part from Sellars and Quine) that underpin the eliminative project also throw doubt on the traditional sharp distinction between truth and falsehood. It therefore becomes debatable whether the Churchlands ought to claim so vociferously that folk psychology is false and scientific psychology true. It might be better to ask which is the more useful way of viewing the matter, rather than which is the true one. When these two genuine difficulties are added to the problem that many people seem willfully to misunderstand eliminativism, the net result has been that alternative versions of physicalism have in general proved more persuasive than eliminative materialism.
Nonreductive Physicalism
In this section we consider a number of attempts to steer a middle way between substance dualism and reductive physicalism, approaches that may be gathered under the umbrella heading of nonreductive physicalism. They are truly physicalist because they start with the reality of the material world and deny that anything else could exist without that physical basis. But they are nonreductive insofar as they all maintain—in different ways—that there exists in association with that physical bedrock a mental realm that is dependent on it but not reducible to it. These various proposals are perhaps best thought of as attempts to meet perceived difficulties in the identity theory of Place and Smart. Donald Davidson, for instance, began with a version of the weaker or token identity theory—of the kind put forward by Armstrong—and weakened it still further in one important respect (Davidson 1970). All forms of identity theory require what are known as bridging principles, which set out the nature of the link between the mental and physical descriptions of the identical event or state. In the case of the stronger version of the theory, these bridging principles take the form of “psychophysical laws,” which state that any given physical state will necessarily show itself as some particular mental state. The belief that mental states are multiply realizable (see above) was what led most physicalists to abandon type identity theory, which was governed by very strict psychophysical laws, in favor of the more flexible token theory. In the 1970s, Davidson went further and denied the existence of strict psychophysical laws altogether. As a result, his theory has been dubbed “anomalous (literally, ‘lawless’) monism.”
As a materialist, Davidson still holds that every mental event is identical with some physical event. So having dispensed with psychophysical laws, he needs some other bridging principle to determine the nature of the link between the two. His chosen candidate is supervenience, a concept borrowed from moral philosophy, where it is sometimes used to describe the relation between bare facts and the values attributed to them. The essential point about a relation of supervenience is that it is asymmetric. That is to say, the two partners in the relation are not equal, but one of them (the supervenient one) is dependent on the other (subvenient or basal) one. So a set of mental features (X) can be said to supervene on a set of physical features (Y) if they are related in such a way that a change in X must be accompanied by a change in Y, but a change in Y need not necessarily entail a change in X. According to this theory, two identical physical states must exhibit identical mental states, but two identical mental states might correspond to two different underlying physical states. This is what it means to say that supervenient properties are multiply realizable with respect to their associated basal properties. A leading exponent of supervenience through the 1980s and 1990s was Jaegwon Kim of Brown University, but recently he has pointed out that it is not really a theory of the mind-body relation at all. Unlike substance dualism or identity theory, it does not attempt any explanation of the relation and is better regarded simply as a description of it. It turns out that this trait makes supervenience compatible with a number of different explanatory theories that are unacceptable to a convinced physicalist such as Kim. He has therefore distanced himself from this approach after twenty years because of its failure to rule out theories such as dual aspect monism (Kim 1998). Neutral—or dual aspect—monism is a theory that was championed by no less a giant of twentieth-century philosophy than Bertrand Russell (1872–1970), and in essence it goes back more than two centuries earlier to Baruch Spinoza (1632–1677). The aim of these theories is to do justice to the reality of our conscious and mental lives without falling foul of the interaction problem. Their method is to avoid making either mind or matter into a fundamental substance. The mental and the material are thought of instead as two aspects of some underlying essence that is itself neither physical nor mental—or, if you prefer, is both physical and mental—but which we can only know or experience in one or other of these alternative ways. A modern version of this theory has been put forward by David Chalmers, the most prominent of the younger philosophers exploring the realm of the mind and consciousness (Chalmers 1995; Document 9, this
volume). He has proposed a double aspect theory in which the underlying neutral substance is “information.” Information is to be understood here in a technical sense first introduced by Claude Shannon in 1948 and now widely employed in the world of electronic communications. Chalmers himself admits that this suggestion is speculative and “more likely than not to be wrong” (Chalmers 1997, 32). But he put it forward in the hope that it might help progress toward a more satisfactory theory. One immediate objection to any dual aspect proposal is the consequence that everything that has a physical aspect also has a mental, or conscious, aspect. This belief is known as “panpsychism” (from the Greek, “everything-soul/mind”) and is regarded by many as either plain crazy or else a direct route back to animism and superstition. It is a measure of Chalmers’s commitment to his dual aspect theory that he is willing to accept panpsychism as the price for embracing it. That is not the kind of thing to appeal to a physicalist like Kim, although Chalmers freely uses the language of bridging principles and supervenience in explaining his ideas. For Chalmers, both the mental and the physical are equally fundamental and universal properties of the universe, and his theory is sometimes called “fundamental property dualism” to distinguish it from another approach to the mind-body relation, called simply “property dualism.” The essential difference between the two theories is that ordinary property dualism is a genuinely physicalist theory and sees the mental realm as being dependent upon the physical and arising in some way from it. On this understanding, everything is physical; it is just that some things—awake human beings, for instance—have additional mental properties on top of (supervening on) their basic physical structure. This idea directly contrasts with dual aspect monism, which maintains that everything has both a physical and a mental aspect (and hence entails some form of panpsychism). What makes ordinary property dualism nonreductive is the claim that the mental, although dependent on its physical substrate, is not to be totally identified with it but is something over and above the physical sum of things. Asked how that can be, most property dualists come up with some version of emergence theory. To explain it, I shall return to John Searle because he is one of the best known and clearest exponents of emergentism. He is also regarded by most of his colleagues as a property dualist, although he denies it, and in 2002 he published an article titled “Why I Am Not a Property Dualist.” An emergent property is a feature of a whole system that is not exhibited by the parts that make up the system. The classic example, wheeled out by Searle
time and again in his highly entertaining but intellectually rigorous lectures, is the wetness of water. He will twirl the glass of the said liquid provided to keep the speaker’s throat lubricated and remind you that all it contains is molecules of H2O. Individually, they do not exhibit the qualities of wetness and liquidity, but taken all together they do. There is no secret extra ingredient. Water in bulk has the emergent property of liquidity that is not a feature of its parts. To use his oft-repeated definition, the emergent property of the whole is “caused by and realized in” its constituent parts. So conscious mental states are caused by and realized in the constituent neurons of the brain. “Brains cause minds” is his slogan (Searle 1984, 20–22). It is often objected that the analogy with water is inadequate, because the properties of H2O molecules and of water in bulk are all objective physical properties, whereas the whole point about consciousness is its subjective quality. For this reason there are other philosophers, such as Michael Silberstein of Elizabethtown College in Pennsylvania, who go further and say that consciousness requires a more radical emergence theory, in which the organism exhibiting the emergent property not only has features that are different in kind from those present in its parts individually but has features of a kind whose nature and existence could not even be predicted simply by a knowledge of the parts (Silberstein 2001). Whether such radically emergent properties exist in nature (other than as an explanation of consciousness) is disputed, although some critics concede that quantum physics may offer a genuine example of the phenomenon.
6
The (Un)Conscious Computer
In May 1997 the Russian grandmaster and chess world champion Gary Kasparov took on IBM’s chess-playing computer Deep Blue in a best-of-six-games challenge match held in New York. Kasparov predicted: “A win for Deep Blue would be a very important and frightening milestone in the history of mankind” (quoted in Lyons 2001, 142). Well, Deep Blue did win, by two games to one with three drawn, and Kasparov was indeed scared. But was he right to be? Was his chess opponent really the forerunner of a breed of supercomputers that will take over the world? A lot of people think so, like Marvin Minsky at the Massachusetts Institute of Technology (MIT), a keen supporter of artificial intelligence (AI), who claimed twenty years ago that the next generation of computers would be so intelligent that we would be “lucky if they are willing to keep us around the house as household pets” (quoted in Searle 1984, 30). But others are skeptical, like Berkeley’s philosopher of mind John Searle, who thinks that all the fuss over computer chess is just “crazy” and dismisses Deep Blue as “just a hunk of junk that somebody’s designed” (Weber 1996). In this chapter, I try to untangle the facts from the fiction in the battle over machine intelligence and whether the conscious mind works like a computer.
A Computer Is Just an Adding Machine
The debate about machine consciousness is bedeviled from the start by the fact that a number of important terms in the computer world are—at best—ambiguous, and they confuse the issue considerably (Tallis 1994). Originally used for humans, these terms were applied metaphorically to machines and then (by now taken literally) reapplied to humans to make them seem machinelike. These key words, such as information, memory, and representation, always need to be handled with care.
Gary Kasparov plays against Deep Blue. There were a variety of attitudes toward what Deep Blue was actually doing. The IBM scientists who designed and built it said it was less “intelligent” than even the stupidest human, pointing out that it was incapable of intuition, let alone feeling. (Laurence Kesteron/CORBIS SYGMA)
The confusion begins with the word “computer” itself. Nowadays we typically think of a computer as a machine, but in its first use, the word was applied to human beings whose job was to compile mathematical tables of various kinds. They included the actuarial tables used by insurance companies to calculate different risks and the premiums appropriate to charge in relation to them and the tables produced by the Navy Board to assist the calculation of a ship’s position at sea. The production of these tables involved repeating many hundreds of essentially simple sums, and this activity was known variously as reckoning, calculating, or computing. It was a tedious and time-consuming business liable to human error from fatigue; and mistakes could have disastrous consequences. Consequently, from the seventeenth century onward there was an incentive to develop a mechanical calculator or computer that would take the drudgery out of compiling these tables and reduce the likelihood of mistakes being made in them. It is worth following in some detail the history of human aids to calculation because it sets out the basic evidence that a computer is just a tool, one of those things made by human beings to enable us to do certain things better or faster than we could otherwise. We make knives to help us cut, automobiles to help us move from place to place, and calculators or computers to help us add things up. On this view, computers can never be conscious or even intelligent. A modern high-speed computer like IBM’s chess-playing Deep Blue may seem to be quite magical and very clever, but—so the argument against machine consciousness goes—it is in essence no more than a large abacus. The abacus is the earliest recorded form of computing device. In ancient Europe, it originally consisted of a flat stone or board marked with lines to indicate units (tens, hundreds, etc.) and using pebbles as counters. The Latin word for pebbles is calculi, from which we get our word “calculate.” One of the oldest counting boards of this kind still to survive is the Salamis Tablet, discovered on the Greek island of that name in the nineteenth century and thought to date from the third or fourth century B.C.E. A later version of the counting board had grooves to keep the counters in straight lines in which they were free to slide. They were made of various materials, including wood and metal, and writers from the time of the early Roman Empire (around the first century C.E.) speak of the counters being made also of ivory and of colored glass. The arrangement of beads on parallel wires—which is the form of the abacus most common today—is a comparative latecomer that first appeared about 800 years ago in China. In the basic form of the abacus or counting board, the lines representing units are each divided into a lower section with five counters (probably originating from the five fingers—digits—on the human hand) and an upper section with either one or two counters. When counting, the figures one to five are indicated by moving the appropriate number of the five lower counters away from their base position; figures from six to nine are indicated by moving one of the upper counters (representing five) plus the appropriate number of the lower counters to make up the required total. To indicate the figure ten, all the counters in the units column are returned to their base positions and one counter in the lower section of the tens column is moved up. And so on. The basic function of the counting board was to record figures, but it could also help with addition. A person could set the pebbles to represent one figure and then, without resetting the pebbles first, move them further to indicate the addition of a second number. This could be done by dealing with one column at a time and then reading off the pebbles at the end. There is nothing mysterious or magical about the moving of the pebbles. They are certainly not doing any counting or adding up for themselves. But they do enable the user of the abacus to read off the answer to the sum without needing to hold all the calculations in his or her head. The argument being pursued here is that the same thing is true of the modern computer.
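For readers who like to see such things spelled out, here is a small sketch in the programming language Python (my own illustration, needless to say, not anything the ancients wrote) of a three-column counting board being used to add two numbers. Every "decision" in it was made by the programmer, just as every movement of the pebbles was made by the human reckoner:

    # A toy counting board: one entry per column, holding the number of
    # pebbles standing for units, tens, and hundreds respectively.
    def add_on_board(board, number):
        """Add 'number' to the board one column at a time, carrying by hand."""
        digits = [int(d) for d in str(number)][::-1]   # units column first
        for column, digit in enumerate(digits):
            board[column] += digit                     # lay down extra pebbles
        for column in range(len(board) - 1):
            while board[column] >= 10:                 # ten pebbles in a column...
                board[column] -= 10                    # ...are cleared away
                board[column + 1] += 1                 # ...and one pebble moves left
        return board

    board = [7, 2, 0]                # the board already shows 27
    print(add_on_board(board, 45))   # [2, 7, 0] -- the board now shows 72

The pebbles (here, mere integers) do no arithmetic of their own; the loop simply records the moves a human operator would otherwise make by hand.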
Renaissance genius Leonardo da Vinci conceived the idea of interacting cogwheels as an aid to calculation centuries before anyone else. In 1967 researchers studying two of his rediscovered notebooks came across sketches and notes relating to a mechanical calculator. (Courtesy of Thoemmes Press)
The next stage in our story—a crucial link between the ancient abacus and the modern computer—concerns the introduction of interacting cogwheels (like those in a mechanical clock) to take the place of moving the pebbles by hand. In this as in many other things (e.g., submarines and flying machines), the Renaissance genius Leonardo da Vinci (1452–1519) conceived the idea centuries before anyone else, but his investigations came to light only in 1967 (see http://www.maxmon.com/1500ad.htm). In that year, researchers studying two of his rediscovered notebooks came across sketches and notes relating to a mechanical calculator. They are sufficiently detailed for a working model to have been built, based on Leonardo’s drawings, but there is no evidence that he ever put his ideas into practice. So the honor of creating the first mechanical calculator falls to the French mathematician and philosopher Blaise Pascal (1623–1662). Pascal’s adding machine (named the Pascaline after its inventor) was built in 1642 when its maker was just nineteen years old. Its fundamental mechanism was exactly the same as in the modern odometer, which records the miles traveled by a bicycle or automobile. A cog or gear wheel representing the “units” column can be set to any single digit from 0 through 9. This number shows from behind a display window in the casing of the machine. From the cog’s starting position, one-tenth of a revolution is required for each successive digit to appear as the wheel is advanced. The first wheel engages with a second one, placed to its left, in such a way that a full revolution of the first wheel causes the second wheel to advance by one-tenth of a revolution. This will cause the second wheel (the “tens” column) to show the digit 1 in its display window, as the first wheel’s display returns to 0. The total display thus reads 10. Note that what has been achieved is no more and no less than what was accomplished with the abacus. In both cases, when the units column/wheel has reached ten, it is reset to zero and the tens column/wheel registers one (which indicates the total of ten just reached by the units).
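In the same illustrative spirit as the counting-board sketch above, the carrying mechanism of the Pascaline can be mimicked in a few lines of Python (again my own toy version, not Pascal's design; it assumes the leftmost wheel never overflows):

    # A toy Pascaline: wheels[0] is the units wheel, wheels[1] the tens, and so on.
    def turn_units_wheel(wheels):
        """Advance the units wheel one step; a full revolution engages the next wheel."""
        i = 0
        wheels[i] += 1
        while wheels[i] == 10:   # a complete revolution of this wheel...
            wheels[i] = 0        # ...returns its display to 0, while a single tooth
            i += 1               # engages the wheel to its left and turns it
            wheels[i] += 1       # one-tenth of a revolution
        return wheels

    wheels = [9, 9, 0]               # the display reads 099
    print(turn_units_wheel(wheels))  # [0, 0, 1] -- the display reads 100

Nothing in that while loop "tells" anything to anything: one step mechanically triggers the next, which is exactly the point of the story that follows.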
“I see,” said my wife, as I explained all this to her. “When the first cogwheel has done a complete revolution, it tells the second cogwheel to clock it up.” Quite innocently, my wife had here fallen into the trap that catches all of us sooner or later when we are thinking about computers. What we are describing is a purely mechanical operation, but we use words that are only appropriate to conscious beings. The first cogwheel does not tell the second one anything. There is simply a tooth on the first wheel that engages with a tooth on the second wheel and turns it by one-tenth of a revolution. And that had been carefully arranged by Pascal when he designed the machine, so that he would know (not so that the machine would know, but so that the human operator would know) that one complete revolution of the units wheel had taken place. It is all exactly the same as when the user of the old abacus reset the units column of pebbles and moved up one of the pebbles in the tens column. It is a mechanical move, carried out by the operator for his or her own benefit. Neither the pebbles nor the cogwheels do any counting, thinking, or communicating of information. Only the human operator does those things. There are two reasons why we might be tempted to imagine that Pascal’s calculator is thinking or counting in a way that the abacus is not. The first is that the operation of the gears is hidden from us by the casing of the machine. All that we can see on the outside are the dials to set the numbers and the figures that show through the display window provided for each wheel. And when we cannot see what is going on, we begin to imagine things. With the abacus there was no such temptation, because we could see the pebbles being moved by the operator. There is a second reason that we might imagine the Pascaline is counting of its own accord. The human thinking that lies behind the successful adding up was all done in the distant past when the machine was designed and built, not at the present time when it is being used. With the abacus, it is obvious that I am resetting the units pebbles and advancing the tens pebble because I do it with my hand at the exact point in the calculation where it is necessary. But Pascal’s mechanical calculating machine appears to reset the units and clock up the first of the tens all by itself. That is nonsense. Pascal has done it, as surely as if he were sitting here manipulating an abacus. But he has done the work previously, at the research and development stage of the machine, not at the moment when we use it and see the result. Thus again—though in a rather different way this time—the machine’s working is hidden from us, and so it appears to be mysterious. It is exactly these same two kinds of hiddenness that deceive us when we are tempted to regard a modern computer as counting or
thinking by itself. The mechanical action, no longer a matter of manipulating pebbles or even of turning cogs, but of electronics, is hidden in the machine’s wires and silicon chips. And the human thinking, which is in fact responsible for the results displayed on the screen, has been done by the computer’s designers and programmers long before the machine ever came into our hands. In this view, the computer can no more think about what it is doing than the pencil I use to jot down the figures I want to add up can think about what it is doing. They are both simply tools to help me do my sums. Neither has any thoughts, goals, or intentions of its own but just helps in carrying through those of its designer or user. The symbols each creates—the figures scribbled by the pencil or the characters appearing on the computer screen—have no meaning for the tool that is used to create them. Neither a pencil nor a computer is a conscious mind with understanding. The only thing that makes us doubt this obvious fact is that in a modern computer, the electronic machinery is hidden out of sight and works very fast. Taking these two qualities together, it appears to produce its results almost by magic. It is obvious that I am manipulating my pencil. It is less obvious who, if anyone, is manipulating the computer.
A Computer Is a Functional System
I have so far given a very one-sided and negative account of the possibility that computers might be intelligent, let alone conscious. To appreciate the other side of the argument, we need to consider a different kind of answer to the question, What is a computer? Instead of dismissing it as a mere calculating tool, we should think about its function and assess it in terms of what it actually does. Putting the matter in as broad a way as possible, we could say that a computer is a system that receives information (the input, for example, from a keyboard), processes it according to certain rules (contained in its software program), and creates an appropriate output (for example, in the form of a display on the screen). Now some people—those who take a computational view of the thinking process—say that this is also a good outline description of how the mind-brain works. In their opinion, the mind-brain is essentially a system that receives information (the sensory input, for example, from the eyes and ears), processes it in a regular way (determined by its network of neurons), and creates an appropriate output (in the form of a signal to the body’s “motor” system, resulting in movement or speech). This approach to the mind-body problem is called “functionalism.” It was deliberately left out of the
survey in Chapter 5 because it is the philosophical standpoint most closely allied with the claims of artificial intelligence and machine consciousness, and a discussion of it fits more naturally here. Functionalism is a theory in the philosophy of mind that thinks of mental states rather as we think of patterns. A pattern—say a six-pointed star—can be made out of anything. It may consist of pencil lines on paper, light bulbs on a billboard, buttons on a coat—it makes no difference. The thing that makes the pattern a star and not a circle or a crescent is the mutual relation of its constituent parts, not the material out of which those parts are made. In a somewhat similar way, functionalism says that mental states are determined by their functional relations. They are determined by their relations to (1) their sensory stimulation or input, (2) other inner states, and (3) their behavioral effects. Suppose, for example, I experience pain by placing my hand too close to a hot stove. My pain is understood in reference to (1) the physical stimulation I receive from the hot stove, (2) its causal impact on other mental states I have, such as worry, and (3) the behavioral effects I exhibit, such as saying “ouch.” The distinctive feature of functionalism that concerns us here is its implication that human mental states are not restricted to human biological systems, such as brains. This idea follows from the claim that mental states are determined solely by their relations and not at all by the physical makeup of the elements in the relationship. According to functionalism, any nonbiological system that exhibits the same functional relations as some human’s mind-brain can be said to have the same mental state as that particular human. Thus a system of computer chips set up with the appropriate functional relations would have a mental state, and because mental states are not based on intrinsic properties, such as the stuff they are made of, the same state may be shared by things with different physical makeups. In the language of the philosophers, mental states are “multiply realizable,” and an identical mental event may be “instantiated” in a number of different physical systems. Thus in my example of the star pattern, we could say that the pattern was multiply realizable because I gave instances of its being instantiated in pencil lines, light bulbs, and buttons. By distinguishing between the role that a mental state plays and the material setup in which the state exists, the functionalist is inviting a comparison between the hardware/software distinction in computer science and the brain/mind relation. However, it is only a short step from metaphor (“it is as if ‘a’ were ‘b’”) to identity (“it is the case that ‘a’ is ‘b’”). Hence the suggestion that mind-is-to-brain as software-is-to-hardware came by the 1970s to be treated almost as an established fact,
and functionalism became the most fashionable position in the philosophy of mind during the last quarter of the twentieth century. Historically, functionalism developed out of a dissatisfaction with the behaviorism and identity theory of philosophers such as Ullin Place and Jack Smart (see Chapter 5). An important early presentation of it appeared in an article titled “Minds and Machines” by philosopher Hilary Putnam, who was later to become professor of mathematical logic at Harvard University. Published in 1960, this article already emphasized functionalism’s value as a way of linking the study of the mind with the then fledgling world of computer science. In particular, the crucial distinction between mental states and their physical instantiation offered psychologists a lever with which to pry their subject from the grasp of the neuroscientists. If the extreme physicalism of identity theory had proved true, then psychology as an independent science would have vanished. Functionalism, said Putnam, encouraged a more abstract level of description than pure physicalism had done. It is interesting to note in passing that although functionalists have generally associated themselves with materialism (that is, the view that only material things exist), there is a dualism lurking beneath the surface. For if any given mental state cannot be reduced to the physical mechanism that produces it (whether neurological or silicon-based), then mental states must be something more than the merely physical. Philosopher Jerry Fodor, writing in 1981 at MIT, also noted the “certain level of abstraction” that typified the cognitive sciences, such as computational theory and psychology. He commended functionalism to the readers of Scientific American precisely because of its breadth of applicability: it “recognizes the possibility that systems as diverse as human beings, calculating machines, and disembodied spirits could all have mental states” (Fodor 1981, 124). It is time to introduce into our story of the mind and the computer the name of one of the most brilliant minds of the twentieth century: Alan Turing (1912–1954). He is perhaps best known as the man who cracked the “Enigma” code used by the German military during World War II, so helping to win the Battle of the Atlantic and to save the lives of many U.S. and British sailors involved in the convoys of the early 1940s. He was a mathematical genius, and long before the war, when still in his early twenties, he developed the concept of a general-purpose computer now known as the Turing machine. This is a slightly misleading name, because what Turing produced was not a physical machine but a description of the minimum set of step-by-step instructions out of which even the most complex computational tasks
can be constructed. Turing showed that any actual machine capable of carrying out these simple, basic instructions could in principle—given enough time—solve any problem, no matter how complicated, that was capable of being tackled in a series of such steplike procedures. The instructions for these procedures are called algorithms. An algorithm does not have to be complicated. An example of a one-step algorithm is the rule for converting feet to inches: “multiply by 12.” Converting miles to kilometers takes two steps: “multiply by 8 and divide the result by 5.” Converting degrees Celsius to degrees Fahrenheit takes three: “divide by 5, multiply the result by 9, and add 32.” These examples serve to illustrate the idea of an algorithm as a series of simple one-step procedures. A computer program consists of a set of algorithms, and any process capable of being carried out by a computer is said to be algorithmic or computational. But even such simple instructions as those just given would need to be broken down further still for a computer, because a Turing machine can only add or subtract one symbol (digit) at a time. So the instruction “multiply by 12” has to take the form “add the original number to a running total 12 times,” and this in turn must be made up of a series of steps, each of which boils down to “add 1” however many times is necessary. So the question about the mind-brain being a kind of computer, as functionalism claims, becomes a question of whether the human mind is algorithmic. If it is, then it should be possible to produce an artificial algorithm or computer program that will mimic a human mind so closely that one cannot tell the difference. The process of building one’s own version of something by studying the original and trying to copy it is known as reverse engineering, a term borrowed from industry, where it is a common practice among companies trying to work out how a competitor has managed to achieve some technical breakthrough or other. The classic test for reverse-engineered artificial intelligence was again devised by Turing, and he published it in 1950 in a paper titled “Computing Machinery and Intelligence.” It is universally known as the Turing test. He claimed that if someone carried on two separate conversations with two unseen correspondents, all replies being typed out, and could not deduce from the replies that one of the correspondents was in fact a computer, then that machine was intelligent. No machine meeting these criteria has yet been built, but the test is carried out every few years, and sooner or later someone is bound to fail to “spot the difference,” just as sooner or later a computer was bound to beat a chess grandmaster. But that may still not prove that the machine is intelligent, let alone conscious.
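To make the idea of an algorithm concrete, here is how the three conversion rules just given might look in Python, together with a multiplication unpacked, in the Turing-machine spirit, into nothing but repeated addition (the layout and names are my own illustrative choices, not Turing's notation):

    # Three everyday algorithms, one arithmetic step per rule.
    def feet_to_inches(feet):
        return feet * 12                 # one step

    def miles_to_kilometers(miles):
        return miles * 8 / 5             # two steps (a rough conversion)

    def celsius_to_fahrenheit(c):
        return c / 5 * 9 + 32            # three steps

    # A Turing machine can take only one tiny step at a time, so even
    # "multiply by 12" must be unpacked into repeated addition.
    def multiply_by_12(n):
        total = 0
        for _ in range(12):              # add the original number 12 times over
            total = total + n
        return total

    print(feet_to_inches(3))             # 36
    print(celsius_to_fahrenheit(100))    # 212.0
    print(multiply_by_12(7))             # 84

Each function is algorithmic in exactly the sense intended here: a fixed series of simple steps that a machine can follow blindly, with no understanding of feet, degrees, or anything else.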
There is a common assumption that the ability to reverse engineer something is the ultimate proof that we have fully understood it and know everything there is to know about it. However, with a subject like AI, a distinction needs to be made between reproducing an effect and replicating the exact means of achieving it. Take as a parallel the case of what we might call artificial flight. When the early would-be aviators began designing their flying machines, they naturally looked carefully at flying creatures to learn something of how it was done. But the Wright brothers’ plane did not fly in the same way as a bird. The same essentials—forward thrust, lift, and so on—are achieved in both cases but by different means. Lessons were certainly learned from nature, such as the airfoil shape for the cross-section of the wing, but they were limited. There were no successful attempts to get results by flapping the wings of airplanes. In the case of the chess-playing computer, attitudes varied as to what Deep Blue was actually doing. The IBM scientists who designed and built it said it was less “intelligent” than even the stupidest human, pointing out that it was incapable of intuition, let alone feeling. Meanwhile, the hapless Kasparov, who got beaten by it, was more inclined to assign intelligence to it, looking for signs of cleverness in the result rather than in the method. “I don’t care how the machine gets there,” he complained. “It feels like thinking” (Johnson 1997). Herbert Simon, a professor of computer science and psychology at Carnegie Mellon University in Pittsburgh (where Deep Blue’s precursor Deep Thought was built by a group of graduate students), agrees. Covering himself by the remark that there are of course different types of thinking, he nonetheless confesses that he would “call what Deep Blue does thinking” (Weber 1996). Simon is a Nobel laureate and himself a former designer of chess-playing programs, so his views are not to be ignored or set aside lightly. Here is why he takes the stance that he does. The computer operates by a mixture of what is called brute force, which is the ability to accomplish millions of calculations per second, and selectivity, namely the avoidance of wasting time on calculating moves whose outcomes have only a very small likelihood of proving useful. With a game like tic-tac-toe—in which the nine squares can be filled in at most 9 × 8 × 7 × ... × 1 = 362,880 different orders—a computer as powerful (i.e., as fast) as Deep Blue could afford the time to calculate every possible outcome of every possible move. In other words, it could function by brute force alone. But chess is altogether another story. With sixty-four squares, and a large variety and number of pieces and pawns on each side, the number of theoretically possible games has been estimated at a staggering 10 to the 120th power.
Written out in full, that is: 1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 (a 1 followed by 120 zeros). Clearly some selectivity is necessary here, even for Deep Blue. (At the time it beat Kasparov, it was claimed that the computer could evaluate 200 million positions per second. Even at that rate, it would take on the order of 10 to the 104th power years—incomparably longer than the roughly 14 billion years the universe has existed—to analyze all the possibilities.) So selectivity is built into the chess-playing program. Simon believes that human thought is also a combination of brute force and selectivity, although the ratio of the former to the latter is obviously much greater in the case of the computer. Nonetheless, he treats all thought as being some combination of the two, and this belief forms the basis of his claim that Deep Blue is genuinely thinking. That is to say, the computer is not just simulating human-style thinking but is replicating its essential features. Douglas Hofstadter is another professor of computer science—at Indiana University—and wrote the Pulitzer Prize–winning book Gödel, Escher, Bach, exploring different avenues of human intellect and creativity. Like Simon, he now accepts that there is a genuine similarity between the way humans and a computer such as Deep Blue play chess. But he draws a different lesson from this belief than Simon does. Instead of concluding that Deep Blue is capable of thinking, he makes the opposite deduction: “I used to think chess required thought,” he says. “Now I realize it doesn’t” (Weber 1996). At the time he wrote his book in the late 1970s, he regarded playing chess as a creative activity akin to music or art, but the success of the computers has persuaded him that that was a mistake. It is instead just a cerebral activity with no emotional content. Hofstadter is not the only person to revise his ideas. Fodor was quoted earlier as a promoter of functionalism in the early 1980s, and he was one of the philosophical architects of the dominant information-processing theory in cognitive science. But twenty years later, he was having second thoughts. In his book The Mind Doesn’t Work That Way, published in 2000, he bemoans the extent to which the project for understanding mental processes through AI has failed and even admits that his own influential book, The Modularity of Mind (1983), adopted an unhelpful policy. In fact, he tells us, what cognitive science has actually found out about the mind is that mostly we do not know how it works. That makes him annoyed with overly optimistic accounts of how well things are going, exemplified in Steven Pinker’s 1997 book How the Mind Works (whose title inspired Fodor’s own).
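The arithmetic behind that parenthetical estimate is easy to check for oneself. Here is a back-of-envelope calculation in Python, using only the figures quoted above (the variable names are mine):

    # How long would pure brute force take, at Deep Blue's claimed speed?
    tictactoe_orderings = 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1  # 362,880: trivial
    chess_possibilities = 10 ** 120       # the figure quoted in the text
    rate = 200_000_000                    # positions evaluated per second
    seconds_per_year = 60 * 60 * 24 * 365
    years = chess_possibilities / (rate * seconds_per_year)
    print(tictactoe_orderings)            # 362880
    print(f"{years:.1e}")                 # about 1.6e+104 years

Even a trillionfold increase in speed would barely dent an exponent like that, which is why the selectivity, not the raw speed, carries the real weight of the argument.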
Pinker is a psychologist who unashamedly accepts the computational model of the mind and sees reverse engineering as the key to understanding how it works. “When you rummage through an antique store,” Pinker said in an interview, “and come across a contraption built of many finely meshing parts, you assume it was put together for a purpose, and that if you only understood that purpose, you’d have insight as to why it has the parts arranged the way they are.” I was amazed when I first read these words of Pinker’s. They are almost identical to those of Archdeacon William Paley (1743–1805), an eighteenth-century English clergyman, who likened the intricate workings of nature to those of a watch found abandoned on a heath. A stone, Paley said, might have been produced by accident; an artifact like the watch could only be the result of deliberate manufacture. Paley is famous for using this evidence of design in the universe to argue for the existence of God, the cosmic creator. But Pinker is a disciple of Richard Dawkins, whose neo-Darwinism insists that all apparent evidence of design is in fact explicable in terms of natural selection. And nature is The Blind Watchmaker, as Dawkins calls it—parodying the Archdeacon’s God—in the title of his book on the subject. So Pinker has to backtrack smartly. Having enthused over the finely meshing parts of the treasure in the antique store and having spelled out the parallel, “That’s true of the mind as well,” he has quickly to add, “though it wasn’t designed by a designer, but by natural selection.” But a bit later on he is back again, suggesting we can make sense of things “by putting ourselves in the shoes of the fictitious engineer behind natural selection” (interview by John Brockman, Edge 3, January 11, 1997). Pinker’s need to invent a fictitious engineer eloquently points up a major problem with the computational model of the mind, one that is exacerbated when it is combined with an evolutionary explanation of its origins. The problem is that whether we treat a computer as an adding machine or as a functional system, it really only makes sense in the context of an external user. And if the mind is a computer, who is using it?
Consciousness as an Intrinsic Feature of a System
The problem we have now encountered is one that has been a particular concern of the Berkeley philosopher of mind John Searle, at least since 1980, when he first told his story of the Chinese Room. Searle puts the matter like this: Consciousness is an intrinsic feature of human (and some animal) nervous systems, in virtue of which they are conscious subjects. In contrast, information processing is typically
a concept in the mind of an observer and not intrinsic to the system handling it. So in the case of a computer, we may treat it as the recipient and storer and processor of information, but in itself, intrinsically, a computer is just an electronic circuit. We design and build and use these circuits because we can interpret their inputs and outputs as useful information and so treat the hidden things that go on in between as information processing. But all this is in the eye of the beholder; it is not intrinsic to the computational system, neither the hardware nor the software. We speak of the electrical state transitions in the computer as “symbol manipulation,” but again this description makes sense only in the context of a symbolic interpretation applied to the electronics by the programmer or the user. These self-evident facts Searle uses to argue that a computer could never be conscious, or intelligent, or “think” in the sense that we do, even if it could pass the Turing test. Put another way, intelligence requires understanding, and understanding requires meaning—“semantics”—whereas computers have only “syntax.” Syntax here refers to the formal “rules” by which computers manipulate symbols. A computer can be programmed to output certain symbols in response to a certain input of symbols, but those symbols will never have any meaning for the computer or its program. Only the human operators provide the meaning, which is what the Chinese Room story illustrates. Searle’s Chinese Room is a fictitious locked space with someone in it who speaks no Chinese but has a supply of different Chinese symbols, together with instructions for using them (written in English). When Chinese characters are passed in to him or her, the person consults the instructions and passes out more symbols. Neither the input nor the output means anything to the operator within the room, but if his or her instructions are good enough, it will look to the outsider as though the person inside were answering in Chinese the questions in Chinese that were being passed in. That, claims Searle, is exactly the situation with the Turing test. Only from the outside does the computer appear to understand the questions and answers. Inside, all is a formal shuffling of meaningless symbols (Searle 1980; Document 6, this volume). For more than twenty years, this imaginary scene has been the focus of countless learned debates and challenging articles. As long ago as 1997, Searle counted more than 100 published attacks on it; at that time his great adversary Daniel Dennett of Tufts University unkindly commented that Searle might be able to count the attacks, “but I guess he can’t read them,” or he would not continue trotting out the same argument, basically unchanged, for fifteen years (quoted in
For more than twenty years, this imaginary scene has been the focus of countless learned debates and challenging articles. As long ago as 1997, Searle counted more than 100 published attacks on it; at that time his great adversary Daniel Dennett of Tufts University unkindly commented that Searle might be able to count the attacks, “but I guess he can’t read them,” or he would not continue trotting out the same argument, basically unchanged, for fifteen years (quoted in Searle 1997, 116–117). Now make that twenty-two years. Dennett scorns the Chinese Room, writing, “It has proven to be an amazingly popular number among the nonexperts, in spite of the fact that just about everyone who knows anything about the field dismissed it long ago.” And he claims that it was “back in 1982, when Douglas Hofstadter and I first exposed the cute tricks that make the Chinese Room ‘work.’” But Searle is unrepentant and continues to argue that none of the attacks has succeeded in destroying his argument or its simple message: Computer programs are syntactical, whereas minds have semantic content; syntax by itself is not the same as, nor by itself sufficient for, semantics. Therefore computer programs are not minds and vice versa. For a description of one kind of computerized system that is claimed by its researchers to be able to learn and remember, see the discussion on artificial neural nets (ANNs) in Chapter 8. Coming to grips with the whole computer-mind question involves resolving a confusion between simulations (or models) and replications (or duplicates). Searle points out that a computer simulation of a stormy weather pattern will not make anyone wet. In just the same way, a computer simulation of a person playing chess or speaking Chinese does not mean the computer is thinking. A computer can simulate something without replicating it, can model something without becoming it. A functionalist, however, is likely to claim that if the computer functions like a mind-brain, then in all important respects it is a mind-brain. A model (or simulation) is a system that has both similarities and differences compared with the original, usually a difference of scale. Thus a model of the solar system is useful because it is much smaller than the real thing, whereas that of a deoxyribonucleic acid (DNA) molecule is much larger than the original. In other cases, a model will enable the time taken by some process to be speeded up, in order to predict a likely outcome, or the details of some complex system to be simplified in order to focus on the essentials. In every case, the model’s value lies in its ability to behave like the real thing in certain respects that enable us to learn more about the original. Suppose the planning authorities want to make a model that can predict whether a proposed hotel, built 1,500 feet from the lakeside, will obscure the view from the beach of a local mountain peak that is 5 miles away and 5,000 feet high. One possibility would be to build a three-dimensional, scaled-down version of the landscape and run a straight wire between the relevant point on the beach and the top of the mountain. This represents the line of sight between the shore and the peak, and if the scale model of the hotel fits under the wire, then
the view will not be spoiled; if it is higher than the wire, then in real life the hotel—if built—will cut off the view. That model would work but would itself be expensive to build, and the same information could be gained just as effectively and much more cheaply by simply drawing on paper a cross-section of the scaled-down version. This two-dimensional diagram will show to scale the horizontal distances from the beach to the hotel and to the mountain peak, respectively. Two vertical lines drawn at the appropriate distances will represent to scale the heights of the mountain and the hotel. A diagonal line can then be drawn from the point representing the beach to that representing the top of the mountain. This line is the equivalent of the wire in the three-dimensional model. If it passes over the vertical line representing the hotel, then the view is clear; if it cuts through it, the view is obscured. The two-dimensional model is just as effective as the more complex and more costly three-dimensional version. But there is an even simpler way to model the scene. To see how this works, make a pencil-and-paper diagram like the second model but do not bother drawing it to scale. Just write in the lengths: the horizontal distance from the beach to the proposed hotel site is 1,500 feet, that from the beach to a point directly below the mountain peak is close to 25,000 feet (5 miles is 26,400 feet; a round figure keeps the arithmetic simple), and the height of the mountain is 5,000 feet. Just label the height of the hotel as “y” feet. You now have two right-angle triangles, a smaller one with base length 1,500 and height y, and a larger one with base length 25,000 and height 5,000. A very simple rule of geometry (the rule of similar triangles) says that when y is just long enough to touch the diagonal side of the larger triangle but not long enough to cut across it, then the ratio of the two sides of the smaller triangle and the ratio of the two sides of the larger triangle are equal. Surprising as it may seem, this simple rule is our third model of the lakeside scene. It is a genuine model because it accurately represents and “behaves like” the real thing, in the important sense that it relates the height of the hotel to the view from the beach to the mountain. Putting in the figures for our supposed scene that I gave above, the model shows that y divided by 1,500 equals 5,000 divided by 25,000. This “statement of equality,” or equation, can be written more simply in the form: y/1,500 = 5,000/25,000. Another very simple mathematical rule says that you can multiply both sides of an equation by the same number and they will still be equal. Since our main interest is in the height of the hotel, we can multiply both sides of our equation by 1,500 and get the new result y = 1,500 × 5,000/25,000. This works out as y = 300. In other words, if the proposed hotel is exactly 300 feet tall, its roof will just be on the sight line from the beach to
the top of the mountain. Therefore the model quickly tells the planning authorities that if they wish to retain the view of the peak from the lakeshore, they must insist that the hotel be built less than 300 feet high. The advantage in terms of time and cost of this third kind of model, known as a mathematical model, over the alternatives is clear even from this simple example. But it gets better. If we take the basic equation and replace the actual numbers used in this particular case with letters (for example, X and x for the distances from the lake to the mountain and hotel, respectively, and Y and y for their heights), then it takes the completely general form y/x = Y/X. The second half of the equation consists of one fixed number, the height of the mountain peak, divided by another fixed number, its horizontal distance from the beach. This figure is easily calculated and is not going to change. It is what mathematicians call a “constant,” and we may represent it in our equation by the single letter K. Our model now looks like this: y/x = K. We can multiply both sides of this equation by x (as we did in the example above, when x was 1,500) to give the form: y = Kx. In English, this says that the height of the hotel, when it just touches the sight line between the beach and the mountaintop, equals its distance from the beach multiplied by a constant number (which is easily calculated from the height of the mountain and its distance from the lake). This gives the general rule, which could be applied to any hotel between any mountain and lake anywhere: If the mountaintop is to be visible from the lake, the height of the hotel must be less than its distance from the beach multiplied by the number K. In a more mathematical form: y < Kx, if the mountaintop is to remain visible. This purely formal mathematical model is an example of what Searle calls syntax without semantics. The same basic model (y = Kx) may be used to represent not just heights and distances but any real-life situation where two elements are related in direct proportion (for example, crop size to acreage). It is because computer models are of this kind that they are so useful and can be adapted to so many purposes. It is also why they can have no intrinsic meaning but rely on external users to apply them to particular situations.
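Expressed as a scrap of program, using the figures from the example (a sketch of my own, not anything Searle offers), the model’s emptiness of meaning is plain to see. Nothing in the code knows about hotels or mountains; the user supplies all of that:

```python
# The mathematical model y = Kx, with the numbers from the lakeside example.
# The function is pure syntax: the same code could relate crop size to
# acreage, or any other pair of directly proportional quantities.
def max_height(x, Y=5_000.0, X=25_000.0):
    """Largest y at distance x that keeps the sight line clear (y < Kx)."""
    K = Y / X          # the constant: mountain height over mountain distance
    return K * x

print(max_height(1_500))  # -> 300.0, the figure worked out in the text
```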
Another attack on the computational description of the mind comes from Roger Penrose of the Mathematical Institute at Oxford University, in two related books titled The Emperor’s New Mind (1989) and Shadows of the Mind (1994). Like Searle, he seeks to show that a computer program lacks understanding and therefore cannot be equated with a conscious mind, but he goes further than Searle because he claims that computers cannot even simulate human thinking. As might be expected, the basis of his argument is mathematical, and it starts with the fact that all computer programs are algorithmic. As we saw above, an algorithm is a rule or set of procedures for manipulating numbers to solve a problem, which may be something as simple as converting feet into inches. In the 1920s, the German mathematician David Hilbert (1862–1943) suggested that it should be possible to state any mathematical proposition in a formal way such that an algorithm could be devised to determine its truth or falsehood. A few years later, in 1931, the Austrian Kurt Gödel proved this was not the case. He showed that for any potential algorithm intended to determine mathematical truth, there must be some statements (now known as “Gödel sentences” in his honor) whose truth it cannot determine. Some of these Gödel sentences are in fact true, and a human mathematician can see this, but the algorithm cannot. This capacity of the human mind, says Penrose, indicates that it is not wholly algorithmic, and therefore not wholly computable. From this observation it follows both that the mind is not itself computational and also that the mind cannot be adequately simulated by a computer. Critics have argued that even if Penrose is right when he says that human mathematical understanding involves more than computation, it is too large a jump to claim on that basis that all mental and conscious states are beyond and different from computation. He, however, claims that it is unreasonable to draw a sharp line between mathematical understanding and human understanding generally, or between human understanding as one aspect of human consciousness and other aspects such as seeing red or feeling pain, or indeed between human consciousness and that of other animals. He concludes that establishing noncomputability in any aspect of consciousness strongly suggests that it should be a feature of all consciousness, though he accepts that there is no logical requirement for this deduction. But this conclusion places Penrose in something of a dilemma because he is in danger of proving too much. He does not want consciousness to be tied to computation, but neither does he want it to be cut loose from physics altogether. He sets out four possible viewpoints one might have on the relationship between consciousness and computation:

1. Consciousness and other mental phenomena are evoked entirely by appropriate computational processes. (This he calls “Strong AI.” It is the position traditionally taken by members of the AI community, who are engaged in building “intelligent” computer programs and robots.)
2. Physical brain processes cause consciousness, and any physical action can be simulated on a computer, but the computational simulation cannot by itself produce consciousness. (This he calls “Weak AI.” It is Searle’s position.)
3. Physical brain processes evoke consciousness, but this physical action cannot even be properly simulated computationally. (This is Penrose’s own position.)
4. Consciousness cannot be explained by any physical, computational, or other scientific means. (A dualistic view that denies consciousness has any material basis.) (Penrose 1994b)
The difficulty with viewpoint 3, which Penrose adopts, is that present-day mainstream physics does not acknowledge any physical action that cannot in principle be simulated on a computer. This forces him to the margins of known physics to seek pointers to some as yet unknown physical law or principle that would allow consciousness to be both physically caused and noncomputable. His efforts in this direction are pursued in Chapter 9. While critics of artificial intelligence continue to put forward their theoretical arguments against it, supporters of machine consciousness and AI are putting their energies into the practical side of things. The best way for them to confound their critics is to succeed in creating intelligent programs and conscious robots. See Chapter 8 for reference to Igor Aleksander’s work in this direction.
7
Embodied Consciousness
On Wednesday, September 13, 1848, Phineas Gage—a young man in his midtwenties—was at work as usual as a foreman on the construction of the Rutland and Burlington Railroad in Vermont. At 4:30 P.M., he was engaged in the routine but potentially hazardous task of tamping the explosive charge into a hole drilled in the rock prior to blasting. It seems that Gage was distracted by some of his men and looked back over his shoulder while letting the tamping iron fall onto the powder. The resulting explosion shot the tamping iron back out of the hole directly at the turned head of the foreman. The local paper of the following day takes up the story: “The iron entered on the side of his face, shattering the upper jaw, and passing back of the left eye, and out at the top of his head. The most singular circumstance connected with this melancholy affair is, that he was alive at two o’clock this afternoon, and in full possession of his reason, and free from pain.” (Macmillan 2000) In fact, the story is even more remarkable than that. Within a few minutes of having been thrown flat on his back by the blast, Phineas Gage was able to speak, to get up, and to walk more or less unaided to a nearby cart. There he sat for the three-quarter-mile drive to the local tavern, where he was helped down from the vehicle and onto the verandah. There again he sat while he waited for the doctor. This was a man who had a hole in his head caused by an iron rod more than 3 feet long and 1.25 inches in diameter. Such had been the force of the explosion that the rod went right through his skull and landed over 20 yards away, carrying with it most of the left frontal lobe of Gage’s brain. Yet apart from losing the sight in his left eye, there was in the short term no obvious permanent harm done. The significance of this bizarre story to the science of consciousness lies in the subsequent and less happy history of its victim. Although he made a remarkable physical recovery, and his powers of
The skull of Phineas Gage. For decades, Gage’s brain injury has been invoked to support just about every theory of the role of the frontal lobes in the working of the brain and the mind. (National Library of Medicine)
speech and rational thought were by no means entirely lost, it would not be true to say that there was no permanent mental harm. We can say this on the basis of the testimony of Dr. John Harlow, the medical practitioner who attended him after the accident and kept track of him as best he could until Gage’s death in May 1861. Harlow gave two accounts of the Gage case. The first took the form of a letter to the Boston Medical and Surgical Journal, published in 1848 exactly three months after the accident. The second was a paper delivered to the Massachusetts Medical Society twenty years later in 1868, seven years after Gage had died. In it, Harlow reports that Gage applied for his old job back but was refused:

His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again. . . . Previous to his injury [he] was looked upon by those who knew him as a shrewd, smart business man, very energetic and persistent in executing all his plans of operation. In this regard his mind was radically changed, so decidedly that his friends and acquaintances said that he was “no longer Gage.”
The comparative lack of firsthand information about Gage’s mental state after his accident and the almost total lack concerning his personality beforehand have left the field wide open for
speculation. Add the fact that the precise nature of the damage to the brain is unknown—although the unfortunate man’s skull and tamping iron are still available for inspection—and you find that Gage has been invoked over the decades to support just about every theory of the role of the frontal lobes in the working of the brain and the mind. One recent example is given by the neuroscientist Antonio Damasio, professor of neurology at the University of Iowa College of Medicine, in a best-selling book titled Descartes’ Error. Descartes’s error, according to Damasio, was not simply that he divided the mind from the body but that he eliminated the role of the emotions from the working of the mind. Damasio believes that Gage’s alleged postinjury change of personality provides evidence for the role of emotion in reasoning and decision making, especially in the context of personal and social behavior (Damasio 1994). But like others who have looked to Gage’s story to support their hypotheses, Damasio gives descriptions of that change that far outrun the actual contemporary evidence. It has been argued by some critics that the details he amasses do not provide the reliable independent basis for his theory that Damasio claims (Macmillan 2000; Elster 1999). On the contrary, rather than being based on historical records, the details seem to owe their origin to the theory itself, because the kinds of changes described by Damasio are precisely those that his theory would predict that Gage must have undergone. But there is far more to the case for embodied consciousness than the tale of the unfortunate Gage.
The Embodied Mind
We saw in the previous chapter that although in some ways the world of computers and AI might seem to be at the opposite end of the spectrum from Cartesian dualism, with its mental world quite independent of any physical basis, that is not in fact the case. Despite the physicalist emphasis implied by intelligent machines, the “functionalist” approach to the philosophy of mind, which normally goes with the mind-as-computer model, does in fact have its own dualistic aspect. Functionalism says that mental states depend upon their relations, not upon the material in which they happen to be instantiated or realized. If the same pattern of functional relations is found in the carbon-based neurons of a human brain and in the silicon-based chips of a computer, then they will both exhibit the same mental state. The mental state of the functionalist is different from that of the Cartesian in that it has to have some physical basis—it cannot just
float free—but it is nonetheless independent of any particular physical basis. What Damasio and others who think like him are saying is that both the Cartesian and the functionalist pictures must be wrong because both leave out the emotional or “affective” dimension of the mind. Neither a disembodied mind (as in Descartes) nor a disembodied brain (as in functionalism) can supply this lack, it is argued, because emotion is necessarily grounded in a particular living body with a history and an environment. In the standard account, emotional reactions are determined by the limbic system, which is part of the evolutionarily old part of the brain, hidden underneath the newer and much larger cortex. The key brain organ is the amygdala, which is responsible for the familiar fight-or-flight response to any alarming stimulus, associated with the release of adrenaline into the body. Reaction here is nonconscious and very fast (auditory information has been calculated to reach the amygdala in little over one-hundredth of a second). The conscious handling of emotional situations, which involves the prefrontal cortex (so much of which was lost by Phineas Gage in his accident), only comes into play on a longer time scale. For many years, this physiological account bolstered the philosophical prejudice inherited from the ancient world that emotions belonged to our “lower” or animal nature, whereas conscious rational thought marked out our “higher” and specifically human nature. This dichotomy is now under attack from those who stress the “embodied” nature of the mind as an essential characteristic and not just an accidental (and by implication unfortunate) aspect of human being. One of the earliest and most influential books to advance this new approach was The Embodied Mind: Cognitive Science and Human Experience, jointly written by biologist Francisco Varela, philosopher Evan Thompson, and psychologist Eleanor Rosch and published in 1991. They explicitly call into question the assumption that cognition—that is, thinking and other mental processes—consists of the representation of an outside world by a mental system that exists independently of that world. They outline instead a view of cognition as “embodied action.” By this they mean not only that the mind is necessarily embodied in the brain and nervous system but that the whole organism—mind and body—is itself embedded in its environment. This approach to the mind as embodied and embedded in its world has become increasingly influential. Varela, Thompson, and Rosch see this embodiment as having two aspects: an “outer” or “biological” one, which regards the body as a physical structure, and an “inner” or “phenomenological” one, which
treats the body as a “lived, experiential structure” (Varela, Thompson, and Rosch 1991, xvi). The aim of their book is to bring these two aspects and the study of them into continual dialogue by finding a way of investigating and discussing the inner aspect that is as precise and reliable as the sciences are for studying the outer aspect. They offer two possibilities: continental phenomenology, a very disciplined form of introspection developed by philosopher Edmund Husserl in the early twentieth century, and Buddhist psychology, based on information from equally disciplined meditation techniques over thousands of years. These authors are putting forward an alternative to the widely held assumption that the mind-brain is concerned with representation. As an illustration, consider what they say about color, which at different times was the special research topic of all three authors. Some researchers (we may call them objectivists) assume that colors are “out there” in the world and that what our bodies and minds do is to make a mental representation of what is there. The process may involve turning light signals into electrochemical ones and so on, but still we are basically seeing what is out there. Other people (let’s call them subjectivists) hold by contrast that color is in the eye of the beholder and that our bodies and minds create color and then project it as part of a world of our own making. These two positions equate broadly with philosophical realism, in which cognition is the recovery of an already given outer world, and philosophical idealism, in which cognition is the projection of an already given inner world. The writers of The Embodied Mind regard this difference as a needless opposition between the inner and outward aspects of embodiment. Their alternative view of cognition as “embodied action” avoids this opposition. The term “embodied” emphasizes first that cognition indeed depends upon the kinds of experience that come from having a body with various sensorimotor capacities—touch, sight, and hearing on the sense side, and movement, speech, and action on the motor side. But as we have seen, the word “embodied” is also a reminder that these capacities are themselves embedded in their wider biological, psychological, and cultural context. The term “action” alongside “embodied” serves as a reminder that, for these authors, sense and movement and perception and action have evolved together and are inseparably linked. A more recent work promoting similar ideas is Consciousness in Action (1998) by the English philosopher Susan Hurley. Her book is a sustained philosophical challenge to the “perceptual input/behavioral output” conceptual framework for how the mind operates. The input-output picture, she claims, is completely wrong. It is wrong in the
first place because it assumes a “ghost in the machine” model of the person, based on Descartes’s sharp divide between the mind and world. It is wrong in the second place because it identifies the personal distinction between perception and action with the subpersonal categories of causal input and output. She blames two assumptions for making the traditional input-output view so plausible. One is that causal flows are one-way or “linear”: If A causes B, then B is caused by A; it would not be true to say that B causes A. The second assumption is that the relation between perception and action is merely “instrumental,” that each is a means to the other but they are essentially separate. She prefers “motor theories” of perception and “control systems theories” of action, which reject both linearity and instrumentalism and appeal instead to a system of complex dynamic feedback. Taken together, motor and control system theories offer what Hurley describes as a “two-level interdependence” view of perception and action, and this model, of which Hurley approves, looks very similar to Varela, Thompson, and Rosch’s embodied action. Like their proposal, this two-level view sees perception and action as mutually and symmetrically interdependent, depending neither on a thoroughgoing realism (sometimes called the myth of the given) nor on a thoroughgoing constructivism (the myth of the giving). The consequences of this relation between perception and action are worked through in the second part of Hurley’s book, having been prepared for in the first part. Especially impressive is the accumulating evidence that identical sensory input can result in different perceptual experience, depending upon a person’s accompanying action. With the mind no longer insulated from the world by Cartesian dualism, says Hurley, the self “reappears out in the open, embodied and embedded in its environment”—words that could have come straight from The Embodied Mind, although she does not actually cite it (Hurley 1998, 3). A third champion of the consciousness-in-action school of thought is Rodney Cotterill, a biophysicist at the Danish Technical University and author of Enchanted Looms (1998). He says it is essential to approach consciousness through an understanding of the physical structure of the body, including its nervous system, but that this approach will not be sufficient on its own; one has also to ask, What role does consciousness play in an organism’s life, and how does that relate to the anatomy and the physiology of the nervous system and indeed of the whole body? Assuming there must be an evolutionary advantage for consciousness, he conjectures that consciousness helps an animal to survive by adding new “context-specific” reflexes to the behavioral repertoire it inherited at birth. These novel reflexes are the
body’s means of learning and remembering, of storing newly acquired knowledge of its surroundings. They are described as context-specific because they first arise in a specific set of circumstances and will be called upon again only in an identical—or very closely similar—context. They are called reflexes because once they have been created and stored, they will operate automatically if and when the appropriate circumstances arise again. Understanding the mechanism for them is the key to our understanding of conscious thought processes and their basis in action (i.e., in muscular movement, known as “motor activity”). As Cotterill sees it, an alert creature is continuously receiving two types of sensory input. One is passively received from its surroundings, whereas the other, which is of much greater importance, results from the organism’s own activity, what Cotterill calls its “self-paced probing” of the environment. We undertake this active probing in both conscious and unconscious ways. Walking along a smooth corridor or up level, evenly spaced stairs, for instance, I am not aware of my body’s “questioning” the floor to check that it is still there. Only when the level surface is disturbed by a dip or an obstacle, or a subconsciously anticipated final step is missing from the staircase and my foot hits the floor unexpectedly, does the whole business suddenly become the focus of conscious attention. However, if I am climbing a mountain trail or crossing an unfamiliar room in the dark, I will deliberately test each footstep to make sure the ground is clear and will bear my weight. But in all these cases, whether conscious or not, the sensory input is a response to my own probing, my own bodily movement. The emphasis here on active probing is reminiscent of Walter Freeman’s discoveries concerning olfaction in rabbits (see Chapter 4). Cotterill says that this information is used by the nervous system to program the next step, and moreover that only this kind of information, input initiated by the organism’s own probing, is absorbed and utilized by the creature. Here is how he tells the story. When the brain sends a command to any part of the motor system—to muscles controlling foot movement, let us say, telling the foot to take a step and check out the ground—the signal goes from the premotor cortex to the motor cortex and from there to the muscles. What Cotterill claims is that whenever the premotor cortex sends a signal to the motor cortex, it also sends a copy of this command to another part of the brain, an area where sensory input is processed. This copy (called the efference copy) alerts the sensory input system to expect some information to be sent back from the probing action of the foot. In response to this alert, the nervous system sets up the anticipated
next command to the muscles. It can do this because it can draw on a stored repertoire of behavioral reflexes. Anticipation is the name of the game, and when the looked-for signal comes back from the foot’s probing movement, the brain does not have to think about what is going to happen next because the motor command for the next step is already in place and can be sent instantly. The whole process is carried out quite automatically and unconsciously, which is why we can normally walk along the sidewalk without giving it any thought at all. But suppose there is an uneven section of paving that makes me trip up. In that case the signal coming back from the foot will not be the expected one, and it will not match the stored outcome of past probing in similar circumstances. This mismatch causes a tension, which Cotterill believes is what underlies feelings of emotion. The mismatch between the anticipated and the actual raises the alarm, as it were, and the previously smooth and automatic cycle of probing, reporting, and signaling the next probe suddenly becomes the focus of attention. As we saw in Chapter 4, attention is widely believed to be the key to the switch between nonconscious and conscious cognitive states, so this proposed mechanism would account for the link between emotion and our consciousness of “raw feels.” It also explains how we can so readily ignore the potential overload of information that constantly bombards our senses. In the first place, only the input that relates to our own probings enters the system at all. All the rest just fails to register, like so much water off a duck’s back. And second, of the large number of signals that do enter the cognitive system—let in because they are responses to the brain’s own questioning of its surroundings—it is only those that provoke a mismatch with the expected result that become the focus of attention and enter into consciousness. The reactions to these unexpected inputs are what will be embodied as novel reflexes. They will be added to the behavioral repertoire, increasing the number of “remembered next steps” that the organism can draw on in response to a particular stimulus and so improving its chances of survival in a hostile environment. The system is conscious to the extent that it can discriminate input arising from its own probing movements and so develop a sense of being the agent of its own actions. This consciousness is made possible by a series of “loops” in its neuronal circuitry, which allow for the efference copy (an output from the premotor region of the brain) to be looped back to the area that processes input, and for the predicted outcome of movements, embodied as stored reflexes, to be compared with the actual outcome as it happens. There are similarities here to Gerald Edelman’s theory of reentrant signaling (see Chapter 4).
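The logic of the cycle is compact enough to sketch in a few lines of code. The sketch below is my own gloss on the account just given, not anything Cotterill published: a stored reflex stands in for the efference-copy prediction, and a mismatch with the actual sensory return is what switches the system from automatic mode to attention.

```python
# A toy version of Cotterill's probe-predict-compare cycle (my illustration).
# Each probe's expected outcome is drawn from a store of learned reflexes;
# only a mismatch between prediction and reality claims attention.
stored_reflexes = {"step": "level ground"}   # outcomes of past probings

def take_step(actual_return):
    """Compare the sensory return of a probing step with the prediction."""
    prediction = stored_reflexes["step"]     # the "efference copy" expectation
    if actual_return == prediction:
        return "automatic: next motor command already queued"
    stored_reflexes["step"] = actual_return  # embody a new context-specific reflex
    return "mismatch: attention engaged, emotion aroused"

print(take_step("level ground"))    # smooth sidewalk: no conscious attention
print(take_step("uneven paving"))   # trip: the mismatch enters consciousness
```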
It is the postulated existence of these internal loops that leads Cotterill to spell out a necessary relationship between motor action and conscious thinking and to offer a physiological account that harmonizes with Hurley’s philosophical rejection of the input/output model of perception and behavior. The classical stimulus-response paradigm treats mental activity as something hidden that goes on between an observable stimulus or input from the environment and a behavioral response or output that is also externally observable. But Cotterill is saying that this puts things back to front. Thoughts do not occur passively in response to external stimuli; they occur only if the premotor cortex actively sends efference-copy signals to call up the outcome of past probings into the environment. But since we are all capable of thinking without acting, this must mean that the “copy” can be sent to the sensory input area of the brain even though no “original” has been sent to activate the motor cortex. This central claim has led Cotterill to label the thought process as “proxied movement,” a kind of simulation of the organism’s interaction with its surroundings (Cotterill 2001, 10). Because no actual command is sent to the motor system, there can of course be no corresponding report from the foot (or whatever) concerning the actual relationship of the body and its environment. Thus no matter what anticipated response is called up, it will fail to be matched in reality, and this inevitable mismatch will in turn focus attention on the proxied movement—the thought process—and so bring it into consciousness. Cotterill’s theory of the role of the premotor cortex in evoking conscious states was first published in 1995, and since then a number of experimental results by other researchers have lent support to his ideas. One of the most obviously relevant was the discovery of so-called mirror neurons by Giacomo Rizzolatti and colleagues (mirror neurons were first reported in Dipellegrino et al. 1992 and were discussed further in Gallese et al. 1996 and Rizzolatti et al. 1996). Recordings from these neurons show that when monkeys watch another monkey performing some action—say, grasping a banana—the same neurons in the premotor cortex fire as when the monkey itself performs that same action. This result has been interpreted as showing a strong connection between perception and action. A later positron emission tomography (PET) study showed that the premotor cortex is also involved in visual attention. An even more telling result, in the opinion of Cotterill himself, is very recent work on infants who were unable to make the sequence of muscle movements necessary for such maneuvers as turning over from one’s back onto one’s tummy. A high proportion of
these youngsters were subsequently diagnosed with autism. This finding fulfilled a prediction, made by Cotterill in his book, that if the premotor cortex were involved in thinking, then an inability to sequence elementary motor procedures into more complex movements might be linked to the failures of mental linkage and sequencing that are found in autistic people. Yet another representative of the current scholarship that insists on the biological nature of mind and consciousness is neurophysiologist Rodolfo Llinás, in his book I of the Vortex: From Neurons to Self (2001). He is a leader among the growing band of experimental neuroscientists who are not ashamed to put consciousness at the center of their research programs. Like Damasio, he is equally opposed to Cartesian dualism, which he says held that the mind appeared suddenly as a result of spectacular intervention (Llinás 2001, ix), and to the computer-type metaphor that says the brain is hardware and the mind software. Such language, says Llinás, is totally misleading (Llinás 2001, 3). He argues like Cotterill that consciousness must be understood as a product of biological evolution and goes further than Cotterill in stressing its central importance right from the earliest stirrings of animal life. Here Llinás goes against a commonly held view that only the highest animals are conscious and that even for them it is only a by-product of evolution, not a selected attribute useful for survival. Noting that plants, which are static, do not have a brain and nervous system, Llinás finds it plausible to assume that their development in animals is closely related to mobility (Llinás 2001, 15). He views the body’s motor system (the central generation of movement) and its sensory system (the central generation of what he calls “mindness”) as parallel elements in the same process, both linking the animal’s control system to what is happening in its environment. Mindness has from the very beginning been the internalization of movement. Like Hurley and Cotterill, Llinás says it cannot be the case that our actions are simply a response to our perceptions. For a start, he says, there simply is not enough time. A player facing a Pete Sampras serve cannot wait on perception: it takes longer for the nerve messages to travel from eyes to brain to arm than it does for the ball to reach the back of the court. Only by anticipating the shot is there any chance of the serve being returned. Llinás goes so far as to claim that prediction, with its goal-oriented essence, so very different from reflex, is the ultimate function of the brain (Llinás 2001, 21). It is achieved because the mind-brain is a self-activating system, with the properties of the external world already embedded over evolutionary time and modified by individual experience. Llinás’s description shares much with Cotterill’s but also shows
significant differences. For instance, we saw that Cotterill thinks of the body as harnessing both inherited and acquired reflexes in the service of anticipatory action, whereas Llinás contrasts reflex with prediction. Some of these conflicts may be more apparent than real and may result from different ways of using technical terms like “reflex” rather than from substantial disagreement. In general, however, it does seem that Llinás envisages the motor and sensory aspects as a pair of matching systems, or at least two matching halves within a single system, whereas Cotterill’s picture is more of a single integrated whole. According to Llinás, the motor system has a large store of what he calls “fixed action patterns” (FAPs), which serve two purposes. They carry out routine business like digestion and maintaining posture and also provide fast reflex actions in emergency situations. In accordance with his view of the similarity of the motor and sensory systems, Llinás believes that the sensory system also has FAPs, but unlike motor FAPs, which function externally in action, the sensory ones find expression internally as experienced sensations, what the philosophers of mind call “qualia.” At a stroke, this proposition gives qualia both a physical reality (“neuronal activity and sensation are one and the same event”) and a crucial survival role. As will become clear from the discussion of qualia in a later chapter, Llinás has pulled off quite a trick here. Some philosophers deny that qualia even exist, and a lot more doubt whether they have any survival value. In fact, there is a widespread feeling among researchers that qualia, and therefore consciousness itself, are redundant in evolutionary terms because nonconscious reactions are always quicker and therefore better for survival. Llinás counters that qualia, as the expression of sensory FAPs, are a property of mind of “monumental importance” for animals just as much as humans. Like motor FAPs, they greatly simplify the brain’s functions, with a consequent saving in time and energy. This is especially true of learning. A thorn may prick me this time, because the conscious sensation comes too late to stop it, but that does not make the pain useless. It has embedded in the neural system information about thorns that I can use in the future, either to avoid them or to “tame” them—putting their sharpness to use, for example, in tools or weapons, both of which have survival value (Llinás 2001, 221).
Emotion
The growing importance not just of embodiment but specifically of emotion as a topic at the heart of the neuroscience of consciousness is
witnessed by a number of recent publications. In the wake of Damasio’s Descartes’ Error (published in 1994 and subtitled Emotion, Reason, and the Human Brain), Joseph LeDoux’s The Emotional Brain appeared in 1996, and two years later neuroscientist Jaak Panksepp published the fruits of decades of previously unfashionable and largely ignored work in his Affective Neuroscience (1998). The following year saw another major title from Antonio Damasio, The Feeling of What Happens, and again the subtitle (Body and Emotion in the Making of Consciousness) spelled out the message. Indeed, it sums up the whole theme of the present chapter. Then the opening year of the new millennium saw the launch of a brand new academic journal boldly named Consciousness and Emotion. Such a project by a major publisher is evidence of the conviction, in some circles at least, that this subject is no nine days’ wonder. For much of history, reason and emotion have been treated as opposites, an idea that both resulted from and helped to strengthen a dualistic interpretation of human nature. Emotion has been associated with what was thought of as our lower or animal nature, and as such it needed to be guided and kept in check by the higher powers of rational thought. The mind was superior to the body; cool logic was preferable to hot passion. It is easy to see that the dissatisfaction with dualism reflected in this book and the embodied approach to mental states described in this chapter do not sit naturally or comfortably with this traditional attitude. Clearly, we need a new understanding of emotion—or affect—and its relation to thinking. It is less easy to find agreement on what shape that new understanding should take. First and foremost, emotions are feelings—and are therefore conscious by definition—but like other conscious states we have looked at, they appear to be accompanied by nonconscious neural processes. Then again, emotions are naturally associated with the body—we have a gut reaction, we jump for joy, we shake with fear—and yet they can be brought on by a purely mental cause, such as receiving tragic or brilliant news. So what is emotion, what causes it, and what value does it have? The ever-observant Charles Darwin (1809–1882) noticed the same facial gestures expressing the same emotional reactions in people across many cultures and races. This similarity suggested to him that emotional indicators, such as the smile and the frown, are part of our inherited human makeup rather than something we learn socially. More recently, Paul Ekman, a psychologist at the University of California at San Francisco, has done careful research to test Darwin’s hunch. Armed with a set of photographs of faces, each showing what he regarded as a typical expression of one of six basic emotions—
surprise, fear, anger, joy, sadness, and disgust—he traveled the world and visited groups representing more than twenty different cultures. In every case, he showed his volunteers the series of faces and asked them to allocate to each picture the one of the six emotional words that matched the facial expression. There was no significant variation across the groups. In another set of trials, Ekman asked people to view a series of images depicting scenes likely to evoke a range of emotions. While they were looking at the pictures, their faces were secretly photographed and then compared with the emotional response reported by the viewer. Again, there was uniformity across individuals and across cultural groups. There were sometimes differences in the emotions displayed—we can imagine that what disgusts one person might amuse or even delight another—but the viewers’ own facial expressions consistently matched their reported emotional states. The assumption is that there is an evolutionary advantage to being able to “read” the emotional state of other people and that it requires a constancy of response that is somehow “hard-wired” into all human beings. The fact that, for instance, someone blind from birth will spontaneously smile when happy, despite never having seen anyone else smile, tends to confirm this view (Ekman details from Greenfield 2000, 107–109). The next step was to discover what parts of the brain are involved in such emotional responses. Damasio associates the damage to the frontal lobes of the brain (prefrontal cortex) in Phineas Gage with his subsequent emotional deterioration. The prevalence of emotional instability and mood swings among teenagers has also been linked to the fact that the frontal regions—which are the last part of the human brain to fully develop and be integrated—are probably engaged in the control of the emotions. Damasio believes that the prefrontal cortex manages emotion by integrating information on the current situation with emotionally colored memories of the past and the anticipated emotional consequences of current actions.
The ever-observant Charles Darwin noticed the same facial gestures expressing the same emotional reactions in people across many cultures and races. This suggested to him that emotional indicators, such as the smile and the frown, are part of our inherited human makeup rather than something we learn socially. (Courtesy of Thoemmes Press)
There is some evidence of a difference in emotional role between the left and right hemispheres of the brain, with the left being more active in positive moods and the right in negative ones. Another study suggested that the right side of the brain is more sensitive than the left when it comes to picking up emotional signals from others. If that were truly the case, then someone holding a telephone receiver to the left ear—meaning that, by the crossover effect of the brain’s “wiring,” the sound would be processed on the right side of the brain—might be expected to give a more sympathetic response to a caller than if the receiver were held to the right ear. However, the whole notion of a right-brain–left-brain division of function is currently less secure than it was a decade ago, and it is probably not worth changing one’s phone-answering habits. The cortex may be involved in the control of emotional responses, but it is unlikely to be their source. From the 1950s onward, an influential theory associated with the neuroscientist Paul MacLean attributed the emotional aspects of brain function to what is known as the limbic system. This term embraces a cluster of brain structures that lie between the evolutionarily oldest parts, making up the brainstem, and the cerebral cortex, which developed most recently. There were two main strands of evidence linking the limbic system (the amygdala, hypothalamus, hippocampus, etc.) to emotional processes. On the negative side, when these structures were damaged or removed from laboratory animals, emotional reactions lessened or disappeared, whereas they survived the removal of the cortex. On the positive side, when the limbic system was electrically stimulated in awake humans, they reported emotional experiences. But when cortical areas were treated in the same way (as in the case of Wilder Penfield’s patients), mental rather than emotional experiences resulted. MacLean pictured this threefold division of the physical brain (brainstem, limbic system, and cortex) as reflecting a mental hierarchy in which the cortex housed the rational mind that had to keep guard over the unruly passions generated by the limbic system and the purely automatic functions of the brainstem. This picture fitted reasonably well with both traditional philosophical dualism and the then fashionable Freudian theories of repression. In its original and quite precise form, MacLean’s limbic system theory of emotion has suffered along with these companion theories as new ideas have developed, and the phrase “limbic system” is now used more loosely, to indicate a somewhat shifting group of structures currently thought to be the physical substrate of the emotional processes (MacLean detail from Greenfield 2000, 110–112). Joseph LeDoux, for instance,
no longer counts the hippocampus as part of the brain’s emotional apparatus, although the question remains controversial. Wherever in the brain the emotional process is launched, it still needs to be asked what actually triggers it. A commonsense view would be that a perception (A), such as seeing a wild bear, leads to an emotional response (B), such as fear, which in turn leads to an appropriate practical reaction (C), such as running away. There are at least two problems with this view. In the first place, we have already learned enough in this chapter to be wary of stimulus-response-type answers to questions about consciousness. And second, the results of Paul Ekman’s experiments with emotion-laden pictures include evidence of nonconscious physiological reactions—such as changes in blood pressure and heart rate or developing sweaty palms—which occur before the viewer has consciously recognized the face. This suggests that the body’s classic fight-or-flight response to danger is implemented ahead of any emotional awareness of the hazard. Maybe the emotion is not the cause of the body’s reaction but is itself triggered by it. That is the nub of an early theory produced independently over a century ago by two psychologists, the American William James and the Dane Carl Lange. In their picture, the sight of the bear (A) directly evoked a practical reaction in the nervous system (C), both internally directed to arouse the body for action and externally directed to run away, and then finally feedback from those reactions created the subjective emotion (B) (see Kaszniak 2001, 6). The James-Lange theory as proposed was shown to be inadequate as early as 1927 by Walter Cannon, in part because the visceral arousal they thought provoked the conscious emotions was simply too slow to do the job. Another point was that James and Lange saw a central role for feedback from the body’s periphery to the central nervous system, but Cannon showed that this feedback could be prevented without a resultant loss in subjective emotion. A third problem was that by itself, the theory did not account for why one emotion should be produced rather than another. Physiological symptoms of arousal are a good indicator of a raised level of emotion, but there is nothing in the symptoms themselves to distinguish between fear, let us say, and anger. This last concern is addressed by another feedback theory, proposed by Stanley Schachter and Jerome Singer in the 1960s. In their account, the feedback from the nervous system is supplemented by a second set of feedback from the cognitive system. To oversimplify slightly, the gut reaction causes the arousal, and the cognitive reaction interprets it. The extent to which cognition is a necessary player in the production of subjective emotion and the way it relates to other contributors
is a central concern of Joseph LeDoux, one of the leading researchers into emotion in the 1980s and 1990s. Working at the Center for Neural Science at New York University, he has made a close study of fear and concludes that emotion and cognition are two parallel but interacting neural processes. “Raw” or uninterpreted emotional processing occurs at a nonconscious level, and its outcome may or may not end up in conscious awareness. We might compare this with the evidence for parallel processing in the visual system, which was discussed in Chapter 3. For instance, LeDoux says that initial emotional responses can be made quickly on the basis of crude information, such as the approximate position and direction of a fast-moving stimulus. Then more detailed matters, such as recognition and interpretation of the stimulus, will be handled independently and more slowly by the cognitive system (LeDoux 1996). We saw in the case of the visual pathways that signals passed from the retina to the cortex by way of the lateral geniculate nucleus (LGN). The LGN is one of a number of nuclei that together make up the thalamus, and LeDoux suggests that the thalamus is the point at which information that will trigger an immediate emotional reaction is diverted straight to the amygdala. Other signals will proceed to the cortex in the usual way, and in due course more detailed information that has been cognitively processed will pass from there to the amygdala, to be integrated with the fast-track signals already received. Under normal circumstances, the cortical information would reinforce the emotional response already engendered at the amygdala by the direct signal from the thalamus, giving content and shape to the initial emotional surge. But LeDoux suggests that the cognitive input from the cortex might be manipulated in certain instances so that it modified or even reversed the earlier reaction. For example, someone suffering from a spider phobia would have a fast-track negative emotional reaction to a glimpse of the phobic object, and therapy for such a patient might involve the cortex sending calming signals based on the rational knowledge that most spiders are harmless. Critics such as Doug Watt, director of neuropsychology at Quincy Hospital, Massachusetts, say that such an idea is still too cognition-dominated—smacking of the old MacLean threefold hierarchical brain model—and does not do justice to the evidence that emotion is itself a central organizing process for consciousness (Watt 1999, 193). This is not a charge that can be laid at the door of another leading neuroscientist of emotion, Jaak Panksepp, of Bowling Green State University in Ohio. Panksepp focuses on what he calls the “grade A blue ribbon emotions,” prototype emotional systems that we appear to share strongly
with all mammals. They are things like fear, play, and separation distress (a category on which he has undertaken extensive study) and also a system for nonspecific arousal that he labels “seeking” or exploratory. By starting with possible similarities between subjective emotion in human and nonhuman animals, Panksepp focuses from the start on the evolutionarily older parts of the brain that we share with other creatures, rather than on the huge cortical regions in humans that are our most distinguishing feature. This approach inevitably tends to downplay the relative importance of cognitive functions in relation to emotion and thus avoids a common tendency within consciousness studies to view emotion as just one aspect of conscious experience among others. Instead of treating emotion as an additional element, an appendage to cognitive processes that adds color to them, Panksepp explores the role that emotion might have in actually driving cognitive activity and underpinning all aspects of consciousness. Of particular interest and significance is his decision to search in the more primitive parts of the brain for the basis of biological and social value. It is generally assumed that evaluation must be a “higher” function that takes place in the cortex and is restricted to sophisticated brains. But Panksepp finds the roots of emotion in postulated “value generators” that lie in the comparatively primitive midbrain and brainstem regions. He believes that his prototype emotional systems come together in a midbrain structure known as the “periaqueductal gray” (PAG). If true, this would make the PAG an essential structure in the limbic system, supplying crucial integration of emotional processing deep within the subcortical region of the brain. Panksepp also indicates direct pathways linking his proposed midbrain architecture for prototype emotion with Bernard Baars’s and James Newman’s ERTAS architecture for consciousness (see Chapter 4). This concern with value necessarily goes hand in hand with questions about the “self,” another concept normally thought of as developing at a “high” brain level and needing consciously accessible representations of self in the cortex and prefrontal systems. Here again, however, Panksepp goes against the tide and regards the self as originating in unconscious and mostly subcortical regions of the brain, being deeply grounded in the brainstem. The other researcher who most closely shares these views on the self and emotion is Antonio Damasio, whom we have already met in the context of research into Phineas Gage. Damasio’s hypothesis places the foundation or precursor for self (what he calls the “proto-self”) in the subcortical region, especially the hypothalamus, whose main function is to keep the body’s basic physiological variables—temperature, hormone balance, and so on—
within the narrow range essential for life and health. Carrying out this function does not require consciousness, but it does mean the hypothalamus must have some way of monitoring the state of the body, which makes it a natural site for the proto-self. According to Damasio, the proto-self is not conscious, but it contributes to a series of “mappings,” each derived jointly from the proto-self, some external object, and any changes in the proto-self initiated by interaction with that object. These “second-order mappings,” as he calls them, form the basis of a “core” self and of “core” consciousness, which enable the cortex to integrate this information with sensorimotor mappings related to interaction with and perception of the object. Central to Damasio’s scheme is the idea that a brain can’t be conscious unless it represents external objects, a primitive self, and also the way in which the self is being altered by interaction with the object. To this extent, all consciousness is self-consciousness. This core consciousness and core self depend only upon what Damasio calls the “wordless knowledge” of the world and of our interactions with it. To advance to the final stage of “extended” consciousness and the “autobiographical” self, memory and language need to be added. Extended consciousness comes about in a parallel way to core consciousness and, Damasio insists, is totally dependent on the integrity of core consciousness. As before, there is a mapping of the changes in the proto-self generated by the interaction with the object, but in this case—crucially—attention is paid to internal objects in the mind rather than external objects in the world. There is now a third layer of self-representation. To the proto-self and the core self is added what Damasio calls the autobiographical self, which allows access to the rich and expanding range of images that inform the way we interpret and respond to current events. Emotion plays a dual role in Damasio’s scheme. It is foundational for core consciousness, because changes associated with the activation of emotions are among those changes in the proto-self that are recorded in the second-order mapping that generates core consciousness. In addition, it is itself a potential internal object for the attention of extended consciousness. The result is a meshing together of emotional and cognitive processes, so that in human choices and actions the purely logical move is always likely to be tempered by an emotional intuition or gut feeling, not always open to conscious awareness. To bring this chapter full circle, we may note that Damasio has found that damage to the prefrontal cortex—as in Phineas Gage’s case—is one way in which this balance can be upset (Damasio discussed in Watt 2000, 74).
8
The Once and Future Self
We all have a poignant and particular fear of the dementia associated with Alzheimer’s disease and its attendant forgetfulness and loss of orientation. The loss of memory generates a different order of anxiety from other deprivations of age, such as reduced mobility and even chronic pain, because it seems like the loss of one’s very self. The very word “dementia” (literally, “being without one’s mind”) says it all. Indeed, the two subjects that we tackle in this chapter—memory and learning—are at the heart of what it means to be a conscious person. Memory links us to the past and gives us a sense of personal continuity from one moment to the next, and learning is the faculty that enables us to make use of our present experience in the future. Without memory and learning, each moment of our life would be an isolated instant with no context whatsoever. One much-studied patient (known to medical science as HM), who suffered severe memory loss following an operation to relieve epilepsy, knows this condition only too well. “Every day is alone by itself,” he once reported, although, as we shall see, even in this severe case, the capacity for learning had not altogether deserted him (quoted in Milner, Corkin, and Teuber 1968, cited in Rose 1993, 126).
Different Types of Memory
Research since the 1970s has resulted in general agreement that memory is not a single mental phenomenon or biological mechanism. There is much less agreement, however, as to how the various functions we associate with memory should be divided up, and exceptions can be found to just about every neat classification that has been attempted. Much of the evidence for different types of memory comes from the detailed study of amnesiac patients such as HM. His case is unusual in that his brain damage was not the result of illness or
[Photograph: A wife helps her husband, an Alzheimer’s patient, get ready in the morning. (Stephanie Maze/CORBIS)]
accident but of deliberate surgery. That had the positive advantage that his doctors knew exactly what parts of his brain had been removed. On the negative side, the amount was so large—a slice 2 inches thick from across the middle to the front of his head on both sides—that it was not easy to be sure which of the missing parts was responsible for the memory loss. But comparison between the type of memory loss and the type of brain damage in different patients, together with various animal experiments and more recently the use of functional magnetic resonance imaging (fMRI) scans of people undertaking various memory tasks, has enabled links to be drawn between various kinds of mental acts and particular parts of the brain. HM was twenty-seven years old when he underwent the surgery in 1953 to reduce the crippling effects of his epilepsy. In the late 1990s, when he was in his seventies, he was still actively engaged in different research projects. The chief symptom of his condition is an inability to remember anything that has happened since 1953. It extends to not recognizing a photograph of himself as he now is. By contrast, he would recognize a photo taken in the early 1950s and can in general remember things that happened up to about a year before his operation. He can also remember new things over a very short time span; for instance, he can repeat a telephone number he has just been given. But if you had been introduced to him this afternoon, he would greet
you as a stranger this evening. He would have no recollection of ever having met you before. We might see this as an exaggerated version of a common experience as we get older: I can still reel off the Latin declensions I learned forty years ago at school, but I have to think hard to remember the name of the new neighbor who moved in last week. This type of amnesia has led to a distinction being made between short-term and long-term memory (STM and LTM). In the theory’s heyday, in the 1950s through the 1970s, these were conceived of as two quite separate stages or systems for creating and handling memory (see Squire 1998, 56). The idea was that information from any stimulus went first to a “sensory register,” from where it passed to a short-term memory store. George Sperling published results in 1960 that suggested at least a very brief period of attention—say a quarter of a second—was needed to bring about this transfer. This view was generally adopted and supported by other findings, although some researchers claimed that in certain cases, completely unattended information could enter STM. There have been various proposals, not necessarily mutually exclusive, concerning how the information is held (encoded) during the short-term stage. One possibility is that we picture the information in some way, in what is called visual coding (see Posner and Keele 1967), and another is that we silently rehearse the sound of what is being remembered (acoustic coding) (Baddeley 1966). We do, of course, sometimes repeat something out loud as a way to keep it in mind for a short period—a telephone number, perhaps, or an instruction to find a street address—but what is suggested here is a subvocal repetition. A third possibility is semantic coding, in which we attach some meaning to the thing to be remembered (Schulman 1974). A four-digit number that matches a significant year like 1776 is easier to recall than a completely random number. This last strategy links with another characteristic of short-term memory: it has very limited capacity. “The Magical Number Seven, Plus or Minus Two” was the title of an article published by George Miller in 1956, in which he drew attention to the range five-to-nine as the limit on the number of items of unconnected information people can hold at one time (see Document 4, this volume). By giving four unrelated digits a single meaning—the year of American independence, in the example above—we effectively reduce the load of information from four “chunks” (to use Miller’s own term) to one.
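Miller’s point about chunking can be made concrete with a toy example (illustrative only; the grouping into “meaningful years” is my choice, not Miller’s):

```python
digits = "1 7 7 6 1 9 4 5 2 0 0 1".split()
print(len(digits))          # 12 separate items: well beyond seven plus or minus two

# Recoded as meaningful years, the same information is only three chunks.
chunks = ["1776", "1945", "2001"]
print(len(chunks))          # 3 chunks: comfortably within the span of STM
```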
The small capacity of STM offered one explanation for why it is so short-term. A study by N. C. Waugh and D. A. Norman in 1965 found that unrehearsed verbal stimuli tended to be quickly forgotten because they were interfered with by subsequent incoming items. As new information was received, earlier contents were displaced and had either to be transferred to long-term memory or be lost. Where no such later stimuli were applied, the originals were remembered for longer, suggesting that their traces did not automatically decay with the passing of time. However, work by Alan Baddeley, N. Thompson, and M. Buchanan at Bristol University in the United Kingdom in 1975, using words rather than numbers, did suggest a direct loss from STM over time. They found that the number of words recalled from a list was higher if they were short words, fewer if they had many syllables, but always approximated to the number of words that could be repeated in two seconds. With a longer list, there was a deterioration of memory for the words, even without the application of new stimuli to drive them out. Long-term memory, by contrast, was regarded as having limitless capacity. This trait, of course, would give rise to problems when it came to storage and retrieval, but before that, the theory had to show how information entered long-term memory in the first place. The main two proposals were extensions of the ways it was believed to be held in STM. The first view, proposed by Richard Atkinson and Richard Shiffrin in 1968, was that if information in STM was rehearsed enough, whether by vocal or visual means, then it transferred automatically to LTM. Second, either instead of or in addition to the repetition route, if meaning were applied to information in STM (semantic encoding), then information might be transferred to the long-term store. And once there, by whatever means, it was generally believed that it was the meaning of the items that held the key to how they were stored and retrieved. The phrase “once there” in the previous sentence raises the question of the physical region of the brain responsible for memory. In this two-fold model of memory, it was possible that long-term memory might be associated with a different part of the brain from that used for short-term memory. In the surgery undergone by HM, so the argument went, the part of the brain holding preoperative long-term memories must have been left intact, because he could recall his early life quite well. But the parts required for holding new information in short-term memory—and transferring it where appropriate to long-term memory—must have been among those removed, because he was not able to remember anything that had happened since the operation. Since the loss of the hippocampus and the central section of cortex known as the temporal lobes (above and just forward of the ears) was responsible for the symptoms displayed by HM, they were taken to be involved in short-term but not long-term memory. The
site favored for LTM was the prefrontal cortex, the region at the frontmost part of the brain. Not everyone, however, was impressed with the short-term/long-term distinction between memory stores. Fergus Craik and Robert Lockhart, for instance, argued in 1972 that STM and LTM were not two distinct stages, each with its own system of brain parts to use, but were different aspects of a single underlying process that operated at a number of different levels. Their idea was that sensory information is processed at multiple levels simultaneously and that the “deeper” the processing, the more likely it is to be remembered. Information that has strong associations with existing knowledge will be processed at a deeper level and be assimilated into the stock of memories with which it easily fits. Novel stimuli that do not fit into any familiar pattern of knowledge will be handled more superficially and are less likely to be retained. This theory supports the common experience that we find it easier to remember things that are meaningful to us: we are unable to take in (as we say) a lot of new ideas all at one go. Craik and Lockhart also suggested that information “being attended to” receives more processing than unattended stimuli. This idea has subsequently become a major feature of theories of consciousness. Processing of information is assumed to be unconscious and automatic at all levels, unless we attend to a particular level at which processing is taking place. If this is an accurate picture of what is going on in the brain, the mechanism of attention—that is, the mechanism that leads to consciousness—should be thought of as a kind of interruption in other processing, rather than a separate cognitive process in its own right. The distinction between STM and LTM is still made in appropriate cases, although there are some drawbacks to the short-term/long-term terminology. For one thing, how short is short? One version of the model has the time span of short-term memory as little as ten seconds or so, which would explain why we can remember telephone numbers for about that length of time without repeating them to ourselves or working out some mnemonic to help retain them. (HM could remember a number for up to fifteen minutes, but only so long as he kept saying it over and over to himself; he forgot it as soon as he stopped repeating it.) But a lot of the things we are able to remember (such as where we parked the car) need to be held available for much longer than ten seconds, although they need not, and indeed ought not, be committed to permanent long-term memory. Note that the ability to forget—in appropriate circumstances—is just as important as being able to remember. The memory “My car is parked by the
front door of the college” was useful last week, when I did park by the front door, but it would be misleading tonight, when I parked down by the road. Another curious finding is that amnesiacs like HM tend not to be able to remember things that happened in the year or so immediately before their brain damage, which has led to the suggestion that it can take as long as twelve months for memories to be laid down permanently. Observations like these mean that in place of short-term memory, researchers now tend to speak of “working memory” (WM). This is a more flexible concept, one that relates to function rather than to a specified length of time. Alan Baddeley, one of the promoters of the new term, recently defined it as “a limited capacity system allowing the temporary storage and manipulation of information necessary for such complex tasks as comprehension, learning and reasoning” (Baddeley 2000, 418). In other words, working memory is precisely that: a working system, a system that is not just holding information but doing something with it. In Baddeley’s model, there is a key role for what he calls the “central executive,” which determines those items among the stored memories to be the focus of attention at any moment. The central executive is required to be very active, selecting, initiating, and ending the processes of encoding, storing, and retrieving. As with the earlier conception of STM, it has input from the sensory register, but it also has the ability to draw on the long-term memory system, as well as to transfer information to it. The original version of WM, proposed by Baddeley and Graham Hitch in 1974, was a three-component scheme in which the central executive worked closely with two temporary storage systems, called the “visuospatial sketchpad” (VSSP) and the “articulatory loop” (AL). More recently (in an article published in 2000), Baddeley has added a fourth component called the “episodic buffer.” The need for it has been suggested by the scheme’s shortcomings in relation to conscious awareness, a question not directly tackled in the original model. The VSSP manipulates and temporarily stores visual and spatial information. The articulatory loop, which handles verbal information, has two aspects. One is a phonological memory store, which can hold sounds, traces of acoustic or speech-based material, for just a very short time (about two seconds, as shown by Baddeley’s study on the memory of words, discussed above). The other consists of articulatory subvocal rehearsal (repeating words silently), without which the contents of the phonological memory store are very rapidly forgotten. That gives the central executive four sources of material to work with.
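Baddeley’s scheme is a functional box-and-arrow diagram rather than a neural mechanism, but its shape can be suggested in code. The sketch below is mine, not Baddeley’s: only the component names and the roughly two-second phonological decay come from the description above; everything else (methods, data structures) is invented for illustration:

```python
import time

class PhonologicalStore:
    """Holds speech-based traces that fade in about two seconds
    unless refreshed by subvocal rehearsal (the articulatory loop)."""
    DECAY_SECONDS = 2.0

    def __init__(self):
        self.traces = {}                  # item -> time it was last refreshed

    def rehearse(self, item):
        self.traces[item] = time.monotonic()

    def contents(self):
        now = time.monotonic()
        return [i for i, t in self.traces.items()
                if now - t < self.DECAY_SECONDS]

class VisuospatialSketchpad:
    """Temporary store for visual and spatial information."""
    def __init__(self):
        self.images = []

class EpisodicBuffer:
    """Baddeley's (2000) fourth component: integrates material
    from the other stores and from long-term memory."""
    def __init__(self):
        self.episodes = []

class CentralExecutive:
    """Selects which stored items are the current focus of attention,
    drawing on (and transferring to) long-term memory."""
    def __init__(self, loop, sketchpad, buffer, long_term_memory):
        self.sources = [loop, sketchpad, buffer, long_term_memory]
        self.focus = None

    def attend(self, item):
        self.focus = item                 # one item in focus at a time

loop = PhonologicalStore()
loop.rehearse("555-0199")                 # subvocal rehearsal keeps the trace alive
executive = CentralExecutive(loop, VisuospatialSketchpad(),
                             EpisodicBuffer(), long_term_memory=[])
executive.attend("555-0199")
print(loop.contents(), executive.focus)
```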
Baddeley equates WM with the supervisory attentional system (SAS) proposed by Timothy Shallice, of the Institute of Cognitive Neuroscience at University College London (see Shallice 1982, discussed in Baddeley 1990). According to Shallice, the SAS is a system of limited capacity but can be used for a number of purposes, especially tasks involving planning or decision making. The SAS would be activated in situations in which automatic processing is inadequate because of novel or dangerous circumstances, or in which strong habitual responses or temptations need to be overcome. Baddeley’s theory has been developed so as not to conflict with known neuroscience but has been criticized at the practical level for failing to explain how WM actually works. The central executive, for instance, looks suspiciously like what scientists call a black box: we know what it needs to do according to the theory, but we are given little or no idea how it does it in practice. One experimental fact Baddeley does have is that its functioning is impaired by extensive damage to the frontal lobes, and he thinks these areas are likely to be important locations for the physiology of both the episodic buffer and the central executive. Patients with such injuries lack flexibility in awkward situations and the ability to control their processing resources. This finding puts one in mind of Phineas Gage’s change in personality, noted by Antonio Damasio and others, following the accidental destruction of much of his frontal and prefrontal cortex (see Chapter 7). A psychologist making use of an idea related to Baddeley’s is Bernard Baars, who explains WM as a product of consciousness. This notion makes sense in terms of the global workspace (GW) theory put forth in his 1988 book, A Cognitive Theory of Consciousness. GW theory is based on the belief that the detailed workings of the brain are widely distributed and the contents of consciousness are disseminated through a vast number of currently unconscious networks throughout the brain. There is no centralized command telling neurons what to do, Baars says, but there is—to apply a theater metaphor—a “spotlight of attention” that determines the contents of conscious experience at any given moment. The information in the spotlight is available to the whole audience; it is transmitted throughout the global workspace of the brain. For Baars, this process is the practical meaning of conscious experience: making information globally available. GW theory was first propounded in 1988 as a largely psychological theory, but Baars showed in a survey in 2002 that the years since then have produced accumulating evidence from neuroscience to strengthen its biological credibility. Making information available brings us back to working memory.
Baars refers to Miller’s “magical seven” as the limiting number of separate items or “chunks” we can hold in our memory at any one time. But he points out that in fact, we can only be consciously aware of one of those seven things at a time. So there are two aspects to the limited capacity of consciousness: seven chunks available in working memory (on the darkened stage) and just one chunk in consciousness (under the spotlight) at any given moment. But, Baars insists, this limitation does not require divorcing WM from consciousness. It may be the case that only one item is conscious in any second or so, but all the active components of WM have to be conscious to be manipulated. As he wrote back in 1997, “input, output, and manipulated items in WM apparently need to be conscious” (Baars 1997, 302). This is where Baars’s discussion of working memory goes beyond the earlier work of Baddeley, although as we have seen, Baddeley himself has more recently tried to address this question. The original WM theory had avoided this point, Baars says, because of the lingering behavioristic taboo that outlawed the discussion of consciousness in psychology. A rather different distinction from the short-term/long-term one is that between declarative and nondeclarative memory, which helps make sense of the fact that someone who has received a bang on the head might say—quite honestly—“I cannot remember anything; I cannot even remember my name,” and yet be able to remember the English words and grammar necessary to speak that sentence. The terminology “declarative/nondeclarative” comes from Larry Squire, professor of psychiatry and neurosciences for the past thirty years at the University of California School of Medicine in San Diego; an alternative found in the literature is “explicit/implicit” memory, and one might almost call them “conscious and nonconscious” memory to spell out the difference between them. In the example just given, conscious memory is a complete blank, unable to call to mind even something as simple as the speaker’s own name, whereas some nonconscious system is retrieving the complex requirements for speaking grammatical English and using the appropriate words to convey the intended meaning. Explicit memory is what we normally mean when we speak of human memory, and it has been extensively studied by psychologist Endel Tulving, who spent nearly all his working career at the University of Toronto in Canada, where he has been professor emeritus since retiring in 1992. He made a proposal, which has been widely accepted, that explicit memory itself consists of two distinct elements: facts (or semantic memory) and events (or episodic memory). In this context, facts are free-floating pieces of information, such as “the
Japanese bombed Pearl Harbor in 1941,” whereas events are episodes in a person’s own lived experience, such as spending a vacation in Honolulu and visiting the harbor or going to see the movie Pearl Harbor. Tulving noted that with episodic memory, a person not only has the memory itself but can usually remember something about the setting in which the memory was learned. With semantic memory, however, we cannot generally recall the context of the initial learning. This implies a genuine difference in the nature of the two kinds of memory. Sometimes an injury can leave semantic memory for facts intact while destroying episodic memory. An example of this was Tulving’s patient NN, who sustained head injuries in a road accident. His general knowledge remained excellent, but his memory of things relating to his own past—such as the name of his school—was totally lost (discussed in Toates 2001, 283). Implicit memory concerns skills and habits and for this reason is sometimes referred to as “procedural memory.” It is the way we remember how to do things like walking upright and riding a bicycle, which once had to be consciously learned but then became—as we say—“second nature,” so that we actually find it hard to explain how we do them. Just try putting into words the procedure you use for tying your shoelaces, and you will see what I mean. Patients like HM, who are severely amnesiac when it comes to explicit memory, typically have no problems with these kinds of procedural skills, which suggests that implicit/nondeclarative memory relies on a quite different physiological basis from the explicit/declarative type. An interesting example of this dichotomy is shown up by the so-called practice effect. We are all familiar with the experience of something becoming easier the more we do it. We start off concentrating hard, remembering the stages, and so on, but then gradually we relax into the routine, and our performance improves. A classic case in which one might predict such improvement is a game of manual skill, such as the one known as the “Tower of Hanoi.” The object of the game is to build a tower—following certain simple rules—of differently sized rings, such that the largest ends up at the bottom and the smallest at the top. HM was tested on this game. As you would expect, every time he was presented with the puzzle, it was as if for the first time. He had no recollection of having played it before, and he had to have the rules explained as to an absolute beginner. Yet as he played it day after day—each time declaring it was his first experience of it—his efficiency improved in accordance with the normal workings of the “practice effect.” In other words, although his postoperative explicit memory was almost completely impaired, his implicit memory—
[Photograph: Implicit or “procedural” memory is the way we remember skills like riding a bicycle, which once had to be consciously learned but then became “second nature.” (Library of Congress)]
even in relation to new tasks—remained as effective as in a normal person (Rose 1993, 127–128).
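In its standard form, the Tower of Hanoi asks the player to transfer a stack of differently sized rings from one peg to another, moving one ring at a time and never placing a larger ring on a smaller one. The optimal procedure HM gradually absorbed is famously recursive; here is the classic textbook solution (standard algorithm, nothing to do with the HM studies themselves):

```python
def hanoi(n, source, target, spare, moves):
    """Move n rings from source to target, never placing a larger
    ring on a smaller one."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest ring
    hanoi(n - 1, spare, target, source, moves)   # restack on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), "moves:", moves)               # 7 moves: 2**3 - 1
```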
A Mechanism for Memory and Learning
So far we have only briefly touched on the question of which parts of the brain might be engaged in different aspects of the memory process, and we have not considered at all the actual biological processes—the physiology—that might account for the mental act of remembering or learning. It is generally assumed that memory is somehow encoded by changes in the activity and/or the structure of neurons, and it was suggested more than a century ago—by our old friend Santiago Ramón y Cajal, the Nobel Prize–winning Spanish anatomist—that the synapses must be the key to these changes. Then just more than fifty years ago, the Canadian psychologist and physiologist Donald Hebb proposed that synaptic connections were strengthened by use, so that a repeatedly used connection would become more likely to be used again; a feedback mechanism, such as was by then known to occur in neurons, would boost this process still further. A possible mechanism for memory was emerging. We saw in the description of the brain in Chapter 2 that a neuron receives signals through a large number of branched structures called dendrites and sends out a signal through a single axon, which is split into many endings. At each ending, there is a synaptic junction,
across which a signal passes when the neuron fires. That signal may either inhibit or excite the dendrites of the target neuron, and when that neuron’s excitatory input is sufficiently large compared with its inhibitory input, it too will fire, sending a spike of electrical activity (action potential) down its own axon to yet more target neurons. Neuroscientists believe that the pattern of firing—created according to which neurons fire and which don’t—is largely responsible for creating the brain states that determine both motor actions and conscious awareness. Since this pattern is a matter determined by which synapses pass on a signal and which don’t, the process of learning and of laying down memories must therefore consist in changing the relative effectiveness of the synapses. In other words, the pattern of influence of one neuron on the others changes, with some connections being strengthened and others weakened. Brain states corresponding to patterns of neurons whose excitatory connections are strong will tend to predominate. In other words, they will be learned, or remembered, and become habitual. Hebb’s great contribution was to suggest a means whereby these patterns could be manipulated (Rose 1993, 150–152). His theory had two prongs. One said that, as a general rule, preference is given to frequently used pathways. Each time a particular synaptic junction carries a signal, it is strengthened, so that next time the neuron fires, it is even more likely to be used. This strengthening must result from some physiological change induced by the passing of the signal. It might involve the production of more of the relevant neurotransmitter, so that activity at a particular synapse carries more punch, as it were. Or it might result in the creation of more synaptic junctions between the two neurons concerned, so that a single firing by the first cell would activate more connections with the second. Either way, the outcome would be a strengthening of the bond between the two cells, most likely by an increased sensitivity of the postsynaptic neuron and a consequent increase in that particular pattern of firing, and it corresponds to a greater likelihood of the brain state associated with that pattern being learned or remembered. In contrast, if a synapse between two neurons is rarely used, it gets weaker and falls further into disuse, so that for all practical purposes, the possible patterns it might have formed a part of cease to exist. The brain states they would represent thus either never arise or else are forgotten. The second prong of Hebb’s theory takes the control of the neurons and their pattern of firing even further into their own hands, as it were. By the 1940s, when he was developing these ideas, neuroscientists had already discovered that neuronal pathways were not one-way
streets. If neuron A signaled neuron B, then when neuron B fired, it was as likely to pass a signal back to cell A as it was to pass a message on to a further cell C. What is more, the message could be either positive or negative, excitatory or inhibitory. So by feeding back or not and by the positive or negative nature of the feedback, a neuron could in part determine the frequency of use—and therefore the strength—of its own connections with other cells in both directions. A crisscrossing network of neurons could thus set up a variety of preferred patterns of firing, a mechanism for learning and memory.
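Hebb’s qualitative rule—“neurons that fire together wire together,” as it is often paraphrased—translates naturally into a simple weight update. The following is a minimal illustrative sketch (mine, not Hebb’s; the learning rate and decay constants are arbitrary) of strengthening with correlated use and weakening with disuse:

```python
def hebbian_update(weight, pre_active, post_active,
                   learning_rate=0.1, decay=0.01):
    """One time step of a Hebb-style synapse.

    pre_active / post_active: 1 if that neuron fired this step, else 0.
    The synapse strengthens when the two cells fire together, and a
    rarely used connection slowly weakens toward disuse.
    """
    if pre_active and post_active:
        weight += learning_rate          # correlated firing: strengthen
    else:
        weight -= decay * weight         # unused pathway: gradual weakening
    return weight

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (0, 0)]:
    w = hebbian_update(w, pre, post)
print(round(w, 3))   # ends above 0.5: the two paired firings dominate
```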
In addition to studying the firing patterns and physiology of actual neurons, there have been many attempts to explore the brain’s learning processes by creating networks of artificial neurons. That has been done by simplifying the picture and picking out certain essential features of neurons and their interconnections. This much-simplified brain has then been reverse engineered by programming a computer to simulate these features. In artificial neural nets (ANNs), the synaptic junctions are represented by the links between the electronic units that act as simple artificial neurons. These links are initially “weighted” to favor some over others, and each “unit” or artificial neuron converts the pattern of incoming signals into a single outgoing signal, which it sends to other units with which it has contact. To determine the total incoming signal, it first multiplies each active link by the “weight” on the connection through which it came and then adds all these weighted inputs together. The output that results from this total input will depend on how the unit was originally programmed. (J. G. Taylor contributed to this discussion of ANNs.) To simplify, ANNs can be thought of as having three categories of artificial neuron: linear, threshold, and sigmoid. In a linear unit, the output is directly proportional to the total weighted input, so there is a whole range of possible output strengths of signal. This variable kind of response makes the linear unit what is sometimes called an “analog device.” With threshold units, as the name implies, there are only two possible outputs, depending on whether the total input reaches the predetermined threshold value. If the threshold is reached, then a preset fixed signal is sent; if the total weighted inputs do not reach the threshold, then no signal is sent. This all-or-nothing kind of response makes the threshold unit what is known as a “digital device.” In sigmoid units, the output varies continuously as the input changes (that is, it is analog in character rather than digital), but the input-output relation is not a straightforward linear one. This last kind of artificial neuron is probably the closest of these three to how real neurons function in our brains and the kind most open to development. As ANNs have become more sophisticated, many more variants of artificial neurons have come into use, with probabilistic and even chaotic responses. When the artificial neural network is set up, the connections between units are specified in advance. The presence or absence of a connection determines whether it is possible for one unit to influence another one at all. The weights assigned to each link specify the strength of that influence. A simple example of an artificial neural network consists of just three groups or layers of units: input, intermediate (or hidden), and output. The activity of the input units represents the incoming external information that is fed into the network, as it might be from the eyes or ears in a real brain. The activity of each hidden unit is determined by the signals from the input units and the weights on the connections between the input and hidden units. And the final output from the third set of units depends in turn on the activity of the hidden units and the weights on the connections between them and the output units. The key layer here is the middle one, because the weights between the input and hidden layers determine when each hidden unit is active, and—crucially—the hidden units are free to modify these weights and so choose which inputs are represented in the output. This is how the neuronal feedback in Hebb’s picture of the brain’s activity is brought into the computer model. Now we have to consider how an ANN learns and remembers. Again as a very simple example, we may consider how a triple-layer network is trained by being presented with a pattern of signals for the input units, together with the pattern that represents the desired output. The actual output is then compared with the desired output, and the weight of each connection is changed to produce a better approximation of the desired output. In a laboratory setting, the experimenter knows in advance the desired output pattern and gives the network an appropriate “learning rule” to regulate each weighted connection until that output is achieved. This is the difference between neural nets and ordinary computers. The digital computer is linear and entirely input-driven. Each step, in any conceivable situation, is determined by its initial program, and a given input will always produce the same output. But parallel processing systems, as the ANNs are properly called, are designed precisely to produce a certain output that will almost certainly not be the one initially produced by the input concerned. So their learning rules—unlike ordinary computer programs—lead to changes in the network’s response, even when the input does not change. In other words, they learn by experience to respond differently in order to achieve their goal. That is why supporters of artificial intelligence (AI) claim that these networks offer a model for the mind-brain that is not prone to the same criticisms as earlier computer models.
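To make the scheme concrete, here is a minimal illustrative sketch of such a triple-layer network with sigmoid units. It is trained by a gradient-based error-correcting rule (ordinary backpropagation), which is one common choice of “learning rule” rather than the only one; the layer sizes, learning rate, and the XOR training set are my choices for this example, not taken from any particular system described here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # smooth, analog ("sigmoid") response

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)   # input -> hidden weights
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # hidden -> output weights

# Training patterns: XOR, a mapping no single layer of units can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
target = np.array([[0], [1], [1], [0]], dtype=float)

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)          # weighted sums drive the hidden units
    output = sigmoid(hidden @ W2 + b2)     # hidden activity drives the output
    error = target - output                # desired output minus actual output

    # Learning rule: nudge every weight to shrink the remaining error.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_out; b2 += 0.5 * d_out.sum(axis=0)
    W1 += 0.5 * X.T @ d_hid;      b1 += 0.5 * d_hid.sum(axis=0)

print(np.round(output.ravel(), 2))         # should approach [0, 1, 1, 0]
```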
However, although many learning rules have been devised and ANNs have proved very effective at certain tasks, such as face and voice recognition, it is still a painstaking task to find out which representation and learning procedures are actually used by the brain. One recent success showed that attractor nets are used in the hippocampus, a brain structure that, as we have already seen, has long been associated with memory. Many other results are being achieved as well, including some that relate to the existence of self-organizing maps to achieve orientation sensitivity in early visual cortex. But there is still a long way to go. An added complication with ANNs is the fact that they are not actual machines, but simulations that must be run on a standard digital computer. It is maintained by many working in the field that this is not a problem, because anything analog can be modeled on a digital computer to any degree of approximation required. Igor Aleksander at Imperial College in London, for example, is a very optimistic and prominent promoter of new developments in AI. His WISARD program (Wilkie, Stonham, and Aleksander’s Recognition Device) was launched commercially in 1984. It is a pattern recognition device that is a simple form of neural network with binary weighted links between units. It showed that a simplified net could still be effective, and it had the advantage of greater speed in training and operation than more complex systems. WISARD was followed by the more sophisticated MAGNUS (Modular Array of General Neural Units), in which the intermediate units were connected by feedback to each other as well as to the input units. However, there are some within the machine-intelligence community who are less sanguine about the claim that a digital computer can adequately model analog devices. The skeptics include Richard Wilson, a former colleague of Aleksander and therefore someone in a good position to know (personal communication). ANN training is of three main sorts: unsupervised (in which there are no training targets, but the net tries to detect structure in the data), reward-based (with a simple reward from outside in response to performance), and supervised (with a full training set). All these have been heavily used in ANNs over the years. But it remains a shortcoming of ANNs that their training at present still depends upon an external operator to choose the desired output and devise the learning rules. Unless we fall back on the hypothesis of an intelligent creator, no such external assistance was available to the human brain as it developed
over evolutionary time. Even so, if these ideas about neural networks are set alongside the views of people like Gerald Edelman and Rodney Cotterill (see Chapter 7), in which actual and anticipated outcomes of action are said to be compared and the resultant match or mismatch acted upon, some convergence between biological and artificial neural networks does not seem impossible. Cotterill, at least, is optimistic about the production of consciousness in a machine-generated program and has just published his first report on the experimental program he has named CyberChild (Cotterill 2003). Turning back from artificial systems to biological ones, I now consider in more detail proposals for the actual sites in the human brain that relate to the different kinds of memory that we have been discussing. Until the advent of brain scanners, the chief way in which neuroscientists looked for this sort of information was by studying people like HM, who had suffered damage to known parts of their brains, and then correlating the damaged areas with the characteristic symptoms of their impaired memory. One structure in the brain that has long seemed to be essential for competent memory is the hippocampus, an evolutionarily early part of the cortex positioned near the center of the brain and constructed more simply than the far larger sheets of cortex that developed later. In 1973 Tim Bliss at the Institute of Cognitive Neuroscience in London discovered a property of hippocampal cells that could explain why the hippocampus is so crucial for laying down memory traces. We have already seen that every time an ordinary neuron fires, the pathway of which it is part becomes strengthened, and there is an increased likelihood of its being excited again, using the same synaptic junctions to receive and pass on signals. It is on heightened alert, as it were. But this normally is a short-term effect, and the pathway weakens again if not reused fairly soon. What Bliss and his colleague Terje Lømo found was that in the case of the hippocampus, two things were different. First, it was the cell on the other side of the synapse from the one that fired that became more reactive, and second—and much more significantly—this state of heightened reactivity could last for hours or even days. They named this phenomenon “long-term potentiation” (LTP), and they claim that it is the brain’s mechanism for the boosting of synaptic strength, the key to learning and memorizing, which Hebb had predicted and on which modern neural network theory depends. There is evidence that the hippocampus is especially concerned with what we have called episodic memory, that is, memory for events within one’s own personal experience. Memory for factual events of a more detached kind—semantic memory—seems to be
impaired especially when there is damage to the temporal cortex, which lies (as the name implies) below the temples. Both the hippocampus and the temporal cortex were effectively removed by HM’s surgery, and he consequently suffered a very general loss of memory, but studies on patients with more limited areas of brain damage show more specific symptoms of memory failure. There is an interesting contrast in this respect between the effects of two kinds of dementia. The all-too-familiar symptoms of Alzheimer’s disease begin to manifest themselves in impaired episodic memory, and physiological evidence shows that it is the inner structures of the brain, such as the hippocampal region, that first show deterioration in these cases. But the less common condition known as Pick’s disease, which in its final stages leads to total dementia similar to Alzheimer’s, is associated in its initial stages with quite different symptoms that appear to be speech impairments. In fact, these early symptoms have been shown to be caused by a developing loss of semantic memory. People’s speech appears to be affected because they cannot remember the names of things, so they go quiet or fumble for words. The initial physical deterioration in these patients is found not in the hippocampus but in the temporal cortex (see Greenfield 2000, 82–84). We shall return shortly to the loss of declarative memory (both its episodic and semantic forms) and consider possible ways to combat it. First we turn briefly to the neural basis of nondeclarative (procedural) memory, which works at a nonconscious level and enables us to carry out routine procedures such as walking and talking without having to think about them. Animal studies have shown that damage to the prefrontal cortex and the hippocampus, which effectively destroys declarative memory, does not cause any loss of procedural memory. This more primitive memory system is associated physically with some older subcortical parts of the brain that are known to be involved in the planning of movement. They include a group of structures known collectively as the basal ganglia and also the cerebellum, situated at the lower back of the head. These regions of the brain are linked to the motor cortex, from which signals for movement are sent to the muscles, and this connection makes sense in view of the fact that much procedural memory is concerned with posture and movement. Also, they were not among the parts removed in HM’s surgery, which explains his unimpaired procedural memory, as demonstrated in his improved performance with the Tower of Hanoi game. Positron emission tomography (PET) scan studies on normal human subjects have confirmed the finding that procedural
(nondeclarative) memory is laid down independently of the hippocampus and the medial temporal lobes. An interesting sidelight is thrown on the physical sites of nondeclarative memory by the loss of motor control that is typical of Parkinson’s disease and Huntington’s chorea. These symptoms could be interpreted as forgetting how to control one’s body, and this way of thinking about them is made more plausible by the fact that the basal ganglia and the cerebellum are the regions of the brain found to degenerate in patients with these diseases. The symptoms are the mirror image of amnesiac syndrome, since they involve a loss of procedural memory but generally no loss of declarative memory.
Combating Memory Loss
I have suggested that degeneration of hippocampal cells is a cause of declarative memory loss in diseases like Alzheimer’s and that the symptoms of Parkinson’s and Huntington’s sufferers derive from similar damage to the basal ganglia and the cerebellum, causing a loss of procedural memory. The question arises whether any medical intervention can be used to halt or even reverse these distressing losses. The answer is almost certainly yes, but the method at present involves injecting fetal brain cells, and their use is still ethically sensitive. The potential benefits of therapy involving the addition of just a small quantity of tissue have indeed been dramatically demonstrated in some studies carried out using rats. In the case of Parkinson’s sufferers, there have also been a number of neural transplants carried out in human patients. As Ramón y Cajal showed all those years ago (see Chapter 2), the neurons that make up the massive network that forms the brain’s nervous system are all individual cells. They are not physically connected to each other. Although they communicate across the synaptic gap from axon to dendrite, it is possible to draw them apart—and even to remove some of them from the brain entirely—without inflicting any damage on the individual cells. If they are introduced into a different brain, then assuming that matters such as tissue matching and so on have been addressed, they are quite capable of nestling into their new home and forming new synapses with neighboring cells. The malfunctioning of the hippocampus, which is responsible for the memory loss in Alzheimer’s disease, does not in the very earliest stages result from damage to the hippocampal cells themselves. It is a special class of neurons in an adjoining structure called the septum that degenerates first. In a healthy brain, these cells stimulate the neurons of the hippocampus,
and without that impetus from outside, as it were, the memory system cannot operate, even though it is itself still intact. In the experiments about to be described, it is these septal neurons that are taken from the embryonic rat and implanted in the amnesiac adults. Two experimental setups were used (see Bennett 1997; Toates 2001). The first consists of a simple T-maze, in which the rat has to decide whether to turn left or right at the end of a tunnel in order to be rewarded by food. There are pairs of trials. In the first trial, the left branch is blocked and the compulsory right turn leads to the food. In the second trial, both options are open, and the food is moved from the right to the left branch. Ordinary rats soon learn by trial and error that they need to turn left the second time round, and by the third week of testing (with about six pairs of trials per day), they get it 100 percent correct, despite always getting the reward in the right branch in the first run of each pair. However, with rats whose septal neurons adjoining the hippocampus have been surgically removed, the lesson is never learned. Even after twelve weeks of testing, on the second trial of each pair they still only get to the food 50 percent of the time. That is the random rate. They have not learned anything; they have not laid down a memory trace based on their previous experience. The same result is found in senile rats whose septal neurons have degenerated naturally (in the ratty equivalent of Alzheimer’s disease). But when healthy septal neurons from an embryo rat are injected into the appropriate place in the brain of the damaged rats, a remarkable recovery takes place. After three weeks, there is a noticeable improvement to above random chance (about 65 percent instead of 50 percent), and by ten to twelve weeks, the success rate has leveled off at 90 percent correct choices on the second trial of each pair. That is not perfect, but it is a huge improvement over the damaged state and would represent a revolution in quality of life if the same results could be achieved in humans with senile dementia. The injection of embryonic hippocampal cells instead of septal cells gives no improvement above random chance, showing that it is specifically the septal neurons that are essential for triggering the memory system. The second experimental setup employs a Morris water tank and uses two groups of rats, young ones (about six months old) and old ones (more than three years). The tank contains opaque water and has a platform—large enough for a rat to sit on—placed about an inch below the surface, where it cannot be seen. A rat is placed at an arbitrary position in the water and has to keep swimming until it finds the platform by touching it with its feet while swimming. Then
it can sit on the platform and take a rest. The stand with the platform is placed off-center, and around the tank are set easily distinguishable objects such as curtains, a clock, a pot plant, and a large picture of other rats. These objects allow the rat to know where it is and in what direction it is swimming and give it a way of fixing the location of the platform, once it is found. When a young rat is first put in the tank, it does not know that there is anywhere to rest, and it just swims around aimlessly. Once it has found the platform, however, it is able in subsequent trials to swim more or less straight to it and sit down, irrespective of whereabouts in the tank it is initially placed. Rats don’t swim for pleasure. Once the rat has established the location of the platform, presumably by laying down a memory trace of the relative positions of the objects around the tank, it can and does find it quickly and efficiently. Then for the final set of trials, the experimenter unkindly takes the platform away. The rat cannot now find the resting place and hunts for it furiously in the place it knows it should be. The animal’s swimming route—as revealed by a video camera placed above the tank—shows it continually crisscrossing over the spot where the platform had been. Even though it does not find it, the young rat keeps looking for it in exactly the right place. When one of the old rats is put in the tank for the same set of trials as the juveniles, one of two things can happen. With some of the seniors, the pattern of behavior follows more or less exactly that of their young cousins. They still have healthy memories. But with others the story is different. They just swim round aimlessly, and if by chance they bump into the platform on one trial, they are not able to steer for it on the next occasion. They have senile dementia, ratty Alzheimer’s. When the platform is removed, these age-impaired rats do not hunt for it, nor do they concentrate their swimming in the area where it had been. They have no memory of its being there and are not distressed by its absence. But if these senile rats are then injected with embryonic septal neurons at the hippocampus, their behavior is transformed, and over a series of trials they follow exactly the same pattern as the juvenile and healthy senior animals (see Gage and Bjorklund 1986). I personally find this the most exciting and encouraging piece of research that I have come across in ten years’ engagement with the science of consciousness. I trust and pray that it will soon be translated into practical therapy for the millions of people afflicted by dementia in old age. Our sense of self, the distinction we make between “me” and the rest of the world that is “not me,” is crucial to the self-identity and worth of each of us, but it is a complex business much debated by
philosophers and psychologists. There is general agreement that there are two aspects to our self-awareness. One is our sense of personal identity here and now, what is called “synchronic unity.” It is taken for granted by most of us, and only when we are confronted by the experiences of schizophrenics and sufferers of dissociative identity disorder (previously called “multiple personality disorder”) do we even become aware of it as something special and precious. The second aspect of the self relates to our sense of identity over time, or diachronic unity. In what sense am I the same person as I was ten years, ten minutes, or ten seconds ago? Is the autobiographical self, as Gerald Edelman calls it, a continuing and really existing person, rightly held accountable for past deeds and future intentions? Or is it just a useful fiction, like the concept of a center of gravity in physical objects, as philosopher of mind Daniel Dennett would have us believe? Whatever the answer is to these deep questions, and irrespective of whether the self is a fact or a fiction, few would dispute the centrality of memory to our own sense of ourselves. It follows that any therapies that can protect, enhance, or even restore our memory are likely to be treated as a pearl of great price.
9
Quantum Physics and Consciousness
One of the exciting things about consciousness studies is that surprises are always turning up. Not least of these is the way in which quantum mechanics (QM), the mathematical laws governing the behavior of microscopic systems, contributes to much of the debate. The reason for this influence is not far to seek. We have seen before in this book that René Descartes’s hard-and-fast division between physical things and mental things, each with its own set of incompatible features, still lies at the heart of much discussion on consciousness. Even when his particular brand of substance dualism has been rejected, its ghost seems still to haunt the discussion. New dualisms rise up to take its place, such as the distinctions between subjective versus objective, first-person versus third-person, and things known by experience versus things known by report. What quantum theory has done is to blur these distinctions. The subjective mental world and objective physical world now seem, after all, to be inextricably bound up with each other.
Quantum Mechanics
Quantum theory developed out of investigations carried out 100 or so years ago into the very tiniest particles of matter that it was possible to detect (such as the electron, a subatomic particle discovered by the Englishman J. J. Thomson in 1897). The classical laws governing how solid objects move had been formulated by Isaac Newton in the seventeenth century. A given object can only be in one place at any one time, and if we know its present position and its momentum (calculated from its speed, direction, and mass), then its position at any other time, past or future, can be calculated. The position and momentum together give a complete description of the object’s “state,” as it is termed. It is all quite simple.
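In symbols (a standard classical-mechanics formula, not one given in this chapter), the state of a free particle of mass $m$ is the pair $(x_0, p)$, and its trajectory follows deterministically:

```latex
x(t) \;=\; x_0 + \frac{p}{m}\,t , \qquad p = m v .
```

Knowing the state at one instant fixes the position at every past and future instant.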
[Portrait: Isaac Newton, who formulated the classical laws governing how solid objects move. (Courtesy of Thoemmes Press)]
But at the beginning of the twentieth century, it was found that very small objects seemed not to obey Newton’s laws. “Quantum objects,” as they are called—things such as electrons—do not have a definite position and momentum that we can determine separately in order to calculate their state at a given time. With quantum objects, we have to work with the combined information represented by the state, and it is not possible to guess at how it is made up—that is, what contribution is made by the position and what contribution by the momentum. It would be like trying to unscramble eggs or tell whether a dozen is made up of six pairs or four threes. The state of a quantum object is not given as a bare number, like twelve, but as an equation called the “wave function.” Although it does not tell us exactly where the object is, it does give us the probability of its being found in any given position. In a standard experiment, we shine a single light particle—a “photon,” the quantum of light proposed by Albert Einstein—at a “half-mirror.” This is a mirror with a semireflecting surface. It is designed so that, on average, half the photons striking it will be let through, and half will be reflected back. Which has ours done? For any length of time after the light particle meets the mirror, the wave function will give us the choice of two possible positions, one in front of the mirror and the other behind it. However, once we have taken a look and found the particle in one place or the other, that position is fixed. The other possibility ceases to be. In the jargon of quantum mechanics, the wave function has “collapsed.”
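In the standard notation of quantum mechanics (a textbook formula, not one quoted in this chapter), the photon’s state after the half-mirror is an equal superposition of the two positions, and each probability is the square of its coefficient:

```latex
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\,|\text{in front}\rangle
             \;+\; \frac{1}{\sqrt{2}}\,|\text{behind}\rangle ,
\qquad
P \;=\; \left(\frac{1}{\sqrt{2}}\right)^{\!2} \;=\; \frac{1}{2} .
```

Observation (“collapse”) leaves just one of the two terms standing.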
Now for the $64,000 question: we know where the photon was after we observed it, but which side of the mirror was it on a split second before the observation was made? The way I have told the story, your reaction ought to be, "That's easy. A split second before you measured the photon's position it must have been on the same side of the mirror as you found it when you looked. It is just like tossing a coin. Before you do it there is a fifty-fifty chance of heads or tails, but once you have tossed it—even before you look to see which side is facing up—the result is already fixed." That is the commonsense reaction. It was also the attitude taken by no less a giant in the world of physics than Einstein himself. He said that a quantum object behaves just like a classical one. It always has a definite position, even if we in our ignorance do not know where it is. Einstein, however, was in the minority. Most physicists followed the lead of the Danish researcher Niels Bohr (1885–1962), who proposed what has become known as the standard or "Copenhagen" interpretation of QM, the name being taken from the city where his research institute was based (see Herbert 1985). Bohr made the unlikely assertion that until the observation was made, the photon was in neither one position nor the other. It is not just that we are ignorant of its position; it literally has no definite position. Only when it is observed and the wave function collapses is its position determined. And that position will accord with the statistical chances predicted by the wave function, just as the likelihood of a given number of spots showing when a pair of dice is thrown can be predicted on a statistical basis. It was this analogy, and the suggestion that the state of the universe depended on probability and the chance event of something being observed, that so appalled Albert Einstein. He wrote to a fellow physicist, Max Born, who went along with the Copenhagen interpretation, remonstrating with him on this very question. The words he used have become part of the folklore of quantum physics. "You," Einstein wrote, "believe in a God who plays dice, and I in complete law and order" (quoted in Stewart 1990, 1). Einstein accepted the mathematics of the wave function, but he regarded the uncertainty it contained as a mere smokescreen, which temporarily obscured the true facts from us. According to him, it should not be understood as a literal description of an uncertain world. The popularity of Bohr's apparently absurd view—that a physical object like an electron could literally have no definite position until its position was actually observed—can perhaps be understood better if we consider another classic experiment from particle physics. First we aim our quantum objects, let's say a stream of electrons, directly at a phosphor screen, just like that used in a domestic TV set. The effect is to create a bright spot at the center of the screen, as the electrons striking it cause the phosphor to glow. The more electrons that are fired per second, the brighter the spot becomes, and if the rate is turned right down, the spot gets dimmer and dimmer until eventually it is replaced by individual blips, lighting up as the phosphor is excited by single electrons but with enough of a gap in between them for the screen to go dark in the interval. This is clear evidence that the stream of electrons is made up of individual particles, which at a low enough density can be counted one by one.
But that is only the first part of this experiment. For the second stage, we place a partial barrier between the source of electrons and the screen. But this time, instead of a half-mirror, we use a sheet of material with an adjustable hole in the center. The electrons can get through the hole but not through the solid material surrounding it. When the hole is large compared with the size of an electron, they behave just as they did when there was no barrier at all. But if the size of the hole is decreased, the spot of light on the screen gets smaller as well, rather as the jet of water from a garden hose will get smaller if the nozzle size is reduced. That is perhaps to be expected. But what happens next is definitely not expected: as the hole gets smaller still—to a size more like that of the electron itself—the size of the bright spot on the screen starts to get bigger again. And when the hole is made even smaller, the screen stops showing a single spot of light and shows instead a series of concentric light and dark rings rather like the target used by archers, with a bull's eye in the middle. When this happens, the position of the rings is not haphazard. They conform to a mathematical pattern that has been familiar to physicists since the 1830s, when it was discovered by the then Astronomer Royal George Airy and named the Airy pattern in his honor. This pattern has only one known cause: it is the pattern that results from a wave—any sort of wave—being forced to go through a circular hole whose size is comparable with the wavelength involved. The only explanation for the electron's making this precise pattern is that it is behaving like a tiny wave and not like a tiny particle.
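The Airy pattern has a standard mathematical form, so it is easy to reproduce. In the sketch below the intensity formula is textbook optics; the wavelength, hole size, and angles are invented for illustration, and the final lines mimic the one-particle-at-a-time version of the experiment described in the next paragraph.

    import numpy as np
    from scipy.special import j1   # Bessel function of the first kind

    # Standard Airy formula: intensity at angle theta for a wave of
    # wavelength lam passing through a circular hole of radius a.
    def airy_intensity(theta, lam, a):
        x = (2 * np.pi / lam) * a * np.sin(theta)
        x = np.where(x == 0, 1e-12, x)       # avoid 0/0 at the center
        return (2 * j1(x) / x) ** 2          # bright center, dark rings

    theta = np.linspace(-0.02, 0.02, 2001)        # viewing angles (radians)
    I = airy_intensity(theta, lam=5e-7, a=2e-5)   # invented: 500 nm, 20 um

    # Even one particle at a time builds the same rings: treat I as a
    # probability distribution and record single "blips," as in the
    # one-electron-per-minute experiment.
    rng = np.random.default_rng(0)
    blips = rng.choice(theta, size=10_000, p=I / I.sum())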
So we have to draw the uncomfortable conclusion that the first part of this experiment—firing the electrons directly at the screen—proves they are particles (because at low density they can be individually counted), but the second part of the experiment—firing the electrons at the screen through a small circular hole—proves that they are not particles at all but waves. Since Newton's days in the seventeenth century, there had been an argument among scientists as to whether light was composed of waves or particles ("corpuscles," as Newton called them). Then in the first quarter of the twentieth century came this experimental evidence that these very tiny quantum entities like light and electrons are both particles and waves. Some people said that this finding could be explained sensibly if the waves were made up of the particles, just as a wave on the ocean is made up of the water or a sound wave is made up of gas molecules in the air. But this way out of the difficulty was not open. It is known that systems of this kind cease to exhibit wavelike behavior if the density of molecules is drastically reduced. Sound cannot travel in a vacuum. But in the case of the electron, it was found that even if the density was reduced to just a single electron per minute passing through the hole in the second stage of the experiment, the positions of the individual blips, recorded on a photographic plate over a period of several days, created the Airy pattern. Even acting singly, electrons exhibited wavelike as well as particlelike features. It is not just the role assigned to chance that makes the standard view so unpalatable to scientists. There is also the problem that it implies two realms coexisting in the physical world: one realm is inhabited by subatomic particles and governed by the strange laws of quantum mechanics, and the other is made up of larger objects—from marbles to planets—that follow Newton's classical laws of motion. This division is problematic for the simple reason that large (or "classical") objects are themselves all made up of tiny atomic and subatomic or "quantum" objects. So if we consider this table at which I am sitting as a single piece of furniture, it is a classical object obeying one set of rules; but if we consider that it is made up of millions of electrons and the like, we must treat it as a collection of quantum objects obeying quite another set of rules. Surely they can't both be true, so which is the description of the real world? Bohr said it was a mistake to try to think of the quantum realm in isolation. It cannot be understood, he said, separately from the classical world of measuring instruments. This is because the properties of a quantum object do not belong to the electron or whatever in itself but are the joint product of the whole experimental apparatus. Consequently, it is meaningless to ask questions about the quantum world per se or to try to describe it. We should just accept that the mathematics works and leave questions of reality to the philosophers (Bohr 1934). That remains the standard Copenhagen position, but others—besides Einstein—were less sanguine. Even the man who formulated the notorious wave function in the first place—Erwin Schrödinger (1887–1961)—was far from happy with the implications of his own equation. That was how he came to devise a thought experiment to demonstrate how absurd it all was—and unwittingly created the most famous nonexistent cat since Lewis Carroll. This is the story he told as a kind of reductio ad absurdum against his own work. Repeating it here will delay our application of all this to the science of consciousness, but it is too famous a part of the history of quantum physics for me to leave it out.
Computer graphic depicting the famous "Schrödinger's Cat" thought experiment. In this hypothetical situation, the cat is thought to be both alive and dead until observed. This is because a quantum event is set up to trigger the release of a lethal poison that kills the cat. According to quantum physics, the unstable particle exists in an intermediate "probabilistic" state until it is observed. The Austrian physicist Erwin Schrödinger devised this experiment to demonstrate the bizarre philosophical implications of quantum theory. (Mehau Kulyk/Science Photo Library)
Imagine a standard quantum experiment—such as the one with the half-mirror—in which the apparatus is contained in a box that also houses a sealed bottle of cyanide with a hammer suspended above it and a cat. The setup is arranged so that if the photon takes one of its possible paths it will harmlessly hit the side of the box, but if it takes the other, then it will trigger a light-sensitive device that will release the hammer. The hammer will then swing down and smash the bottle, releasing the cyanide and killing the cat. (I emphasize that this is only an imaginary experiment; no actual cat has ever had to play the part of Schrödinger's cat.)
Now the point of the story is this: At the start of the experiment, the cat is alive. By the end, it is either alive or dead, with a 50 percent chance of each possibility. But what about the period in between? According to the Copenhagen interpretation, the path of the photon is only fixed when it is observed. Until then it has taken neither possible path; or, if you prefer, it has taken both of them. So what is the state of the cat between the time that the photon strikes the mirror and the time the experimenter opens the box to observe the result? Einstein—because he did not accept the Copenhagen version of events—would say that it was either definitely alive or definitely dead, with a fifty-fifty chance of either outcome (just like the unobserved coin already being either heads or tails). But Bohr and his colleagues would say that until the observation was made, the position of the fatal photon was not definite; so it is undecided whether it did or did not trigger the light-sensitive device. Hence, the state of the cat is said to be neither dead nor alive or else both alive and dead simultaneously. That, claimed Schrödinger, is patent nonsense, so there must be something wrong with the theory. Others said it just showed the perils of trying to treat the quantum world as a real world instead of a purely mathematical construct. Yet others asked: Did not the cat observe the photon's path? (Schrödinger's original 1935 German paper is available in English translation in Wheeler and Zurek 1983.) The question of whether the cat counted as an observer opens into a wider question that brings us to the relevance of all this to consciousness studies. We have already seen what happens in a typical experiment in quantum physics. When an observation is recorded—say, on a phosphor screen or photographic plate—quantum entities (like photons or electrons) will appear as particles in precise positions. But their observed distribution is predicted by the wave function, and in appropriate conditions they exhibit the wave-associated Airy ring pattern. This behavior suggests that while unobserved they were behaving as waves—which can spread out in more than one direction at once—but once they are observed, they have just a single position, a characteristic of a particle. The act of observation "collapses the wave function," and an actual state precipitates, as it were, out of the cloud of previously possible states. This is what scandalized Einstein: that the chance event of an observation should not merely uncover but actually create the reality of the state of a system. But the question arises: What is so special about a device such as a light-sensitive plate, that its interception of a quantum entity should count as an observation and so "create" actuality in this way?
Consciousness and Quantum Science

The person who pushed this question to the limit was John von Neumann (1903–1957), one of the greatest mathematicians of the twentieth century and the mathematical genius behind the modern computer. He asked us to imagine what has become known as "von Neumann's chain," that is, a whole series of steps from the moment a quantum entity—such as a photon from an experimenter's light source—sets out on its way to the point when its eventual position on the light-sensitive screen is noted in the experimenter's notebook. Where, he asks, in that whole chain of events, is the vital step that counts as the observation that collapses the wave function and turns potentiality into actuality? Is it when the photon does or does not go through the mirror? Is it when it hits the screen? Is it when the experimenter looks at the screen or writes down the position? What is the unique moment? Von Neumann's own answer was that every stage simply involved atoms and molecules interacting with each other—the light source, the photon itself, the half-reflecting mirror, the light-sensitive screen, the retina of the experimenter's eyes, his hands writing with his pencil in his notebook—all of this was simply molecules interacting. Except for one unique step. That, said von Neumann, was the point in the chain when the physical signal reaching the experimenter's brain became a conscious experience in the experimenter's mind (von Neumann 1932/1955). Von Neumann thus concluded that, if there was indeed a collapse of the wave function—by which we mean a moment at which the quantum wave of possibility became an actual fact in the world of classical objects—then that collapse had to be identified with the moment of conscious observation. The mere moment of a photon striking a screen and causing a blip could not be the vital reality-creating moment because without a conscious observer to interpret it, that blip was not an observation or a measurement, but just another meaningless interaction of subatomic particles. Only the moment of conscious observation could qualify as the "magic" moment. And if that turned out to be true, then a revolution had indeed occurred in science. Von Neumann and those physicists such as Eugene Wigner (1902–1995) and Henry Stapp, who followed his lead, drew the inference that there is no logical end to the chain of events in a quantum experiment until the point of recognition of a measurement in a conscious observer's mind (Wigner 1961/1983). Von Neumann therefore amended the standard Copenhagen interpretation—which says, remember, that
the quantum world can only be understood in relation to the classical world of measuring instruments—to say that not just any recorded measurement but only conscious observation could precipitate physical actuality out of quantum possibility. There had long been philosophers (called idealists) who claimed that mind was prior to matter, that the physical world was somehow the creation of consciousness. But that was mere speculation. Here you had the unbelievable spectacle of physicists of all people, hard-nosed scientists whose whole life and work turned on the objective study of the physical world, coming up with their own suggestion that maybe the physical world was not so objective after all. That is the fundamental reason for the discussion of quantum physics with respect to consciousness studies. The name most often associated with these ideas today is that of Henry Stapp at the University of California at Berkeley. Like many others, he finds the core of the mind-brain problem in the gulf between the intuitive sense that our thoughts cause our bodily actions and the classical theory of matter that makes any real causal effect of our thoughts on our bodily actions unthinkable and impossible. That classical theory was based on the notion that only matter can affect the activity of matter. The physical world was taken to consist of a vast number of tiny components, each of which would change or stay the same entirely according to the influence of its immediate neighbors. There were no causal connections other than those attributable to these local interactions. To be sure, these microscopic particles accumulate and are perceived by us to be large objects, such as rods or pistons, or large systems, such as oceans or hurricanes. And these large objects and systems can be considered to exert causal influences on surrounding objects and systems. But according to the principles of classical physics, these influences are completely reducible to local mechanical interactions between microscopic neighbors. This is the essence of physicalist reductionism, and there is no room in this scheme for any entity that can actually grasp large complex structures—such as human bodies—as whole units and guide our physical actions on that basis, in the way that our thoughts appear to do. However, as we have seen, quantum mechanics constitutes a radical conceptual departure from that classical ideal because the thinking human observer is brought into the actual dynamic development of the world. In Stapp's view, the role of the conscious human observer in quantum dynamics provides the basis within contemporary physical theory for the actual control of actions by thoughts. This is a kind of influence that is not reducible to matter alone and for which there was no place in classical physics.
In 1990 Stapp wrote an article, later published in his book Mind, Matter, and Quantum Mechanics (1993), in which he took the widely canvassed but rather vague notion that quantum theory and consciousness were somehow linked and nailed it down to a specific quantum interpretation of the mind-brain relation. He adopted the generally accepted view among neuroscientists that a person's body in its environment is somehow represented within the brain by certain patterns of neural activity, known as the body-world schema. There are also patterns of brain activity associated with every conscious thought (see Chapter 4 on the neural correlates of consciousness). Each time a particular pattern is activated, it changes the physical structure slightly, so that next time the brain is stimulated in a similar way, that same pattern is more likely to be activated again than a different pattern that has not occurred before. This process facilitates the recall of earlier thoughts and experiences and contributes to memory and a sense of personal continuity (as we saw in Chapter 8 on memory). Stapp treats the activity of an alert brain as essentially a search process: the brain, conditioned by earlier experience, searches for a satisfactory response to each new situation that the organism faces. By a "response" he means a carefully tuned pattern of firings of some collection of neurons. This pattern he calls a "template for action," or an executive-level template, and it is based on the brain's current body-world schema. A "satisfactory response" will be one in which the executive-level template initiates an action that improves the organism's well-being. Stapp suggests that there are two kinds of templated actions: one he calls "attentions" and the other "intentions." An attention, or attentional event, is backward-directed and updates the schema in the light of recent changes to the body or its environment; it is clearly important that the body-world schema, on which all templates for action rely, should be as accurate and up-to-date as possible. An intentional event is future-oriented and causes one particular body-world schema to be selected from a range of possible ones to become the actual one. In Stapp's terminology, an intention actualizes a body-world schema. It is important to be clear that this does not in itself constitute a bodily action in the environment. An actualized schema is an image within the brain, and its production is a brain process confined within the brain. But it is an image within the brain of a particular intended state of the body-world, and its selection from all the possible intended states is the precursor to the bodily action in the environment that will bring about that state. We are now at the point to which the last few paragraphs have been leading. I have said that an intention, that is, an intentional event,
is the selection of one outcome from among a range of possibilities. This statement sounds very much like the description in quantum theory of what happens when the wave function collapses, and it is Stapp's belief that each actualized body-world schema is indeed the outcome of what he calls a phenomenal quantum event. That, in his view, is the key to the causal interaction between thoughts and actions. Let's run over the causal links again, tracing them backward from the final action: the body's ultimate action in its environment is caused in a straightforward automatic physical way by the brain's actualized body-world schema; the actualized body-world schema in its turn is caused in a straightforward automatic physical way by the brain's intentional event; the intentional event in its own turn is caused in a straightforward automatic physical way by the "template for action"; and it is at that executive-level template for action that Stapp postulates the collapse of the wave function occurs, selecting one response to some new situation faced by the organism from among the many possibilities. Remember that the wave function does not represent actuality itself but only probabilities or "objective tendencies" for the next actual event. So the collapse of the wave function is necessary to bring about an actual situation that corresponds with human experience. But what causes the wave function to collapse in the way it does? That is the crucial question because whatever causes the collapse, according to the causal chain just outlined, causes the ultimate bodily action in the environment. In the standard Copenhagen view of the collapse, one of the alternative possibilities generated by the wave equation is selected and made actual, and that selection comes down to chance. But Stapp is no happier with the idea of chance governing everything than Einstein was. He concedes that the chance element was acceptable for Bohr and company because they made no claim to describe in any detail what was "really happening." It was enough for them that the mathematics worked to describe observed outcomes of measurements. But Stapp wants to make an assault on a description of reality itself, especially the reality underlying our experience of mental causation. This follows from his desire to understand how thoughts control actions. He turned to quantum theory in the first place because classical physics fails to provide an answer to this question. Now he does not regard Copenhagen's "pure chance" as a satisfactory answer either. This way of speaking, he says, is merely a mask for ignorance of the true cause. So he refers back to von Neumann's version of the quantum story, which does offer a candidate for the cause of the collapse of the wave function: conscious observation.
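Stapp offers no algorithm, of course, and none should be read into the following toy sketch; it simply lays out the structure of his claim: several superposed "templates for action," exactly one of which is actualized at the collapse. The template names and amplitudes are entirely my own inventions.

    import random

    # Toy illustration ONLY: candidate "templates for action" held in
    # superposition, each with an amplitude whose square gives the
    # probability of being actualized. Names and numbers are invented.

    templates = {"raise the arm": 0.8,
                 "lower the arm": 0.5,
                 "do nothing":    0.332}   # squares sum to roughly 1

    probs = {name: amp ** 2 for name, amp in templates.items()}

    def intentional_event():
        """One template is selected; its superposed rivals are excluded.
        What fixes the selection is exactly what is in dispute: chance
        (Copenhagen) or conscious observation (von Neumann and Stapp)."""
        (choice,) = random.choices(list(probs), weights=probs.values())
        return choice

    print(intentional_event())   # e.g. "raise the arm"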
There is, however, an important difference between the original context of von Neumann's proposal and the way the idea is applied by Stapp. Von Neumann's imagined "chain" consisted of a large system, including the quantum entity to be measured, a device to measure some property, a human observer's eye, and finally a human brain. In that context, the observer's consciousness determined the outcome of the earlier distant experiment on the atomic system. But Stapp focuses on the mind-brain system itself, and the outcome determined by the person's conscious observation is not the position of some distant photon but the selection of a particular template for action in the person's own brain, for instance, the intention to raise an arm. He postulates that the critical "observation" is the conscious event that is experienced as the decision to raise one's arm and that the physical brain event (neural correlate) that corresponds to it is none other than the collapse of the wave function, the same collapse that excludes all the other superposed possibilities to bring about this particular template for action, "raise the arm." That is, he is saying that a conscious event (the occurrence of the psychological decision "raise the arm") is represented in the physical description of nature by a corresponding neural event (the collapse of the wave function), which is the same neural event that sets off the straightforward automatic physical chain of events that results in the raising of the arm. This sequence of events is in line with what seems, intuitively, to be the role of consciousness, but Stapp is not home and dry yet. It is one thing to propose a correlation between the psychological decision to act and the brain event that initiates that action; it is quite another to show a real causal connection. To see how Stapp does this, we need to take a step back and look carefully at how experience fits into his quantum model of the mind-brain that we have been considering. Everyone agrees that much—probably most—of the brain's activity is automatic and nonconscious. In Stapp's proposal, this ongoing automatic activity is punctuated by a series of conscious events, each of which actualizes a template for action. By means of the automatic spread of neural activity that it initiates, this newly created executive-level template automatically controls three kinds of processes: motor action, the collection of new information (including monitoring the ongoing processes it has initiated), and—most significant for our purposes—the formation of the next template for action. Classical physics was deterministic and said there would be only one possible "next template" leading to a single possible outcome. Quantum physics says there will be many possible "next templates," each with its own possible outcome, but human
perience says that there is only ever a single outcome and that such outcomes are always classically describable. To bring the quantum process into accord with this human experience, the Copenhagen school stipulated that only one of the many possibilities would be actualized (by the chance event of the collapse of the wave function) and that only the actualized possibility was capable of being experienced. Stapp says that it was the “great and essential move” of the Copenhagen theorists to realize that their theory was not really about the hidden subatomic world of possibilities at all but about the observed world with its classical properties and behavior (Stapp 1996, 204). Von Neumann and Wigner added the connection of the actual to experience by proposing that no quantum possibility could be actualized except by being experienced (that is, the wave function could only be collapsed by the act of conscious experience). Stapp builds on both these traditions, emphasizing the surprising facts that even the classical aspect of nature does not come from the physical side but from the experiential side and that the experiential aspect of the actualization events is the cause of their classical character. This is not to deny the physical side of nature but to understand it in a new way. Being physical no longer means being “material” but being a structure in space and time that somehow holds (“encodes”) knowledge or information created by earlier events. Although it contrasts with the old view that equated “physical” with so-called dead matter, this approach is not totally novel, and Stapp sees it as essentially orthodox quantum theory. He accepts and builds on the Copenhagen view that the state of a system specified by quantum theory represents knowledge in a general and observation-independent sense and acknowledges Werner Heisenberg, a leading figure in the Copenhagen school, as the source of his idea that this space-time structure is active, in the sense that its encoded information creates tendencies for future events to occur. These future events are themselves experiential-type events, so they create more knowledge or information that is, in turn, encoded in the physical structure in the way specified by the quantum equations. This dynamic understanding of what it means to be physical is the key to the question of causation. We began with the “interaction problem,” the classical idea that matter can only be influenced by other neighboring matter and therefore thoughts cannot cause physical actions. By abandoning this picture of nature, in which the physical is implicitly equated with the material, in favor of one in which the physical is understood as an active structure balancing the experiential aspects of nature, Stapp removes the interaction problem at a stroke. It enables him to say both Quantum Physics and Consciousness
that large-scale classical things like human-bodies-in-action derive their essential character from the fact that they are experienced and also that they do so not in spite of their being physical, but as an aspect of what it means for them to be physical. So when he says that conscious thoughts cause bodily actions, he is not claiming that something nonmaterial is influencing matter; he is giving an example of natural physical dynamic events unfolding in a world that is coherently and inherently both material and experiential.
A Possible Quantum Origin for Consciousness

The interpretation proposed by von Neumann and developed by Wigner and Stapp suggests that the collapse of the wave function is caused by the conscious experience of a quantum system. It is important to remember, however, that this is a minority view and that in the physics community, there is no agreement about the cause of the collapse of the wave function. One alternative is Wojciech Zurek's decoherence model. To understand it, recall that the original clue that quantum systems had a wavelike character lay in their ability to produce an Airy pattern of rings. These rings are an example of an "interference" pattern, and interference—in the special sense of the word applied to the behavior of waves—only occurs when overlapping waves have what are called "coherent phases." It is a characteristic of a quantum system in a state of superposition that the branches of the wave function that describe its different possibilities all have coherent phases with respect to each other, hence the ability to produce Airy rings. However, Zurek and his colleagues have shown that this coherence is lost when a quantum system interacts with the many independent parts of its environment. Decoherence occurs (Zurek 1991). The result of decoherence is that the branches of the wave function can no longer interfere with each other. Thus instead of a quantum system in which the mutually exclusive possibilities A and B and C are described as all in place simultaneously, we now have a classical situation in which only one of the possibilities A or B or C is actually the case. In other words, the wave function has collapsed, and decoherence is sufficient to explain it. "What else," Zurek asks, "is there to explain?" (Zurek 1991, 43). But not all physicists are convinced. The shift from A and B and C to A or B or C may indicate that the wave branches describing the different possibilities can no longer interfere, but it does not necessarily mean that they can no longer coexist in superposition. So decoherence does not necessarily mean the collapse of the wave function and the consequent shift from a quantum to a classical description.
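The gist of decoherence can be shown with a standard two-state example. In the sketch below, a superposition is written as a density matrix whose off-diagonal entries carry the interference; the exponential decay law and the timescale are assumptions chosen purely for illustration, not Zurek's actual model.

    import numpy as np

    # A two-state superposition, written as a density matrix. The
    # diagonal entries are the probabilities of A and B; the
    # off-diagonal entries carry the capacity to interfere.
    rho = 0.5 * np.array([[1.0, 1.0],
                          [1.0, 1.0]])

    def decohere(rho, t, tau=1.0):
        """Environmental interaction damps the off-diagonal terms.
        The exponential law and timescale tau are assumed here
        purely for illustration."""
        out = rho.copy()
        out[0, 1] *= np.exp(-t / tau)
        out[1, 0] *= np.exp(-t / tau)
        return out

    print(decohere(rho, t=10.0))
    # -> approximately [[0.5, 0.0], [0.0, 0.5]]. Interference is gone,
    # but BOTH possibilities still sit on the diagonal, which is why
    # some physicists say decoherence alone does not pick out the
    # single outcome we actually experience.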
In this last view, Stapp's collapse-by-consciousness theory and Zurek's collapse-by-decoherence theory are both incomplete. Both need an extra element from outside currently known physics to explain the transition from the quantum to the classical description. Oxford mathematical physicist Roger Penrose is among those who say that neither conscious observation nor decoherence can fully explain the collapse of the wave function. In an interesting twist, he and his colleague Stuart Hameroff, who is an anesthesiologist in Tucson, Arizona, turn Stapp's theory on its head. They claim that instead of conscious experience collapsing the wave function, in certain circumstances the spontaneous collapse of the wave function might itself provide the physical mechanism for bringing about consciousness (Penrose 1994a, 1994b; Hameroff 1987, 1994). Penrose and Hameroff propose that in the right conditions—conditions that are found in parts of the brain's cells called "microtubules"—the spontaneous or "objective" collapse of the wave function by quantum systems may provide the physical mechanism that causes conscious events to occur. Despite Penrose's eminence and although no one has actually been able to prove him wrong, the theory has come in for as much criticism as Stapp's. Penrose was criticized in part because he developed Hameroff's ideas about microtubules to support his independent mathematical arguments against the computational model of mind (see Chapter 6). Supporters of the computational approach and of strongly reductionist theories of mind in general have thus been among the keenest critics of the microtubule hypothesis. Rick Grush and Patricia Churchland, for instance, in a scathing conclusion to a paper directed against Penrose and Hameroff, write: "Nothing we have said in this paper demonstrates the falsity of the quantum-consciousness connection. Our view is just that it is no better supported than any one of a gazillion caterpillar-with-hookah hypotheses" (Grush and Churchland 1995, 28). Let's look at the argument. Penrose opens his account by reference to what he calls the "awkwardness" of describing the world on the two different levels of classical and quantum physics and the consequent need to find a satisfactory way of moving between them. The classical level is used for large-scale objects, summarized in Newton's laws of motion, Clerk Maxwell's electromagnetic equations, and Einstein's equations for relativity. At the other end of the scale, for small things, is the quantum level described by the Schrödinger equation. Within both these levels, everything that happens is completely deterministic and computable. Penrose points out that if we remain firmly in the quantum
A cartoonist's view of the argument over quantum theories of consciousness. Pat Churchland and Rick Grush are pictured as ostriches with their heads in the sand after they dismissed the theory of Stuart Hameroff (the caterpillar) and Roger Penrose (the rabbit) as "no better supported than any one of a gazillion caterpillar-with-hookah hypotheses." (Imprint Academic)
realm, governed entirely by the wave equation, the system is completely deterministic. There is nothing random in the way quantum mechanics behaves at this level. The element of probability, which we commonly associate with quantum mechanics, comes about only when a quantum event evolves up to the classical level for observation and measurement. That is to say, the element of randomness occurs only in the transition between the two levels, the transition that we have been calling the collapse of the wave function, but for which Penrose prefers an alternative term: "the reduction of the state vector." The different terminology does not change the theory. If we just think of all the possibilities encoded in the state of a system at the quantum level "reducing" or "collapsing" to a single actuality when observed at the classical level, that is enough to follow the argument. As we have seen, there is a fundamental difference between quantum and classical levels of description. In ordinary classical physics, we would say that there is a probability of an event happening or not happening and that only one or the other outcome can actually occur; but such a description is inadmissible in quantum physics. There are familiar difficulties in moving from one world to the other—as illustrated by the plight of Schrödinger's cat—and Penrose thinks we must look outside present-day physics to clear up this paradox. He regards the process of collapse or reduction, as currently explained, as just a stopgap idea, an approximation to sidestep the fact that no one knows what is really going on. Always a bold spirit (he teamed up with Stephen Hawking to predict "black holes"), Penrose propounds a new way of thinking about collapse, which he calls "objective reduction" (OR). He asks us to imagine a situation, somewhat
like the case of Schrödinger's cat, in which two possible states—let us say it is a heavy ball in either an original or a new position—are in a quantum superposition. In standard quantum theory, the ball-in-its-original-position and the ball-in-its-new-position remain in a stable superposed condition until there is an instantaneous reduction/collapse to one state or the other. But Penrose's OR version of the theory says that under certain conditions, this superposition is an unstable configuration. This proposed instability is Penrose's own suggestion and is something going beyond the usual idea of reduction. It can be thought of in a similar way to the instability of a radioactive isotope, which decays into stable products. Though the system starts off in a superposition, it will reduce into a state in which the ball is in either one position or the other, and it will do so at a decay rate that Penrose reckons he can calculate. In most physical situations, the environment would provide the major contribution to the state's reduction/collapse. Since the influence of the environment is random and its effect on the reduction dominant, it follows that whatever happens in the course of the reduction is effectively random as well. This is the basis of Zurek's decoherence theory of collapse/reduction. What Penrose claims is that the familiar randomness of the reduction in standard quantum measurements is a direct result of this environmental effect. If we could isolate the system very well and shield it from any random environmental influences, then he says that we would start to see the difference between the random procedures we normally have in quantum effects and the controlled decay rates associated with OR, which he says must really be going on. He concedes that experimental tests are a good way off at present but insists that there is nothing in principle to say that we cannot test the idea. Penrose is convinced that OR holds the key to how noncomputational action—such as he claims must be present in the mind—enters the scene. But to justify biologically his claim that human thought has capacities that are in principle impossible to simulate on a computer, he has got to find a place in the brain where this new OR physics could be relevant. We have been thinking of brains until now chiefly in terms of neurons and the connections between them, but Penrose is well aware that at this comparatively large-scale level, it is hard to see how the subtle quantum effects he proposes can be of any great significance. But, he says, if we look more closely at what cells are made of, especially at tiny components called microtubules, we may find a possible site in the brain for OR to occur. Microtubules are a part of the cytoskeleton, a kind of internal
scaffolding that supports the cell walls of neurons and many other cells in the body. They are long tubular structures that can grow or shrink, and they form interconnections between neighboring cells. There is evidence of a link with consciousness in their probable involvement in the reversible loss of consciousness associated with anesthesia, which is where Stuart Hameroff's original professional interest in them lies. But when Hameroff first drew Penrose's attention to these tiny structures, what interested Penrose most about them was that—as their name implies—they are tubes. There are various indications from physics that tubes of this size are able to maintain quantum coherence and to keep anything inside them reasonably isolated from the outside. We have seen the need for such isolation from the random effect of the environment if Penrose's proposed OR effects are not to be swamped. An added bonus, which makes this seem like a very promising area for his search, is the presence of the tubulin proteins of which microtubules are constructed. These protein molecules can apparently have two different physical states, which might be capable of a kind of "superposition" like that envisaged for quantum entities. In this case, the tubulin dimers (as they are called) might provide a mechanism for scaling up the nonrandom (coherent) quantum effects that Penrose postulates, and the cylindrical structure of the microtubules provides the necessary environmental isolation for OR to occur in the brain. But quantum coherence along one microtubule is not enough. Penrose needs to have this coherence linking one microtubule to the next to reach an effect of the required scale. As already mentioned, microtubules do form an interconnecting network across cells, but whether this network or any individual microtubule actually supports quantum coherence is unknown. Penrose's claim is that if quantum coherence does exist on this large scale, there will be a reduction according to his new OR procedure. He makes an argument that this reduction process is connected with quantum gravity, but its exact nature is unknown because it lies in a part of fundamental physics we do not yet understand. He further argues that both this quantum gravity process and conscious understanding must be noncomputational (see Chapter 6) and that this coincidence supports the claim that consciousness arises from his proposed quantum gravity process of objective reduction. All this sounds very speculative, and it is hardly surprising that the scientific community has remained skeptical. However, we saw in the opening chapter that physical science at any given time is not all-knowing and can even find itself taking up ideas it once rejected. Before
writing off Penrose and Hameroff completely, let us close this chapter with another cautionary tale from the history of quantum theory, which illustrates the seesaw nature of scientific progress. In 1935 Albert Einstein, together with two younger colleagues at Princeton, Nathan Rosen and Boris Podolsky, first described the prediction of quantum mechanics that Einstein called "spooky action at a distance" (spukhafte Fernwirkungen) and that we now call nonlocality (cited in Born 1971, letter from Einstein to Max Born dated March 1947). At that time, they could only imagine the experiment that forms the context of their prediction, and it was nearly fifty years before advances in technology enabled a version of it to be carried out for real by physicist Alain Aspect and his collaborators at the University of Paris. But the thought experiment was original to them, and it is known to history by their three initials, EPR. Its essentials can be described in terms of a number of different experimental settings. A description commonly used today starts with the production of pairs of photons sharing a single wave function—a kind of identical twin situation, except that these twins are more like mirror images of each other—and lets them fly off in opposite directions. They are a genuine pair, not one photon allegedly in two places. Each is traveling at the speed of light—a photon is, after all, a particle of light—so the distance between them is growing at twice that speed. Since neither energy nor matter nor information can travel faster than the speed of light (as Einstein had shown), their speed effectively prevents any possibility of information passing from one of the photon twins to the other. Yet no matter how far apart they are, observing either one of them not only fixes the state of that one (by collapse of the wave function) but of the other one as well. Einstein claimed, since his own relativity theory ruled out faster-than-light communication, that the state of the photons had to be definite before measurement and that QM must therefore be an incomplete theory. That is what is known as the EPR argument. It does not say that quantum theory is wrong, but it does claim to prove that it is incomplete. Despite the regard in which Einstein was held, the EPR view was always controversial. One reason was that our mathematical friend John von Neumann had three years earlier produced a proof that demonstrated the exact opposite, namely that any theory attributing definite states to unobserved quantum entities must end up contradicting itself (von Neumann 1932/1955). However, the proof in the form von Neumann had offered it was itself due for a bumpy ride. In the 1930s it was shown to contain flaws, and it was only in 1967, ten years after his death, that a rigorous version of it—the Kochen-Specker theorem—was published by two other scholars.
Even at the end of the twentieth century, its conclusions were not accepted by all mathematicians (Wick 1996, 67–69). In the meantime, the Irish theoretical physicist John Bell, while on sabbatical leave in 1964, set himself the holiday job of once and for all proving Einstein right. But fate was against him, and he found that he had proved the opposite. "The result was the reverse of what I had hoped," he admitted. "But I was delighted, in a region of wooliness and obscurity, to have come upon something hard and clear" (quoted in Herbert 1985, 212). What Bell actually proved was not that Einstein was wrong but that even if he was right—even if the Copenhagen view of QM was incomplete and there was indeed an observer-independent reality, even if photons and the like really did have definite (though unknown) states before they were measured—then that reality had to be "nonlocal." Nonlocality means that objects cannot always be treated as separate even if they appear to be observed at different positions in space. There can be correlations between them that are reminiscent of the superposition state from which they emerged. This is the direct denial of one of the doctrines underpinning classical physics, which said that matter is composed of microscopic particles subject only to "local" interaction, by which is meant the physical influence of their immediate neighbors. But—and at last we come back to the science of consciousness—if reality is nonlocal, then there is a possibility, even within a physicalist framework, of correlations additional to local influences, ones that theoretically are completely determined, even if they are not completely known to us. Some see in the process of the reduction of a nonlocal superposition state the key that opens up again the possibility of genuine choice and free will. For them at least, twentieth-century physics has restored credibility to an aspect of our consciousness that an earlier deterministic science seemed to have taken away forever.
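For readers who want to see the numbers behind Bell's result, the sketch below evaluates the standard CHSH combination of correlations for entangled photon pairs. The correlation formula and the analyzer angles are textbook values, not anything drawn from Bell's own work; any "local" theory of the Einstein kind is bound by |S| <= 2, and the quantum prediction exceeds it.

    import numpy as np

    # Quantum prediction for the polarization correlation between
    # entangled photons measured by analyzers at angles a and b
    # (standard textbook formula).
    def E(a, b):
        return np.cos(2 * (a - b))

    # The usual CHSH analyzer settings: 0, 45, 22.5, and 67.5 degrees.
    a1, a2 = 0.0, np.pi / 4
    b1, b2 = np.pi / 8, 3 * np.pi / 8

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(S)   # about 2.828, i.e. 2 * sqrt(2)
    # Any "local" theory, one restricted to the influence of immediate
    # neighbors, must give |S| <= 2. Aspect's experiments matched the
    # quantum value, so reality, if definite, must be nonlocal.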
10
Decision Time
Here is a nightmare scenario: Every time you decide to do something, it is not really your choice at all. The decision has in fact been taken a moment earlier without your knowing it, and what you think of as your own spontaneous action is in fact just the necessary consequence of a chain of events already under way. Free will, self-control, personal choice—whatever you like to call it—is an illusion. Philosophers and theologians have argued for centuries over whether or not everything that happens in the universe is determined beforehand. But it was all talk. Then twenty years ago, in 1982, a professor at the University of California School of Medicine at San Francisco calmly announced that when we make voluntary movements, our brains have already set events in motion up to half a second before we decide to act. The professor in question was a neurophysiologist named Benjamin Libet. Now long retired and in his eighties, he is still a controversial figure, writing articles and arguing with critics about the results of his experiments. They remain a bone of contention, and there is even a researcher at the Institute of Cognitive Neuroscience in London, Dr. Patrick Haggard, who has recently been trying to replicate and check out some of Libet's work (Haggard and Libet 2001). I should say at once that Libet himself has never accepted the conclusion, drawn by many others, that his results showed freedom of choice to be an illusion. But the arguments go on, and they serve to highlight a wider problem that stalks the study of consciousness. That problem is time. In this set of experiments, Libet asked his subjects to flex their wrists at random intervals of their own choosing. He also asked them to note the position of a spot on a revolving dial at the precise moment they decided to make each flexing movement. Meanwhile, he was monitoring their brain activity with an electroencephalograph (EEG). He then compared the time at which they reported making
the decision (calculated from the position of the spot on the revolving dial) with the time that their EEG indicated the "instruction" to flex the muscles of the wrist had been sent. His results consistently showed that a small but perceptible time before the subjects made the conscious decision to act, their brain had already begun the physical processes that would culminate in the action. Libet and his colleagues found that volitional acts were preceded by an electrical readiness potential (RP) that arose in the brain some 350 milliseconds (msec; about one-third of a second) before the conscious decision to act was experienced. In other words, our conscious decision to act follows the nonconscious initiation of the action. On the face of it, this sequence of events appears to be a straightforward experimental refutation of conscious free will. However, Libet's experiments on the timing of brain events and consciousness, more of which are described below, have generated more controversy and conflicting interpretations than any other work in cognitive neuroscience. The published arguments include an extended symposium in the prestigious journal Behavioral and Brain Sciences in 1985 and a long discussion in Daniel Dennett's popular book Consciousness Explained (1991). Even as I put the finishing touches to this chapter in the fall of 2002, there arrived on my desk the latest issue of another leading journal, Consciousness and Cognition, which was given over entirely to articles on Libet's data and "timing relations between brain and world." As already noted, Libet himself does not conclude that his work requires us to jettison the idea of free will because he feels that a conscious brain can (and when appropriate does) exercise a veto over a "volitional" activity that is already in train but not yet carried through to completion (Libet 1999, 51–53). This is possible because conscious awareness of the decision to act, although delayed until after the brain's readiness potential starts the process, does arise 200 msec before the muscle acts. That gives enough time, says Libet, to abort the act. Another angle is taken by David Hodgson, a judge of appeal of the Supreme Court of New South Wales, Australia, whose role in the justice system makes him a strong advocate of personal free will and responsibility. He points out that even if a particular action has become inevitable, Libet's experiments do not preclude a role for free will in shaping and modifying the nature of that action. He draws a parallel with the case of the concert pianist, whose fingers are programmed by practice to hit certain notes "unconsciously" but who nevertheless concentrates intensely so as to convey—by the way the notes are played—his conscious feelings at the instant of performance. According to Hodgson, a similar combination of automatic
and conscious control might be a feature of all voluntary acts (Hodgson 2002).
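Laying Libet's figures out on a single clock makes the sequence easier to hold in mind. The sketch below does nothing but the arithmetic implied by the numbers just quoted; the variable names are mine, and individual subjects varied around these averages.

    # Libet's timing figures on one clock (milliseconds).
    # Convention: 0 = the moment the muscle acts.

    muscle_acts = 0
    conscious_decision = muscle_acts - 200           # felt ~200 msec earlier
    readiness_potential = conscious_decision - 350   # RP ~350 msec before that

    print("readiness potential begins:", readiness_potential, "msec")  # -550
    print("conscious decision felt:   ", conscious_decision, "msec")   # -200
    print("muscle acts:               ", muscle_acts, "msec")          # 0

    # The window Libet says is left for a conscious veto:
    print("veto window:", muscle_acts - conscious_decision, "msec")    # 200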
The Timing of Experience

Libet's 1982 publication was not his first controversial paper on the topic of time and consciousness. In earlier work done in the 1960s, Libet used a technique similar to that employed by Wilder Penfield, the brain surgeon who would chat to his patients while performing open brain surgery under local anesthetic (see Chapter 2). Libet himself was not a surgeon, but he had a friend, Bertram Feinstein, who was and who persuaded some of his patients to take part in Libet's tests (see McCrone 1999, 124). It is well known that when we feel a sensation in, say, our right hand, this feeling is matched by activity in a certain part of the brain, which is correlated in some way with that hand. And if that part of our brain is artificially stimulated by a mild electric current, then we feel as if it is our hand, not our head, that is being stimulated. This phenomenon accounts for the distressing "phantom limb" effect, in which amputees continue to experience pain in—as it seems to them—the limb they no longer have. As with Penfield, Libet carried out his experiments with the cooperation of conscious patients who could talk to him and tell him what they felt. In a typical procedure, one hand—let's say the right hand—was stimulated by a very mild electric shock to the hand itself. For the other hand—in this case the left—he applied a very mild electric charge to the appropriate part of the brain. What the patient felt in both cases was a tingling in the hand. However, when the brain was stimulated directly, they reported feeling nothing at all unless the current was allowed to remain on for at least half a second or thereabouts (the exact length of time varied from person to person, but it was consistent for each individual, and half a second, that is, 500 milliseconds, was the average). This half-second threshold was still obtained if the electrical stimulus was applied to an earlier point in the brain's sensory pathway—say, the thalamus—rather than the cortex. But if the hand itself was stimulated, the sensation registered no matter how brief the period of stimulation. That in itself was strange. The next thing Libet discovered was even more unexpected. He tried stimulating both hands simultaneously, in the one case applying the stimulus to the hand itself and in the other applying it directly to the relevant part of the cortex. Then he simply asked the subjects to say which hand had tingled first, left or right. Assuming that it was the brain activity that actually caused the tingling to be felt and knowing
that it takes a small but finite length of time for the electrical or chemical message to travel from the hand to the brain (about 20 milliseconds), one would expect the tingling in the hand that was itself stimulated to be felt a fraction of a second later than the tingling in the hand where the stimulation was directly on the brain. In fact, the opposite was the case. Even if Libet began stimulating the cortex before the hand was stimulated, providing it was not more than half a second before, then the hand stimulation was felt first (see Libet et al. 1964). When Libet published these results in the mid-1960s, many neuroscientists frankly disbelieved them (McCrone 1999, 124). Libet stuck by his findings, but even he could not adequately explain them. His measurements of the electrical activity in the brain agreed with the expected outcome: the neurons that were stimulated directly "reached neuronal adequacy" (Libet's own term for the physical state necessary for conscious experience) before those where the message had to reach them via the stimulation of the hand. But the patients consistently reported that their conscious awareness of the two sensations came the other way about. Libet himself drew two conclusions from all this: first, that there is a substantial delay before the brain's activity, initiated by a sensory stimulus, achieves the required neuronal adequacy for eliciting any resulting conscious sensory experience; and second, that after neuronal adequacy is achieved, the subjective timing of the experience is (automatically) referred backward in time. The mind measures the amount of backward referral needed by utilizing as a "timing signal" the objective time of the initial cortical response to the sensory stimulus. Put into simple English, these two conclusions add up to this: it takes the mind-brain half a second to wake up to what's going on, but it's able to fool itself into thinking that it has known the facts all along. Critics said this "canceling out" hypothesis was suspicious and unnecessary. It was much simpler to assume that we are conscious of things when we think we are and that both the half-second delay and the backward referral were figments of Libet's imagination. Fortunately for consciousness research, Libet was not the only person interested in time-related experiments. Consider first a familiar sight: two alternately flashing lights, such as are used as warning signals at road/rail intersections or on emergency service vehicles. We know there are in fact two bulbs, each in a fixed position, lighting up and going out in turn; but in certain conditions it appears to us that there is a single bulb that remains alight all the time but rapidly switches position to and fro. It is the same kind of illusion that enables
Fortunately for consciousness research, Libet was not the only person interested in time-related experiments. Consider first a familiar sight: two alternately flashing lights, such as are used as warning signals at road/rail intersections or on emergency service vehicles. We know there are in fact two bulbs, each in a fixed position, lighting up and going out in turn; but in certain conditions it appears to us that there is a single bulb that remains alight all the time but rapidly switches position to and fro. It is the same kind of illusion that enables the series of still frames in a movie film to give the impression of smooth movement and that makes the lighting displays on buildings such as Caesar’s Palace at Las Vegas so effective. This phenomenon has been of great interest to psychologists of perception ever since Max Wertheimer first studied it nearly 100 years ago. He gave it the name “phi” (pronounced like “fine” without the n). The key question is this: When do we “see” the light “move”? Under laboratory conditions, two lights were set up so that one of the bulbs—say, the left-hand one—lit up for one-fifth of a second (200 msec) and then went out. After an even shorter time—just 50 msec—the right-hand bulb came on for 200 msec and then went out. After another 50 msec, the left-hand light came on again for another 200 msec, and so on repeatedly. Subjects taking part in the experiment were asked to describe in detail what they observed. As expected, they reported seeing a single light moving back and forth (four times per second). Since in fact the light never did move—one light went off, and after a brief period of darkness another one in a different place came on—it is interesting to ask this: When exactly did the subjects think (erroneously) that the bulb had started to move? At what moment did they actually experience the illusory movement? Common sense would assume that they saw the left-hand bulb light up, and one-twentieth of a second after it went out, they really did see the right-hand bulb light up. If both these sightings are assumed to have taken place at the “correct” time, then the illusory movement must have been “seen” (that is, imagined) in that brief twentieth-of-a-second gap when neither bulb was alight. But there is a problem with this answer. When the experiment is done with the second bulb never lighting up, people only see the left-hand light flashing on and off in its fixed position. They never have the sense that it starts to move off to the right and then comes back. So the illusion of the moving light does not occur unless both the bulbs are working.
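Since the timings matter here, it may help to lay the stimulus schedule out explicitly. The sketch below (again Python, and again purely illustrative; the function name is mine) reproduces the on/off pattern just described and makes the crucial point visible: at no moment is any bulb lit between the two positions.

    # The phi stimulus schedule: each bulb lights for 200 ms, with a
    # 50 ms dark gap before the other comes on.
    ON, GAP = 200, 50          # milliseconds
    CYCLE = 2 * (ON + GAP)     # one full left-right alternation: 500 ms

    def bulb_state(t):
        """Which bulb, if any, is physically lit at time t (in ms)?"""
        phase = t % CYCLE
        if phase < ON:
            return "left on"
        if phase < ON + GAP:
            return "dark"      # the gap in which the movement is 'seen'
        if phase < ON + GAP + ON:
            return "right on"
        return "dark"

    for t in range(0, 500, 50):
        print("%3d ms: %s" % (t, bulb_state(t)))
    # The printout contains only 'left on', 'dark', and 'right on'.
    # There is never a state with a light in the space in between,
    # yet that is exactly where observers report seeing one.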
Max Wertheimer’s theories on perception and memory put American psychology fifty years or more ahead of where it had previously been. (National Library of Medicine)
In other words, it seems that we only experience the illusion that the left-hand light is moving to the right-hand position after the right-hand bulb has already lit up. Yet subjects taking part in the experiment insisted that they did not simply see the light in two different places, first on the left and then on the right. They were certain that they actually saw it move across the gap, briefly occupying the space in between, though in fact there was never a light in that gap at any stage in the process. That is very curious. In the 1970s two psychologists, Paul Kolers and Michael von Grünau, introduced a new element into the phi experiment. They wondered whether the phenomenon would still work if the two lights were of different colors, say red and green. The answer they found was that it did. More than that, as the red left-hand light made its illusory journey toward the right-hand position, it appeared to change color in the middle, so that it “arrived” at the right-hand position correctly colored green to match the right-hand bulb. These findings meant that if (as common sense suggested) the brain of the observer was creating the illusory movement in the twentieth-of-a-second gap between the one light going off and the other coming on, then it was having to predict not one but three things: the fact that the second light was going to come on at all, the position of the second light, and the color of the second light. Such predictive power looks highly unlikely. The alternative explanation is that the brain reacts after the second light has come on, and uses the information provided by the second light to create the illusion of movement and color change retrospectively. That also sounds somewhat fanciful, but it is reminiscent of Libet’s claims for backward referral. Neither of these explanations of the “color phi” phenomenon is exactly easy to accept (see Kolers and von Grünau 1976). But whatever the true explanation of the phi phenomenon, its very existence—which is beyond dispute—shows that the relation between what “goes on” in the world out there and what we “see” going on is far from simple. And not least among the complications is the question of time. How does the time we experience things relate to the time they actually happen? To explore this question further, Libet experimented with a related phenomenon known as backward masking (see McCrone 1999). It was another oddity that had been known about for years but never explained. As a first stage of the experiment, subjects were given a weak stimulus to make sure they were aware of it. It might be a dim light or a faint sound or a mild tap on the back of the hand. In the second stage, the same weak stimulus was given as before, but
now it was followed almost immediately—about one-tenth of a second, or 100 milliseconds, later—by a stronger version of a similar stimulus, such as a bright flash or a loud bang. The result was that the subject only reported the later and stronger stimulus. The earlier weaker one was hidden or “masked.” It never entered consciousness. (Other fascinating experiments showed that although masked sensations never entered consciousness, they could nonetheless affect subsequent behavior, but that is a story for another time.) The final twist in the backward-masking experiment was to apply a third stimulus 100 milliseconds after the second one. As might be expected, it in its turn prevented the second signal from coming into consciousness. But that was not all. Not only was the second stimulus itself masked by the third, but its own masking property was destroyed, and the initial weak signal was observed as well as the third one. Libet carried out his own variation on the traditional backward-masking routine. He applied the initial stimulus to the hand, but the second one was applied directly to the cortex at the point corresponding to that same hand. The usual masking effect was noted, but with one significant change. In the ordinary experiment, the masking effect was strongest if the second stimulus came 100 milliseconds after the first. Beyond that, the effect was lessened, and beyond 200 milliseconds, it virtually petered out. But Libet found that he could stimulate the cortex directly up to 300 milliseconds after the initial stimulus to the hand and still get complete masking (Libet et al. 1972). According to Libet, these results showed that at least 300 milliseconds after the initial stimulus, no sensation of it could have entered consciousness. But he accepted that the evidence might be interpreted differently. It was possible that the weak sensation had registered briefly and then had been swamped by the much larger one that followed, so much so that all trace of it was wiped from the memory. (Some years before his masking work, just after publishing his initial results on the half-second delay, he had conceded: “There may well be an immediate but ephemeral kind of experience of awareness which is not retained for recall at conscious levels of experience.” See Libet 1965, 78.) However, he had yet another variation on the procedure that eliminated this “swamping” explanation. Rather than follow the initial weak stimulus with a strong one that would mask it, he followed it with another weak one in the hope that it would enhance it. And it worked. If the weak stimulus to the hand was followed by a weak stimulus to the corresponding spot on the cortex, the subject reported a single strong stimulus, not two weak ones. Significantly, the stimulus at the cortex could be given as much
as 400 milliseconds after the initial one on the hand, and the effect would still work. Even more significantly, the subject would report feeling the single strong sensation at the time of the first signal, not the second. As with the phi effect, the event was experienced before it occurred. Philosopher Daniel Dennett, who is one of Libet’s severest critics, discusses these findings in his book Consciousness Explained and in “Time and the Observer,” an important article coauthored with Marcel Kinsbourne and published in 1991, the same year as the book (see Document 8, this volume). Dennett’s own view is that consciousness is constantly changing and evolving and that it is a mistake to imagine that there is ever one specific time when a particular event becomes conscious. To use his own invented term, there are “multiple drafts” of any given conscious experience, each constantly giving way to the next, and none of which can claim to be the definitive fact of the matter. According to Dennett, although hardly anyone these days accepts Cartesian dualism as a correct explanation of the mind-body problem, they still have a concept of conscious perception that depends on that outdated theory. He calls this concept the “Cartesian theater,” and he blames it for most of the troubles afflicting the scientific study of consciousness. Its key idea is that there is in the brain some place or some mechanism that marks the definitive boundary between what has been consciously experienced and what has not. It is like the screen at a movie theater, with an ever-changing series of images being presented to consciousness in a set order. If sensation A arrives at the screen first (no matter how long it took to get there), then it will be experienced as happening before another sensation, B, that arrived later. This is the kind of notion of the timing of conscious experience that lies behind most discussion of Libet’s results, the phi effect, and so on. But, insists Dennett, that whole way of picturing what happens is totally and hopelessly mistaken. There is no Cartesian theater in the brain, and there is no clear-cut division in consciousness between what is “already observed” and what is “not yet observed.” This is because there is no one and no thing—other than the whole brain itself—to act as the conscious observer. Dennett makes his case against the “already observed” and “not yet observed” distinction by first assuming it to be true and then showing that this assumption leads to absurdities in cases like the phi phenomenon. He does not deny that the distinction works at large time scales. For example, I can see my coffee cup on my desk at this moment and remember quite accurately that it was not there to be seen an hour ago because I only made the coffee in the last ten
minutes. But he does deny that this same clear distinction can be applied to changes happening only fractions of a second apart. Here is the argument. In the color phi experiment, the observer of the flashing red and green lights reports seeing the light move and change color at a point midway between the two bulbs. Since neither light in fact moves at all, this observation is clearly false. As a matter of objective fact, the observer was mistaken. The question is: Where and how did the mistake occur? In describing the experiment earlier, I discounted the possibility of the brain magically predicting the position and color of the second light, and so does Dennett. We are agreed that the mistaken observation of the moving light has to result in some way from information reaching the brain and giving the color and position of both bulbs. I suggested above that the brain might react after the second light has come on and use the information provided by both lights to create the illusion of movement and color change. Dennett analyzes this suggestion in more detail. Assuming (for argument’s sake) that the “already observed” and “not yet observed” distinction holds, he asks about the order in which things were seen. One possibility is that at the times the stationary red and green lights came on, each was consciously observed, one after the other with a dark period in between; and then the false memory of the moving light’s having been seen was inserted retrospectively. In other words, what one saw at the time was an accurate account of the external events, but what one remembers seeing afterwards is a mistaken account, in which the red light moved and turned into a green light. Dennett calls this kind of falsification “Orwellian” because it reminds him of George Orwell’s novel 1984, in which the Ministry of Truth constantly rewrote history. In the novel, this meant that what at the time had been the true and publicly known account of what had taken place was forever denied to those who looked back on past events at a later time. Dennett contrasts this kind of manipulation with what he calls the Stalinesque approach. Joseph Stalin did not rewrite history after the events were known about; he ensured that the truth never became public in the first place. He did this by destroying any evidence that might betray what had actually happened and substituting false testimony and bogus confessions at high-profile show trials. These events constituted the public record from start to finish. They were not rewritten history; they were the only history. But they were false history nonetheless. So on Dennett’s Stalinesque account of the color phi experiment, the second light, the green one on the right, was never consciously seen at all at the time it first lit up. It lit up in the external world, and information to that effect reached the brain via the
eyes, but it never entered consciousness. Before that could happen, so this version of the story goes, the information went into a kind of preconscious editing room in the brain, where the false image of the light midway between the two bulbs was inserted, so that it was already there and ready to be presented to consciousness at the “appropriate time,” after the image of the red light and ahead of the image of the green light. Now comes the nub of Dennett’s argument against the Cartesian theater and the “already observed” and “not yet observed” distinction. If that approach were true, then both the Orwellian and Stalinesque styles of falsification would be feasible, and—crucially—it should be possible to tell which had been employed in any given situation. As things are, that is not the case. In the phi experiment, when the intervals are kept to the order of a tenth of a second, there is no way in which the subjects themselves can tell which of the two possible mechanisms has fooled them into seeing nonexistent movement. Neither is it possible for experimental psychologists and neuroscientists, observing those subjects, to say which of the two errors—a mistaken initial observation (that is, Stalinesque) or a false memory of an initially accurate observation (that is, Orwellian)—has taken place. The difference between the two means of falsification, a difference that on large time scales is so obvious, is a difference that makes no difference in the millisecond world of the brain’s perceptual system. Therefore, trumpets Dennett, at this level the “already observed” and “not yet observed” distinction makes no difference either. It does not exist, and so neither does the Cartesian theater exist. Released from the tyranny, as he sees it, of the theater model of consciousness, Dennett is free to expound his alternative multiple drafts model (MDM) of conscious experience. We saw in Chapter 3 that neuroscientists have been forced to accept that their old idea of the visual system as a serial linear process, culminating in a single place in the brain where the whole visual scene came together, was mistaken. Instead, there are known to be many parallel pathways handling different information in different ways and at different speeds. But the search for the neural correlate of consciousness, tracked in Chapter 4, showed that having abandoned the idea of a single place “where” it all comes together, scientists transferred their allegiance to the idea of there being a single time “when” it all comes together. Francis Crick’s 40-hertz hypothesis is an example of this. What Dennett’s MDM theory says is that the single time is as illusory as the single place. “Multiple drafts” is a metaphor taken from the world of writing and publishing, a world that Dennett has known well over many
years. In the old days—up to about 1980—there was a clear-cut distinction between a prepublication draft manuscript, which was a single document still open to revision by the author or editor, and the published book, which represented the final and authoritative version of the text. Today all that is changed. As I write, I have on my home computer at this moment five different drafts of the present chapter—multiple drafts, if you like—and there are two more on my machine at the office. Keeping track of them, cross-referencing and updating them, is a painstaking business. In a similar way, Dennett envisages the various elements of the brain’s perceptual system working in parallel, cross-referencing and updating, and sometimes having to handle conflicting signals, as in the phi experiment. But things get worse, because nowadays there is not even a single publication date of a single fixed text of a book, when it is presented to the public. Authors will often “publish” final drafts of chapters on their websites or email them to colleagues for comment and advice, both of which will be followed by further revision. This putting of material into the public domain is, in Dennett’s metaphor, the equivalent of a perception being presented to consciousness. With the old Cartesian theater model of the mind, as with the old way of book publishing, the concept was of a specific time and place of presenting to consciousness or making public. With the multiple drafts model, as with present-day publishing, there is no such precision. One can now see why Dennett is at odds with Libet. It is not just that he disagrees with this or that interpretation of the results. He finds the whole notion of precise timing in relation to mental events a misconception. But not everyone has been against Libet. His work was warmly applauded and encouraged by the Nobel laureate Sir John Eccles (1903–1997), who knew more about the workings of neurons and their synapses than almost anyone then alive (Eccles 1994). But Eccles was treated as a renegade among neuroscientists because he adhered to an unfashionable Cartesian dualism. In his view, the conscious mind or soul existed independently of the brain and was the leader in their partnership. He explained Libet’s half-second delay as the time it took the poor old biological brain to catch up with what the nonmaterial mind was doing. Eccles was a faithful Catholic, and being regarded as his protégé was a mixed blessing for Libet. It probably accounted for some of the resistance to his results by aggressive atheists such as Dennett. This background all added spice to the controversy stirred up by Libet’s second round of experiments on the timing of mental events, those which related to deliberate intention and therefore to the exercise of free will.
Nobel laureate Sir John Eccles explained Benjamin Libet’s half-second delay as the time it took the biological brain to catch up with what the nonmaterial mind was doing. (National Library of Medicine)
One clue to the cause of the timing paradoxes we have been considering is to note their strong links to a traditional input/output or perception/reaction model of human behavior and to treat them as evidence against such a model. In Chapter 7, we were looking at a number of writers—like philosopher Susan Hurley and neuroscientist Rodolfo Llinás—who reject this interpretation of our relation with the outside world. Indeed, they would not be too happy with the whole idea of a sharp distinction between us and the outside world. Rather, they see conscious organisms as “embodied and embedded in their environment.” Hurley was uncomfortable with any theoretical scheme that identified the personal distinction between perception and action with the subpersonal categories of causal input and output. She preferred what she called a “two-level interdependence” view of perception and action, which appealed to a system of complex dynamic feedback and treated perception and action as mutually and symmetrically interdependent. This approach, like Dennett’s, abandons the quest for a single crucial moment when it all “comes together.” Llinás, meanwhile, conjectured that the whole development of the brain and nervous system in animals was closely related
to their mobility. He explained the body’s motor and sensory systems as two parallel elements in a single overall process that links the animal’s control system to what is happening in its environment. In this view, mental states have from the very beginning been the internalization of movement. So it cannot be the case that our actions are simply a response to our perceptions. Anticipation is essential. This way of talking may hold the key to understanding Libet’s timing paradoxes, as well as explaining how those tennis superstars hit the ball before they have had time to see it.
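The force of the anticipation point can be shown with a toy calculation. In the sketch below, which is my illustration rather than anything Llinás himself offers, a ball approaches at tennis-serve speed while the player’s sensory report of its position lags an assumed tenth of a second behind reality. A purely reactive aim misses by metres; an aim that extrapolates the stale report lands on target. The speed, the lag, and the function names are all illustrative round figures.

    # Why anticipation matters: sensory reports lag behind the world.
    SENSORY_DELAY = 0.1   # seconds between an event and its perception
    BALL_SPEED = 30.0     # metres per second, roughly a fast serve

    def reactive_aim(reported_pos):
        """Aim straight at the (stale) reported position."""
        return reported_pos

    def predictive_aim(reported_pos):
        """Extrapolate the report forward by the known lag."""
        return reported_pos + BALL_SPEED * SENSORY_DELAY

    true_pos = 24.0                                  # where the ball is now
    report = true_pos - BALL_SPEED * SENSORY_DELAY   # where it was 100 ms ago

    print("ball is now at  %.1f m" % true_pos)
    print("reactive aim:   %.1f m (3 m behind the ball)" % reactive_aim(report))
    print("predictive aim: %.1f m (on target)" % predictive_aim(report))

A control system built along the second line does not wait to perceive and then respond; it acts on where the world is about to be, which is just the kind of anticipation Llinás has in mind.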
Free Will and Causality

There still remains a question as to where all this leaves the reality of human free will (volition). Volitional control of our actions depends upon a balance between reliability and flexibility in relation to cause and effect. Without reliability in physical actions, all outcomes would be arbitrary; without some flexibility, all outcomes would be predetermined. In neither case would there be any way of putting one’s own freely made decision into effect. So much is clear, yet establishing that precarious balance has proved exceedingly difficult. Even Immanuel Kant himself, the greatest of the Enlightenment philosophers, declared the “freedom of the will” to be a problem lying beyond the powers of the human intellect. Western debate on this topic reflects a complex ethical and religious inheritance. Insights from Greek philosophy and Hebrew scripture were combined and filtered through Christian and Jewish traditions that developed in late antiquity and through the Middle Ages. These formed the context for the Enlightenment, whose dominant thinkers in turn provide the backdrop against which today’s scholars act and react (see Chapter 1). The main stumbling block in the way of Kant’s understanding free will was that the Newtonian science of his day was completely deterministic. He naturally felt this was incompatible with freedom of action; and yet without freedom of action there could be no moral responsibility, no ethics. But Kant was quite clear that we do all have the experience of facing moral dilemmas and making choices. He therefore concluded that physical determinism must hold sway in the world of appearances (the scientific world that we can observe and measure) but that there might still be room in the inaccessible world of things-in-themselves for free will and choice. He could not prove it to be the case, but he could at least show it was not impossible. Although—even if it were true—it would make no practical difference to the observed world, it at least enabled him to reconcile the theory
of deterministic science with the personal experience of moral choice (Kant 1788/1992). Another Enlightenment figure who made an important contribution to the debate on causality and thus on free will was the Scottish philosopher David Hume (1711–1776). When we say, “P causes Q,” we intend to claim that the latter necessarily follows from the former, that if P occurs, then Q must also occur. But Hume pointed out that the most we can actually know is that in the past Q has followed P in all observed cases. There is no absolute guarantee that it will happen again next time. In other words, the so-called laws of nature, or physical laws, are no more than descriptions of observed regularities in the way things happen. Even though many philosophers have challenged this claim and his efforts to extend the analysis to human agency have not been widely accepted, Hume’s skepticism still influences all discussion of causality and, by extension, of volition (Hume 1748/1999). Hume notwithstanding, so far as the physical sciences are concerned, the universe still appears to be a thoroughly deterministic system, even if it is not totally predictable. For instance, our mathematical understanding of nonlinear systems with their “chaotic interactions” and “butterfly effects” is entirely deterministic, albeit unimaginably complex. Chaos theory, as it is misleadingly called, is not truly chaotic at all. Human limitations may mean that we cannot predict all events, but from that it does not follow that they are not predetermined. There is no loophole here for free will.
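The claim that a system can be entirely deterministic and yet unpredictable in practice is easily demonstrated. The following sketch uses the logistic map, a stock mathematical example of my choosing rather than one discussed in this book: the rule contains no randomness whatever, but two starting values differing by one part in a billion soon follow wholly unrelated paths.

    # Deterministic but unpredictable: the logistic map.
    def logistic(x, steps, r=4.0):
        """Iterate x -> r * x * (1 - x); the entire 'law of nature' here."""
        for _ in range(steps):
            x = r * x * (1.0 - x)
        return x

    a = 0.400000000    # two initial states differing by one part
    b = 0.400000001    # in a billion
    for n in (10, 30, 50):
        print("after %2d steps: %.6f vs %.6f" % (n, logistic(a, n), logistic(b, n)))
    # By step 50 the two trajectories bear no resemblance to each other,
    # though every step of both was fully predetermined. The
    # unpredictability reflects our ignorance of the exact starting
    # conditions, not any loophole in determinism.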
Another scientific theory sometimes invoked in support of free will is the Heisenberg uncertainty principle. Werner Heisenberg showed that at the quantum level of photons and electrons (see Chapter 9) each dynamic attribute is in a paired relationship with another, such that the more accurately one is known, the more uncertainty surrounds the other. The first and best-known such pair to be identified were position and momentum. However, even if Heisenberg’s principle does indicate genuine indeterminacy in the physical world, such indeterminacy would result only in randomness, and we have already seen that complete arbitrariness is no more conducive to free will than is complete determinism. So again, quantum uncertainty alone cannot explain truly volitional control of human action. Yet this scientific position remains contrary to our own experience, as it was to Kant’s: all of us (including physicists, eliminativist philosophers, and all other sworn enemies of folk psychology) have the direct experience that “we” are in the driving seat of our bodies and can steer them one way or the other through the exercise of our conscious wills. Not only is that our individual experience, but the entire ethical and legal systems of all societies are based on the distinction between some things that we do through choice—things, therefore, for which we can be held responsible—and other things that we do through necessity. One of the core principles of justice is the principle that, as a general rule, the state should not forcibly interfere with the freedom of its citizens. Only when the citizen has committed an offense—that is, has voluntarily broken a reasonable and publicly promulgated law—does it then become fair that he or she should be coerced or punished. Such a system works on the assumption that the citizen had the choice of either complying or not complying with the law and chose not to comply; so punishment to an appropriate extent is just. However, denial of free will undermines this principle. If a citizen who commits an offense does not have a real choice about the matter but is simply acting out the inevitable consequences of things that occurred before he or she was born, how can it be any more fair or just to restrict the freedom of this citizen than it is to restrict the freedom of another whose similarly caused actions, not involving any crime or misdemeanor, make that person’s freedom seem desirable? Already there are some neuroscientists, such as Colin Blakemore at Oxford University and Francis Crick in California, who are starting to draw some very radical conclusions as to how our legal system could be recast on a more scientific basis (Blakemore 1988; Crick 1994). At the center of the debate is the scientific threat to what philosophers have traditionally called the libertarian conception of free will. This is the belief that as human beings we really are the “ultimate creators and sustainers” of our own actions and destinies. It is sometimes defined as the ability to have “done or willed otherwise under identical internal and external conditions” and is generally thought to be incompatible with belief in a world governed by deterministic forces, whether those forces be supernatural—as in Calvinistic predestination—or physical. For this reason, it is often called “incompatibilist” free will. It is often assumed that this sort of free will corresponds more or less to the prevailing popular conception of human freedom, but some philosophers, such as Ted Honderich, Grote Professor of the Philosophy of Mind and Logic at University College London, have argued that there is no such single conception of free will universally held among ordinary people (Honderich 1993). Be that as it may, among professional philosophers—at least in the English-speaking world—it is the alternative conception of free will, known as compatibilist, that is more favored. This is the approach that attempts to reconcile our subjective sense of personal
control and moral choice with objective determinism. On this view, human will is normally free to the extent that we can all distinguish between doing something under duress (for example, because we are being physically threatened) and doing it unconstrained by such external factors. But this acknowledgment of the difference between free and forced decisions at a practical level is held to be compatible with a belief in ultimate determinism. “I can choose what to do, but I cannot choose what to choose” is one way of expressing this ambiguity. Daniel Dennett is typical of those philosophers who embrace a compatibilist position when he says that the only sort of free will “worth wanting”—one able to support our moral intuitions about praise, blame, punishment, and responsibility—is compatible with ultimate physical determinism. He compares it to living in a constrained space whose boundaries are so far away that we are never aware of them. To have a theoretically limited freedom whose limits we can in practice never see or reach is, he claims, as good as being ultimately free (Dennett 1984). But middle-ground compatibilists such as Dennett get attacked from two sides. On the one side there are libertarians, such as Robert Kane and David Hodgson, who strongly believe that individual responsibility and social justice can derive only from personal choices that are radically free from determining influences (Kane 1996; Hodgson 1998). On the other side stand what we might call the “hard determinists,” such as Colin Blakemore and to some extent Ted Honderich, who urge to the contrary that what is needed is some revision of our moral intuitions. If our burgeoning scientific self-understanding is out of alignment with, let us say, the appropriateness of retributive punishment, then so much the worse for punishment. Libertarians complain that compatibilists don’t in fact offer the sort of freedom most of us want, because most of us want to be able to lay blame and inflict retribution to avoid social anarchy. Hard determinists, meanwhile, along with most compatibilists, find that libertarian freedom—with its insistence on self-chosen, uncaused action—is either conceptually incoherent, empirically unfounded, or both. They accept the burden of showing that social life can survive and even prosper without assuming that human beings are the one exception to natural causality. The debate over free will has often been conducted in terms of naturalistic science versus a more or less supernaturalist conception of the self, something radically independent of the rest of nature. Because of the supposed serious consequences for religion, ethics, and social stability that would result from the jettisoning of the concept of
free will, a number of leading scientists and philosophers have pondered ways of reconciling the two sides of the argument. The English neuroscientist Charles Sherrington (1857–1952), for instance, argued at great length and on the basis of detailed physiological and anatomical data that brain processes simply cannot explain mental subjective phenomena, including conscious free will. He noted examples of processes that—in his opinion—must occur in the “mental sphere,” without any neural connections or processes becoming involved (Sherrington 1947, xxiv). His opinion was not the result of ignorant religious prejudice. Sherrington was among the leading physiologists of his day. He was a great admirer of Santiago Ramón y Cajal, bringing him to London in the 1890s to lecture at the prestigious Royal Society, and he knew all there was to know about neurons. Yet still he favored a dualist-interactionist view of conscious thought, finding the view that human beings consist of two fundamental elements inherently no less probable than the view that they consist of only one. He was never greatly troubled by Occam’s Razor. Although in a minority, Sherrington was not alone. One of his pupils, the same John Eccles who took Benjamin Libet under his wing, held a similar dualist-interactionist view. So did the influential philosopher of science Karl Popper (1902–1994), with whom Eccles wrote a widely read book on the subject, The Self and Its Brain, in 1977. In 1963 Eccles had shared a Nobel Prize for his experimental contribution to the understanding of synaptic transmission. Thirty years later, he applied his deep knowledge of the structural arrangement of the synaptic junctions to work out possible mechanisms for mediating brain/mind interactions, including that for free will. One of the problems faced by mind-brain interactionists is the
English neuroscientist Charles Sherrington argued that brain processes could not explain mental subjective phenomena, including conscious free will. (National Library of Medicine)
likelihood of infringing the law of conservation of energy. Eccles suggested that the probability of action at the synapse could be linked to quantum mechanical behavior and that this connection might offer a way of conserving energy while avoiding physicalism. He proposed the existence of units of mental experience called psychons. Psychons would interact with the brain’s nerve fibers so as to affect synaptic transmission and provide a basis for the action of free will. The interactive psychon part of his theory was not testable, as Eccles himself admitted, and the proposal was never taken up by others. More enduring was the idea that quantum theory might provide a key with which to unlock the mystery of free will. As mentioned earlier, it was accepted by all those who knew anything about the subject that one could not provide a quantum solution simply by invoking Heisenberg’s uncertainty principle. But a number of scientists (such as Henry Stapp) and philosophers (such as David Hodgson) still appeal to quantum effects in more subtle ways to provide a basis for libertarian free will. Unlike the ideas of Sherrington and Eccles, the present-day proposals are not dualistic, and they keep strictly within a naturalistic framework. As we saw in the last chapter, critics have claimed that quantum effects are likely to be swamped at the large-scale level of the warm, wet brain. Furthermore, there is a huge difference between quantum-mechanical randomness and the “flexible reliability” essential for the exercise of genuine self-control and free will. But I have already described in Chapter 9 the suggestions that have been made by Roger Penrose, as to how events at the quantum scale could make differences in actions directed by the brain, and by Henry Stapp, as to how such differences could exist between templates for possible action, from which a selection could be made. And David Hodgson has argued that even though quantum randomness might not of itself be the mechanism for free will, it may nonetheless leave a space in the physical process for the operation of rational choice. Another figure in this history is Roger Sperry (1913–1994), who made a pioneering study of “split-brain” patients, which led to the fundamental discovery that the left and right cerebral hemispheres could each gain knowledge not available to the other and consequently could produce different behavioral responses to the same overall situation. Early on, Sperry espoused a philosophical position that mind was indeed not reducible to the properties of the brain’s constituents. He argued that the mind emerged as a unique attribute of the brain’s processes and that the emerging mental phenomenon could in turn causally determine neural activities. This argument provided a basis for free will, but Sperry argued that mind could only
supervene on neural activities, not directly determine them, and that there still remained a deterministic aspect in all this. However, during the last years of his life, Sperry altered that position and argued in favor of the option that mind could causally affect neural functions in a nondetermined manner, which would produce the fully humanistic nature of mind-brain interaction that Sperry sought (Sperry 1976; see Doty 1998 for his later views; Document 7, this volume). Another proposal based on the concept of an emergent mind was put forward by Libet in 1994. He proposed the existence of a conscious mental field that could both unify subjective experience and potentially intervene in neural activities to provide a basis for free will. Libet provided an experimental design that could potentially test this field, something no other mind-brain theory had offered, but this difficult experiment still awaits execution. Given the immense complexity of the brain, it probably is not feasible to demonstrate unequivocally the presence or absence of volition through analysis of neurophysiological processes. Maybe further careful studies, including the parallel tracking of subjective experience, brain events, and volitional acts, will bring new insights, but the results will always be interpreted in terms of the dominant paradigm. If it is assumed that free will is an illusion, then our science will just underline this pretheoretical position. However, as Libet and some others point out, there really is no evidence available to draw such a strong conclusion. Since our experience is one of agency and free will, and since virtually all religious, ethical, cultural, and legal systems are based on such an assumption, there is strong reason to assume this position unless science unequivocally excludes it. Present-day physics neither provides unequivocally for the possibility of free will nor rules it out.
11
Dreams, Visions, and Art
A special fascination has always attended that state of consciousness we call dreaming. There is no shortage of vivid sights and sounds in dream consciousness, but with a few exceptions—such as the bell heard in the dream, which turns into the all-too-waking sound of the alarm clock—there seem to be no external stimuli to account for these perceptions. So the science of consciousness has two related questions about dream states: How and why do they arise? What is the origin of their often bizarre contents? As with other mental states, dreams can nowadays be investigated by a combination of subjective and objective techniques. With regard to the former, most of us have some recollection of our dreams—if we did not, we should not know that such phenomena existed—but typically we forget our dreams very quickly. Dream researchers have experimented by waking up subjects at different points in their sleep cycle and asking them what was passing through their mind prior to awakening. It has been found that there is more vivid recall at some points than at others, and investigating the physiological changes that take place in the course of the waking/sleeping cycle has become a major focus of research using objective measurement. The first major breakthrough came in the 1920s, when the electroencephalogram (EEG) was developed. As we saw in Chapter 2, this machine enables the brain’s pattern of electrical activity to be recorded by means of electrodes fixed to the outside of the head. Its advantages compared with some other technologies are that it is noninvasive (no surgery is needed) and very accurate in the matter of timing. It can tell exactly when activity is taking place, down to hundredths of a second. It is much less precise on the question of exactly where the activity is occurring in the brain. Fortunately, that question can often be answered by more recent methods, such as positron emission tomography (PET scanning) and functional magnetic resonance
imaging (fMRI). A modern sleep laboratory does not just measure brain activity. Other standard equipment includes the electrooculogram (EOG), which records eye movement, and the electromyogram (EMG), which records electrical activity in the muscles. One of the curious findings is that peak times for dreaming coincide with a complete cessation of general muscle tone and activity but a heightened degree of the rapid eye movements that give dream sleep its popular name of REM. The identification and naming of REM sleep as a distinctive kind of sleep was made as recently as 1953. No objective recording device has yet been devised so far as dream content is concerned. First-person reporting by the dreamer remains the only practical method of inquiry. To my knowledge, the only person to have challenged this assumption was one of my favorite biblical characters, the arch-skeptic King Nebuchadnezzar of Babylon in the Book of Daniel. You may recall that he had a disturbing dream and called his wise men to interpret it, but he refused to divulge the dream’s contents. “Any fool can make up an interpretation once they know the dream,” he told them. “If you’re really any good, then your source of wisdom should be able to tell you the contents as well as the meaning. So you tell me what the dream was, and I will trust your interpretation as well. But if you can’t tell me what the dream was, I shall know you are charlatans and I’ll chop your heads off!” (Or words to that effect.) It must have been a nasty moment for them, but Daniel pulled off the trick, telling the king both the contents and the meaning of his dream, and they were all saved. It’s a wonderful story, but I am afraid that it’s a long way from reality. However, brain imaging studies and research into the neural correlates of the contents of consciousness (see Chapter 4) are bringing closer the day when it will be possible to tell what a person is thinking or dreaming about without asking them. For instance, the activation of a certain area of the cortex would already indicate to a suitably trained neuroscientist that someone was seeing or imagining or dreaming of a face (Frith and Gallagher 2002, 58). At present, we cannot say whose face, but it is a start.
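Taken together, these instruments amount to a crude recipe for labeling the stages of sleep, and it can be caricatured in a few lines of code. The sketch below is only a caricature: the thresholds are the approximate frequency figures given in the next section, and the function is my own invention. But it makes the key point plainly: it is the silent muscle (EMG) channel, not the EEG, that separates REM sleep from plain wakefulness.

    # A caricature sleep-stage labeler from two channels: the dominant
    # EEG frequency (Hz) and whether the EMG shows muscle activity.
    # Thresholds are the rough figures quoted in the next section.
    def stage(eeg_hz, muscles_active):
        if eeg_hz <= 2:
            return "deep (delta) sleep"
        if 8 <= eeg_hz <= 12:
            return "drowsy (alpha)"
        if eeg_hz > 12:
            # A fast, waking-style EEG: only the muscles tell us which.
            return "awake" if muscles_active else "REM sleep, probably dreaming"
        return "light non-REM sleep"

    print(stage(1.0, False))    # deep (delta) sleep
    print(stage(10.0, True))    # drowsy (alpha)
    print(stage(25.0, True))    # awake
    print(stage(25.0, False))   # REM sleep, probably dreaming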
Dreaming Consciousness and Waking Consciousness

Study of the sleep cycle using EEG has shown that there are four discernible stages of what we might call “normal,” or non-REM, sleep, each having its own pattern of electrical brain waves and representing what in layperson’s terms we might call “lighter” and “deeper” sleep. In the course of a night, we rise and fall through these different
depths of sleep several times, and superimposed on this rhythm—interrupting it—will be periods of REM sleep. The EEG pattern of standard deep sleep is made up of so-called delta waves. They are steady and comparatively slow (one-half to two cycles per second), compared with the eight to twelve cycles per second of the alpha waves that characterize the drowsy falling-asleep stage. They are also of a relatively high voltage (75 microvolts or more), which is half as great again as during the normal waking state. REM sleep, by contrast, is characterized by an EEG pattern that is very similar to the waking state: fast, random, and low voltage. Thus judged by the brain’s electrical activity, vivid dreaming is much more like being awake than being asleep. Perhaps that should not surprise us, since that is how it feels when we are dreaming, but to be told by a scientist that waking and dreaming are actually much the same still comes as a bit of a shock. I consider why that might be in just a moment. Moving from the EEG to the EOG trace, which records eye movement, we again find that the waking and REM sleep patterns are similar to each other and very different from non-REM sleep. Only the EMG, measuring muscle activity, tells a different story. Muscles are active when awake, slightly active when in non-REM sleep, and totally inactive when in REM sleep. That is just as well; otherwise when we dream of flying through the window we might just find ourselves actually going through it, with dire consequences. This brings us to the nub of the problem about dreams. They seem real to us as we experience them because we seem to be functioning—albeit a little oddly—in the usual kind of way: we see things, we touch things, we hear things, we do things; in short, we seem to be interacting normally with our environment. Yet despite the fact that in my dream I see and hear and run, in physical reality my eyes remain closed, my bedroom is silent, and my body lies still upon my bed. How can this be? The traditional dualist answer is that while my body sleeps, my nonphysical soul or mind goes off on its own journey into the spirit world. There it learns things and does things that may subsequently be of value in the physical world to which it will return when the body awakens. That is how dreams were understood in the ancient world. It is how Nebuchadnezzar’s dream would have been understood by his contemporaries. It is no longer the immediate explanation that most of us would give for normal dreams, although it remains a favorite for many people when asked for an opinion on certain related phenomena, such as near-death experiences (NDEs). It is also the standard paradigm in certain cultures not influenced by European Enlightenment thinking, such as some of the native tribes
Print depicting Daniel interpreting King Nebuchadnezzar’s dream (Historical Picture Archive/CORBIS)
of South America, in which shamans (medicine men and women and spiritual guides) use “dream journeys” as a standard method of exploration and diagnosis (see further below). Among scientific students of consciousness, a more mundane but hardly less exciting story is told. It is based on the kind of evidence I have already been describing, which comes from sleep laboratories and the experimental work carried out since the 1970s by investigators such as J. Allan Hobson of Harvard Medical School (Hobson 2001). Their hypothesis, in brief, is that neuronal activity in the brain which in the waking state causes “actual” experiences of sight, sound, movement, and so on, in the dream state causes “virtual” experiences of sight, sound, and movement. The difference is that instead of being triggered by external sensory stimuli, these dream experiences are triggered by spontaneous events in the brainstem. This unpatterned neural activity flows from the brainstem to the cortex, where it results in the often bizarre mixture of sights and sounds associated with dreaming. It is rather as though Wilder Penfield or Benjamin Libet were there, randomly stimulating first this cortical area and then that one, creating a disconnected stream of experience that bears no relation to anything in the subject’s immediate environment. The brain is cut off, as it were, from the outside world. The sensations it experiences are being internally generated, and the subsequent
“commands” generated by the cortex for the motor system are blocked by inhibitory chemicals that prevent the electrochemical messages from being passed to the muscles and so put into action. Rodolfo Llinás, a neuroscientist at the New York University School of Medicine, whose theory that consciousness is evolutionarily old was mentioned in Chapter 7, has compared the rhythms of electrical brain activity during wakefulness and sleep. With his colleague Urs Ribary, he used magnetoencephalography to research the pattern of 40-hertz oscillations, whose apparent links to conscious states were popularized by Francis Crick and Christof Koch, as described in Chapter 4. Their results confirmed the similarity of REM sleep and waking states in relation to electromagnetic patterns, but with one significant difference. In the waking state, the thalamocortical system’s internally generated 40-hertz oscillations were “reset” by sensory input to bring the internal conscious imagery into line with perceived external reality. But in the case of REM sleep, no such resetting took place, despite the fact that other studies have shown the thalamocortical system to be accessible to sensory input during sleep. Llinás concludes that this is why our dream world is largely untroubled by anything happening in the outside world (Llinás and Ribary 1993). The dreaming and waking brain are both attentive to their intrinsic states, but in the former case that attention is not disturbed by external stimuli. So far as the dreaming brain is concerned, the sensory input is not functionally relevant because it is not correlated temporally with the ongoing internally generated thalamocortical activity. In other words, dream imagery is independent of external reality. If a similar disjunction between internal and external stimulation is artificially induced during a time of wakefulness, the result is hallucination. The most influential figure in relation to dreams throughout the twentieth century was Sigmund Freud. Allan Hobson takes some familiar characteristics of dreams that Freud uses in his psychoanalysis and explains them by his new “activation-synthesis” model of how dreaming works. Each of the dream features is now given a biological explanation. That is not done to disparage Freud but simply to point out that his psychoanalytic model was based on a late-nineteenth-century understanding of brain physiology that has been superseded by the beginning of the twenty-first century. Some of the characteristics of dreams explained by Hobson are the following:
1. Instigation. Hobson attributes the onset of dreaming to spontaneous chemical activity in the brainstem, not repressed unconscious wishes or suchlike.
2. Visual imagery. Dream imagery is the primary response of those parts of the cortex that are normally stimulated by signals originating from the retinas. But in dreams the origin of the signals is not the sensory receptors in the eyes but a set of spontaneous and naturally occurring changes in the chemistry of the brainstem.
3. Delusion. In our dreams we are deluded into thinking that we can do all sorts of impossible things, such as fly unaided. According to Hobson, this is because the part of the prefrontal cortex associated with making sound judgments is not functioning. You may remember from Chapter 7 how poor Phineas Gage suffered a loss of judgment after that part of his brain was destroyed in a rock-blasting accident. Psychologist Larry Squire (who introduced the distinction between declarative and nondeclarative memory discussed in Chapter 8) has described one of the roles of the prefrontal cortex as the placement of remembered events in their proper context. It is easy to see how a failure in this area could prevent normal judgments being made and result in delusions. We might even say that in dreams we can imagine ourselves flying because we have forgotten that we can’t fly. The reason for this loss of function is most probably that chemicals known as neuromodulators, which bring the prefrontal cortex into communication with the rest of the brain, are not produced during REM sleep.
4. Bizarreness. The librettist W. S. Gilbert wrote into the comic opera Iolanthe a vivid description of a dream-cum-nightmare that is sung by one of the characters. Even readers who cannot place the English locations in the following extract will recognize the bizarre sense of inexorable jumping from one unlikely scenario to another that is typical of the dreaming state:
You dream you are crossing the Channel and tossing about in a steamer from Harwich,
which is something between a large bathing machine and a very small second-class carriage;
and you’re giving a treat (penny ice and cold meat) to a party of friends and relations,
they’re a ravenous horde and they all came aboard at Sloane Square and South Kensington stations;
and bound on that journey you find your attorney (who started that morning from Devon),
he’s a bit undersized and you don’t feel surprised when he tells you he’s only eleven;
etc. etc. (Gilbert 1926)
Such oddity is the stuff of dreams, and for Freud it was all bound up with a theory of disguise and censorship. Hobson is at his weakest when talking about how the particular contents of dreams arise; he believes that the form or architecture of dreams is more important for researchers than dream content. Bernard Baars, commenting on this weakness of the activation theory, agrees with Hobson that the oddity and rapid change of contents in dreams represents an attempt by the cortex to make sense of the essentially random signals it is receiving (Baars 1997a, 107). Hobson links bizarreness generally with the engagement of the limbic system (that is, the emotional center of the brain) during dreams and accounts for the predominance of visual and active images by the fact that the visual-motor cortex is relatively easily aroused. It seems likely that the failure of the prefrontal cortex to put things in their proper context—mentioned in the previous paragraph—is also a contributory factor here.
5. Forgetting. Dream laboratory studies, in which subjects are awoken during REM sleep and asked what has been passing through their minds, show that most of us dream for at least an hour and a half each night and probably much more. Yet under normal circumstances, we only remember a tiny fraction of this dream time—not more than 5 percent. The physiological reason is the same as that which accounts for the bizarre nature of dreams. We saw above that in REM sleep the lack of certain neuromodulators effectively cuts off the areas where dream imagery is generated from the prefrontal cortex, a part of the brain associated with working memory (see Chapter 8). That means that the contents of dream consciousness never get processed by the memorizing system and so for the most part are instantly forgotten. Only if we are awoken during the dream does the working memory resume its normal function and capture the sights and sounds and feelings of the dream state.
Allan Hobson is convinced that all aspects of dreaming consciousness will ultimately be explained in terms of the chemicals naturally
The most influential figure in relation to dreams throughout the twentieth century was Sigmund Freud. (National Library of Medicine)
produced and used by the brain for its normal functioning. He accounts for the differences between the sleeping and waking states by the difference in the balance between these various chemicals in the two states. In the title of a 2001 book, he refers to the brain as The Dream Drugstore. He believes that a greater understanding of the way these complex molecules work in normal and dream consciousness will lead to a corresponding understanding of the altered states of consciousness associated with mental disorders such as schizophrenia and the taking of psychedelic drugs like mescaline and lysergic acid diethylamide (LSD). The psychedelics are a subgroup of a more general class of psychoactive substances, and in what follows I use both terms. If we remember that all psychedelics are psychoactive, but not all psychoactive drugs have psychedelic properties, there should be no confusion. The key to it all lies with the way that the brain’s nerve cells or neurons communicate with each other by releasing small amounts of chemicals—normally called “neurotransmitters”—that are manufactured in the cell body. When a neuron is electrically stimulated to a certain critical degree, it “fires,” as we say, and some of the chemical transmitter passes across the narrow gap, or synapse, between cells (as described in Chapter 2). This movement in turn has an effect on the neighboring cells. That sounds fairly straightforward, but since the 1980s this simple picture has been modified. First, it is now apparent that these chemicals that act as neurotransmitters are not simply squirted out, as it were, in short bursts that establish a momentary signal that is then cut off, like a flash of a Morse code lamp. When they leave the cell that produced them, they enter the brain fluid immersing the neurons and have a continuing effect. That is why the term “neuromodulator,” which I used earlier and which implies a wider ability to affect (or modulate) neuronal behavior, is now often used in preference to the more “linear” expression “neurotransmitter.” Second, neuromodulators sometimes seem to work by enhancing or
inhibiting the effectiveness of each other's operation and so have a far from simple relationship with each other. Take serotonin, for instance. It is a neuromodulator that is implicated in many neurological processes, including sleep. It is present in the brain fluid during dreamless sleep, and it is one of the chemicals mentioned above, whose absence during REM sleep is thought to isolate the prefrontal cortex and so result in some of the characteristic oddities of dream consciousness. But the way that serotonin helps to induce deep sleep is to inhibit another neuromodulator called acetylcholine, which has the job of keeping the brain cells alert and active. By suppressing it, serotonin allows the brain to relax and sleep. But during REM sleep, serotonin is itself embargoed, which means that the acetylcholine gets busy again activating the neurons that then become responsible for all our weird dream imagery. Then when the block comes off the serotonin, it can again repress the acetylcholine, and we slip back into dreamless sleep.
Even during the daytime, when we are basically awake, most of us experience periods of relative alertness and drowsiness. This is a well-known phenomenon: all conference speakers dread being put in the "graveyard" session after lunch, when half the delegates will have gone off to their rooms to sleep, and the other half will probably be asleep as well, even if they are sitting in the lecture hall. What is happening in our daily pattern of higher and lower arousal (to use the technical term) is that the balance between our serotonin and acetylcholine levels is shifting, first one way and then the other. When serotonin gains too much of an upper hand, we doze off; when acetylcholine is in the ascendant, we are bright and alert. In our sleep, the natural suppression of serotonin results in REM sleep and its attendant dreaming. But what happens if we artificially suppress the serotonin levels in our brain while we are awake? Well, that is exactly what happens if we take the psychedelic drug LSD, and the effect of that is well known to us, by repute if not by personal experience. The outcome is a kind of hallucinatory waking dream, in which the bizarreness of REM dreams is absorbed into waking consciousness. That cannot be the whole answer, however. Mescaline, for example, the psychoactive substance used and written about by Aldous Huxley in the 1950s, has a hallucinatory effect similar to LSD, but it appears not to affect the serotonin system at all (Huxley 1972). But a drug such as the more recently introduced ecstasy—which is known to cause a huge release of serotonin—does not make its users fall asleep or lose consciousness, at least not immediately. Another familiar drug that acts on serotonin levels is Prozac. Physiologically, it artificially
raises the level of the neuromodulator in the brain fluid by inhibiting its normal reabsorption by the cells that produced it. Mentally, it reduces depression. This particular drug seems relatively free of unwanted side effects, although in some patients it is ineffective because their natural system of checks and balances feeds back to the serotonin-producing cells the message that the concentration of the chemical is getting above normal, and further release is curtailed. The importance of balance in dealing with brain chemicals is well illustrated by another neuromodulator, dopamine. Its overproduction is associated with schizophrenia, a disorder whose symptoms bear some likeness to dream states. But if it is suppressed too vigorously, the poor patient, relieved of his or her psychotic symptoms, begins to exhibit physical side effects typical of Parkinson's disease. This tradeoff between sensory and motor disorders is in some ways reminiscent of a contrast we saw in Chapter 8, on memory. A loss of declarative memory (a conscious function) in Alzheimer's patients resulted from degeneration in the cortex, whereas the loss of motor control, interpreted as the consequence of a loss of nondeclarative memory (a procedural or nonconscious function), in patients with Parkinson's disease was associated with the atrophying of neurons in the subcortical structures known as the basal ganglia. So far, I have simply assumed that dreaming is a state of consciousness, albeit one that is somewhat different from normal waking consciousness. Not everyone would have accepted this assumption in the past, and it raises the question of whether it is possible to establish real-time contact between the waking consciousness of one person and the sleeping consciousness of another. Well, this is exactly what is claimed in relation to the phenomenon known as "lucid dreaming." The most prominent serious researchers in this field are Stephen LaBerge and William Dement at Stanford University in California, and their work has provided what many scholars regard as objective evidence for the view that dream states are indeed conscious states (see LaBerge and Dement 1982; LaBerge 2000; and Document 10, this volume). Lucid dreaming requires, on the part of the dreamer, a conscious awareness that one is dreaming. This in turn enables a certain degree of manipulation of the normally random course of the dream narrative. We could say that because the dreamer knows that it is "only a dream" and not "really happening," the idea of dictating what happens next is no more strange than if we were writing a story. Adepts at lucid dreaming say it is quite easy to get started, but not everyone would agree, and some researchers remain skeptical about the whole
enterprise. The following notes are based on Allan Hobson's personal reports. Just put a notebook or tape recorder by your pillow, he advises, and tell yourself, before going to sleep, to be on the alert for the crazy kinds of things that should indicate to you that "this is a dream." Then whenever you wake up, record immediately everything you can remember of your sleep time. In a matter of weeks, apparently, this regime will ensure that your dream recall increases by leaps and bounds, you are progressively more aware of when you are dreaming, and you begin to be able to exercise voluntary control over your dreams. Because the lucid dreaming state is in some important ways closer to the waking state than is ordinary REM sleep, this voluntary control can include telling yourself to wake up—so that you can record your dream contents—and then to reenter the dream on going back to sleep. But vivid recall and even voluntary waking are still a significant distance from real-time communication with an awake person. Lucid dream states, like all REM sleep, involve the loss of awareness of external stimuli and the loss of all muscle movement, except the minimum needed to maintain essential body functions (such as breathing) and the eponymous rapid eye movements that give dreaming sleep its name. LaBerge and Dement have found that lucid dreamers can in fact exercise some voluntary control over these eye movements and so use them as a means of sending signals to the "outside world" while asleep and dreaming. To send messages in the other direction, use is made of the familiar phenomenon that hearing is the last sense to be lost on entering sleep or anesthesia and is the one most likely to break through the sense barrier of a sleeping person. In one experiment, LaBerge and Dement arranged in advance that on hearing a particular tone, the lucid dreamers would deliberately make three long right-and-left eye movements (to indicate having heard the tone), then count out one to ten in order to measure ten seconds, and then make the three eye movements again to indicate the task was finished. They would then estimate another ten seconds (this time without counting) and make the eye movements a third time at the end of the estimated period. As a control, the same test was done when fully awake. When the EEG, EOG, and EMG traces are compared, the EEG and EMG differ as one would expect between the waking and REM sleep states, but the eye movements are recorded as being almost identical. In fact, the period of ten seconds mentally estimated without counting is more accurate in the lucid dreaming state than in the waking one. What seems to be happening in lucid dreaming is that a conscious
state is formed that is similar to ordinary REM dreaming but with some additional features more typical of waking consciousness. In addition to voluntary eye movements, other muscular actions—such as a voluntary suspension of breathing and the movement of a finger—have also been reported as being carried out in response to a tone during lucid dreaming. If such claims are substantiated, it suggests that we should think in terms of a sliding scale between sleeping and waking conscious states. Objective studies of these intermediate states, using all the available monitoring and scanning technology, should help both to confirm the validity of the reports and to indicate the areas of the brain and the types of neuromodulator that become activated or deactivated in various states of consciousness.
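The signalling protocol itself is simple enough to be sketched in code. The following is my own toy reconstruction, not LaBerge and Dement's actual analysis software; the threshold value and the sample data are invented for illustration. It scans an EOG trace for the prearranged pattern of three deliberate right-and-left eye sweeps.

```python
# Illustrative detector for the prearranged lucid-dream eye signal:
# three large right-and-left sweeps in the EOG trace. A toy sketch only;
# the threshold and the units are arbitrary placeholders.

def detect_eye_signal(eog_samples, threshold=200.0):
    """Return True if the trace contains three full right-and-left sweeps."""
    sweeps = []
    for sample in eog_samples:
        if abs(sample) >= threshold:
            direction = 1 if sample > 0 else -1
            # Count a new sweep only when the direction alternates,
            # so one sustained deflection is not counted twice.
            if not sweeps or sweeps[-1] != direction:
                sweeps.append(direction)
    return len(sweeps) >= 6  # three right-and-left pairs

# A fabricated trace with three clear right-left pairs (positive = right).
trace = [0, 250, -260, 240, -255, 245, -250, 0]
print(detect_eye_signal(trace))  # True
```

In the real experiments, of course, the same idea is applied to continuous polygraph records, with the EEG and EMG channels confirming that the signaller is genuinely in REM sleep.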
God and the Brain
Stanley Krippner is a psychologist based at the Saybrook Graduate School in San Francisco. His research spans two aspects of the current chapter: dreams and visions. They form part of the range of what are sometimes called "altered" or "nonordinary" states of consciousness. As well as being the former director of a dream laboratory, Krippner has an interest in the application of altered states to ritualistic situations associated with healing and spiritual guidance, in what is known as shamanism. He describes shamanism as a group of techniques by which its practitioners enter the "spirit world," purportedly to obtain information that is of use to guide and heal members of their social group (Krippner 2000). Shamans regard the totality of outer and inner reality as one vast system of signals, which they can access and decode by deliberately altering their states of consciousness. Chanting, drumming, dancing, fasting, sleep deprivation, lucid dreaming, refraining from—or participating in—sexual activity, and the consumption of mind-altering substances are among the methods used to induce the necessary changes in awareness. The inducing of trances often involves the simultaneous use of two or more of these techniques. Anthropologist Michael Winkelman has studied the records of religious and magical practices in nearly fifty different societies, both past and present. He found that the role of the shaman develops and changes as societies become more settled and structured in their religious and political life. He has proposed that the evolution of the shamanic class in primitive societies across the globe represented a biologically derived specialization of function and that neurological developments in the brain made possible the special kind of consciousness and knowing that is associated with shamanism (Winkelman 2000).
Hamatsa shaman possessed by supernatural power after having spent several days in the woods as part of an initiation ritual. Shamans regard the totality of outer and inner reality as one vast system of signals, which they can access and decode by deliberately altering their states of consciousness. (Library of Congress)
One of the substances still widely used by the indigenous tribes of the upper Amazonian region is ayahuasca, a psychoactive brew made chiefly from a mixture of two plants. Its use and effects—especially the powerful visions for which it is best known and the mystical states it can induce—are currently the subject of considerable scholarly study. Professor Benny Shanon, a cognitive psychologist at the Hebrew University of Jerusalem, and Dr. Luis Eduardo Luna, an anthropologist who has set up a research community in the Brazilian rain forest, are leading figures in this exploration, which is both intellectual and experiential (Shanon 2003; see Luna and Amaringo 1993 for examples of artwork inspired by ayahuasca visions). The religious application of psychedelic drugs such as ayahuasca
is indicated by the term entheogens, a loan word from Greek that was coined in 1979 and means "bringing forth the divine within." Professor Huston Smith, who before retirement researched the philosophy of religion at Syracuse University in upstate New York, has studied these substances since the 1950s, and he emphasizes that the chemical alone is not enough to produce an entheogenic (religious or visionary) effect. Three elements—the potential entheogen itself, the people consuming it, with the background and expectations they bring to the occasion, and the whole context in which the activity is undertaken—subtly interact to bring about a unique experience. Experimenters with psychoactive substances, from Aldous Huxley in the middle of the twentieth century through Huston Smith to Benny Shanon, are unanimous in their insistence that chemicals do not cause visionary experiences but rather occasion them. Just as the baiting of the hook and the patience of the angler do not cause the fish to bite but create the conditions under which it might happen, so the appropriate use of entheogens provides the occasion for a visionary or mystical experience to occur. Huston Smith believes that the lesson to be learned from the entheogens and the broadly similar experiences to which they give rise in many different settings, over many centuries, and across continents, is that "there is another Reality that puts this one in the shade" (Smith 2000, 133). That, of course, is the traditional religious view. But another possibility is that the broad uniformity of experience results from the broadly similar construction of all human brains. In this view, the correlation with similar brain physiology accounts for the similarity of the conscious experiences reported by ritual practitioners and consciousness researchers alike, when they have consumed these substances. The same general point applies equally to all experiences that are credited with a religious or spiritual dimension. The neurologist Oliver Sacks, whose sympathetic accounts of some of his patients with unusual symptoms have become best-selling books and in one case a successful commercial movie, once discussed with me the case of the medieval mystic and visionary, Hildegard of Bingen. She believed that her visions were of heaven; Sacks—in an early book of his on migraine—drew attention to the similarity of her descriptions to some of the symptoms of that disabling condition. But he did not want to say that her alleged visions were "only migraines." He saw no contradiction between acknowledging the medical condition and at the same time treating it as an enabling circumstance for her mystical experience. "It would be reductive to call these just migraines," he said. "It would be misleading. I certainly think that
physical states can exist as porters to the spiritual" (Sacks 1994, 239). The language here is very close to Huston Smith's description of entheogens as "occasions" of visionary experience. Few people have done more to promote research into the neurophysiology of religion than Professor Andrew Newberg of the University of Pennsylvania, who collaborated closely with the neuropsychologist and anthropologist Eugene d'Aquili until the latter's death in 1998. One of Newberg's pieces of research involved taking experienced meditators and making brain scans of their meditation states. Using the technique known as single photon emission computed tomography (SPECT), he could allow subjects to meditate in conditions of their own choosing rather than carry out the whole exercise in, say, the noise and discomfort of an fMRI scanner (d'Aquili and Newberg 1999). SPECT works in a way similar to PET (see Chapter 2) and dates from the early 1960s, when the idea of emission tomography was first introduced. As a clinical tool, it predates PET, and its imaging is inferior because the attainable resolution and sensitivity are lower. However, the availability of new chemicals to improve the performance of SPECT, particularly for the brain and head, and the economic aspects of the method (it costs about a third of PET) make it attractive for researchers with limited budgets. One of Newberg's volunteers was a colleague who had practiced Tibetan Buddhist meditation for twenty years (see Begley 2001). The exercise took place in a darkened candlelit room with incense sticks burning and an atmosphere of stillness and calm. When the meditator reached what he regarded as the crucial spiritual point, he indicated the fact to Newberg, who began the flow of a radioisotope tracer through a needle previously inserted into a vein in the volunteer's arm. In this way, the blood flow reflected by the position of the emitted radiation—recorded shortly afterwards in the scanning machine—actually related to the time of the meditative state. In addition to Buddhist meditators, Newberg did scans of Franciscan nuns. The meditative techniques and aims of the two religious groups were different, but their scans showed a common feature. The prefrontal region associated with attention showed activity because the meditations all included a degree of focusing upon a mental image of some kind. Activity was also indicated in part of the limbic system (hypothalamus, amygdala, hippocampus), and Newberg thinks that these two areas of activity result in a "reverberating loop" that intensifies the effect of the focused attention. But what is known as the orientation association area, part of the posterior parietal (PP) cortex at the top of the head toward the back, showed dark on the scan, indicating a
lack of activity. As its name suggests, this region is normally associated with the processing of information that allows the body to get its spatial bearings, to assess where it is positioned relative to the environment. We came across the PP in Chapter 3, as part of the upper visual pathway, the one connected with spatial vision. Among other things, the association area enables us to tell where the limits of our bodies are; that is, it marks the self-other boundary. Newberg thinks this region of the cortex is somehow being inhibited by the reverberating loop and that its deactivation explains a commonly reported feature of mystical and meditative states: a sense of unbounded oneness. Sometimes described as being at unity with the whole universe, this state was labeled "cosmic consciousness" in the book of that title by Richard Bucke; the Christian tradition speaks of mystical union with Christ, the "unitive way" that is the ultimate goal of the spiritual life. Other religious and philosophical traditions speak simply of an absorption into the Absolute, or the One. Newberg is keen to remind us that the presence of a neurological correlate, which might explain the biological aspect of the sense of being boundless, does not necessarily make it an illusion. If anything, it is a way of affirming the physical as well as spiritual reality of the reported experience. Another part of the brain thought to have links with religious experience is the inferior temporal (IT) lobe. Part of the lower visual pathway, it was associated with object vision and color discrimination in Chapter 3. It may be thought of as the visual association area equivalent to the spatial association area in PP. We also came across IT in Chapter 4, as one of the possible sites for the neural correlate of visual experience. So stimulation of this area—by whatever cause—might well lead to visions that could have a religious interpretation. An extreme form of disruption to the normal pattern of neural activity in IT is temporal lobe epilepsy, a kind of cortical electric storm with dramatic consequences. I have been involved in a small way with one current research project into possible links between epilepsy and normal religious experience. This work is being carried out by Michael Trimble and his colleague Michiko Konno at the Institute of Neurology in London. Vilayanur Ramachandran, director of the Center for Brain and Cognition at the University of California at San Diego, is a neuroscientist who regularly courts controversy by his reductive explanations of the higher aspects of human experience such as religion and art. In his coauthored 1998 book Phantoms of the Brain, he pointed to Saint Paul, Saint Teresa of Avila, Fyodor Dostoevsky, and Vincent Van Gogh as being among the world-renowned historical figures whose religious and artistic genius is suspected by historians and scientists to
have been linked to epilepsy (Ramachandran and Blakeslee 1998). In his novel Lying Awake (2000), Mark Salzman assumes a causal connection between epilepsy and religious ecstasy. The story hinges on whether his heroine, Sister John, will undergo surgery to relieve the unpleasant symptoms of her condition, knowing that it will almost certainly bring to an end her mystical visions. At a less extreme level, Ramachandran speculates that all religious experience and feeling results from enhanced electrical stimulation in IT. Richard Bentall of Manchester University in the United Kingdom has proposed an equivalent origin for verbal, as opposed to visual, religious perceptions. He speculates that if Broca’s area—associated since the 1860s with speech production—becomes activated during meditation or a similar highly focused mental exercise, then what we would normally recognize as our own inner voice can be misinterpreted as the voice of God or some other external agent (see Cardena and Krippner 2000). This misattribution would arise in the same condition of lost orientation to which Newberg attributes the breakdown of the self-other boundary and the consequent religious sense of oneness. Newberg sees the same basic mechanism being responsible for the self-effacing effects of attention-focusing repetitive chanting or drumming or dancing in ritualistic settings. We have already mentioned how an extreme version of this effect can be self-induced by shamans, but the same results can be seen at a lower level in other formalized settings that need not even be religious. The relentless musical beat and strobe lighting at a disco—even without the added influence of drugs—can result in a greater or lesser loss of individuality among the dancers and a corresponding growth in the sense of togetherness. The beat of the drum and tramp of the marching boots of the military may contribute in a similar way to an individual’s sense of comradeship and incorporation into the company, so lessening the normal concern for self-preservation.
Art and the Brain
From the bizarre images of dreams and the little-understood images of religious visions, it is not a huge step to the more common—but no less curious and controversial—kind of visual perception that constitutes the enjoyment of visual art. Indeed, there is a strong association between art and religion, even if in some cases it is a negative relationship. The prohibition of images in some streams of the Abrahamic faiths provides the most obvious example of this hostility.
Neuroscientist Vilayanur Ramachandran declared that all art is caricature and that its appeal comes down to a few basic facts about the way neurons work. This, he claims, explains many familiar experiences—such as why men find the hourglass figure of Marilyn Monroe sexy. (Library of Congress)
But if, as was suggested in Chapter 3, perception is really geared for action and survival rather than aesthetic pleasure, then why do we bother with art, and why do we find it pleasurable? Professor Ramachandran, with characteristic boldness and disregard for established opinion, has declared that all art is caricature and that its appeal comes down to a few basic facts about the way neurons work. This, he claims, explains many familiar experiences—such as why we
recognize a cartoon squiggle quicker than a full-color photograph, and why men find the hourglass figure of Marilyn Monroe sexy. With his colleague William Hirstein, Ramachandran proposes a number of "laws of artistic experience," three of which seem to be especially significant: a psychological phenomenon called the "peak shift effect," the principle of "grouping," and the benefit of focusing on a single visual cue (Ramachandran and Hirstein 1999). The peak shift effect is a well-known principle in animal discrimination learning. For example, if a rat is taught to discriminate a square from a rectangle and is rewarded for choosing the rectangle, it will soon learn to respond more frequently to the rectangle. Moreover, the greater the ratio between the long and short sides—that is, the less square it is—the "better" the rectangle is in the rat's eyes. That is the "peak shift effect." Ramachandran argues that this principle holds the key to understanding the evocativeness of much of visual art. The accentuated hips and bust of the Goddess Parvati in the Chola bronze, for instance, give what is essentially a caricature of the female form. The artist has chosen to amplify the essence of being feminine by moving the image abnormally far toward the feminine end of the female/male spectrum. Ramachandran conjectures there may be neurons in the brain that represent sensuous, rotund feminine form as opposed to angular masculine form. The result of the artistic amplification is a superstimulus in the domain of male/female differences, to which these neurons respond. The artist striving to evoke a strong emotional response may exploit the peak shift effect in ways other than shape. For instance, a Boucher, a Van Gogh, or a Monet may be thought of as a caricature in color. A second basic principle suggested by Ramachandran is grouping. Consider the Dalmatian dog picture. It is seen initially as a random jumble of splotches, and the number of potential groupings of these splotches is huge, but once the dog has been "seen," our visual system links only a subset of these splotches together, and it is impossible not to hold on to this group of linked splotches. Our neuronal circuitry works in such a way that the discovery of the dog and the linking of the dog-relevant splotches generate a pleasant "Aha" sensation, and we can no longer not see the dog. Artists understand the pleasure given by such effects, and they exploit them in their work, but Ramachandran insists that the original value of such grouping to pick out objects was a matter of life and death: Evolution selected for survival, not for artistic enjoyment. So in the jungle, spotting a striped tiger among the striped foliage not only earned the reward of a pleasant sensation but saved one's life into the bargain.
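The peak shift effect described above can be given a minimal quantitative sketch. The model below is my own illustration, not Ramachandran and Hirstein's: it follows the textbook treatment of peak shift as an excitatory generalization gradient around the rewarded stimulus minus an inhibitory gradient around the unrewarded one, with the Gaussian shapes and all parameter values invented for the example.

```python
# Toy model of the peak shift effect: an excitatory generalization
# gradient around the rewarded rectangle (S+) minus an inhibitory
# gradient around the unrewarded square (S-). Shapes and numbers are
# illustrative assumptions only.
import math

def net_response(aspect, s_plus=1.5, s_minus=1.0, w_inh=0.8, width=0.3):
    excite = math.exp(-((aspect - s_plus) ** 2) / (2 * width ** 2))
    inhibit = w_inh * math.exp(-((aspect - s_minus) ** 2) / (2 * width ** 2))
    return excite - inhibit

# Scan aspect ratios: 1.0 is the square, larger is a longer rectangle.
ratios = [i / 100 for i in range(100, 301)]
peak = max(ratios, key=net_response)
print(f"rewarded rectangle at 1.5, but strongest response at {peak:.2f}")
```

The strongest response falls not at the trained rectangle but beyond it, further from the square: the "better" rectangle of the rat example, and the exaggerated Parvati of the bronze.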
The principle of grouping. When your neuronal circuitry discovers the Dalmatian among the random splotches, it generates a pleasant "Aha" sensation, and you can no longer not see the dog. (Imprint Academic)
The third principle emphasized by Ramachandran is the need to isolate a single visual modality (for example, shape or color) before amplifying the signal in that modality. He claims that the brain's ability to do this makes an outline drawing or sketch more immediately effective as art than a full-color photograph. Think of a photograph of Albert Einstein, with depth, shading, texture, and so on. What is unique about Einstein is the form of his face (as amplified by the caricature), not the other details; even if they make the picture more humanlike, they actually detract from the efficacy of the form cues by creating a distraction. This phenomenon explains not only why one "gets away" with just using outlines, but also why they are actually more effective than a photo, despite its having more information. In art, "less" really is "more." Ramachandran believes that his principles can be tested experimentally, employing the skin conductance response (SCR) technology used in "lie detectors." The size of the SCR is a direct measure of the amount of limbic (emotional) activation produced by an image. It is a better measure, as it turns out, than simply asking someone how much emotion he feels about what he is looking at, because the verbal response is filtered, edited, and sometimes censored by the conscious mind. Measuring SCR allows direct access to unconscious mental processes. The experiment would compare a subject's SCR to a caricature of, say, Einstein, with his SCR to a photo of Einstein. Intuitively, one would expect the photo to produce a large SCR because it is rich
in cues and therefore excites more modules. If one found, paradoxically, that the caricature actually elicited a larger SCR, it would provide evidence for the operation of the peak shift effect and other principles. Similarly, one could also compare the magnitude of an SCR to caricatures of women (or to a Chola bronze nude or a Picasso nude) with the SCR to a photo of a nude woman. It is conceivable that the subject might claim to find the photo more attractive at a conscious level while registering a large unconscious aesthetic response—in the form of a larger SCR—to the artistic representation. That art taps into the unconscious is not a new idea, but such SCR measurements may be the first attempt to test such a notion experimentally. Not surprisingly, Ramachandran's attempt to reduce aesthetic experience to a set of physical or neurobiological laws has met with stout criticism. First, there has been a predictable feminist outcry against a heavy reliance on the female form and the erotic in his examples (Wheelwell 2000). Then he seems to equate arousal (as measured by SCR) with a positive aesthetic response—an assumption felt by critics to presuppose the reductionist case he is trying to prove. Taken together, these two points have formed the basis of an accusation that Ramachandran is confusing high art with pornography—a charge that he vehemently denies. Ramachandran's "science of art" has also been attacked from the scientific side, on the grounds that he has not yet conducted any serious empirical tests of the ideas. At best, what have been offered are a manifesto for a research program and some suggestions for possible lines of investigation. Even then, it has been pointed out that the narrow range of examples used hardly justifies his lofty claims to be dealing with the whole of art, let alone to have uncovered the "laws of aesthetic experience." Criticism has centered on the lack of proportion between the narrow view of art taken by Ramachandran and the grandiloquent claims he makes for his theory (Wallen 1999; Ione 2000). Another brain scientist who has chosen to write on art and the brain is Professor Semir Zeki, a neurophysiologist based at University College London, who is a world authority on the workings of the visual system. He says the chief task of the brain is the search for knowledge, in particular the search for the permanent characteristics of things—constancies, as he calls them—in an ever-changing environment. And he claims that art has exactly the same function. Indeed, he treats art as something created by the brain to extend its own reach in this task, and he suggests that the methods exploited by artists are in fact reflections of the workings of the neural system. Zeki made his reputation arguing for a modular and parallel-processing understanding of
the brain’s visual system, in other words, taking different aspects of reality and isolating them from each other in order to deal with them separately. That is exactly how he sees the various movements in art making their contributions. Henri Matisse and the fauvists, for instance, concentrating on color almost to the exclusion of all else, reflect the work of that part of the lower visual pathway (the area of the visual cortex known as V4) that is believed to process color. Zeki actually goes so far as to call artists neurologists who study the brain with their own unique methods, but he has to admit that the conclusions they reach about the organization of the brain are “interesting but unspecified” (Zeki 1999, 80). At bottom, like Ramachandran, he is a reductionist who puts biology before the art, claiming that any worthwhile theory of aesthetics must be based upon an understanding of the physical workings of the brain. Reductionism does not have to be destructive, however. Ramachandran, in answer to his critics, specifically denies that to explain some higher level of activity in terms of its components is to explain it away. And Zeki insists that no profound understanding of the workings of the brain is likely to compromise our appreciation of art. On the contrary, the influence is more likely to work in the opposite direction, and we shall come to appreciate the “biological beauty” of the brain (Zeki 1999, 95). Both these writers give prominence to the “top down” element in vision, which is necessary to make sense of the raw data supplied by the eyes. Without this element, there would be no Dalmatian dog, only the black and white splotches.We have seen that dreamed images and mystical visions are generated internally by the brain’s electrochemical activity, without reference—at least at the time they are experienced—to any external visual stimulus. And throughout this book, we have discovered that even everyday looking around us is far from a purely passive receiving of bare facts. In their different ways, Zeki and Ramachandran both seem to be saying that art works by cooperating with our brains to enhance our capacity to see both accurately and creatively.
12
What Is It Like to Be Conscious?
In 1974 the philosopher Thomas Nagel published a short paper in the Philosophical Review with the intriguing title, "What Is It Like to Be a Bat?" It has justly become one of the most-referenced contributions to the philosophy of mind, partly because its title is so memorable, but chiefly because it focuses on the topic at the heart of consciousness studies: the nature of subjective experience (see Document 5, this volume). It is often discussed in terms of "qualia," a word introduced into the philosophy of mind in the 1920s (Lewis 1929, 121). It is the plural form of the Latin word "quale," meaning "having some quality or other," and it refers to those subjective qualities such as sound and color, pain and anticipation, that make up our conscious experience. They are the focus of this final chapter. The debate about qualia highlights the central problem faced by anyone attempting to construct a science of consciousness. As Nagel puts it in a brief article on qualia in The Oxford Companion to Philosophy, science demands a description and analysis of its subject matter "in objective physical terms which are comprehensible to any rational individual independently of his particular sensory faculties." But qualia resist such treatment. They have a "subjective character . . . comprehensible only from the point of view of certain types of conscious being" (Nagel 1995, 736). It is this business of a creature's "point of view" that so exercises Nagel. It is not just the problem—hard enough in itself—of how to study, classify, record, and analyze subjective phenomena objectively. It is the problem of putting oneself in the place of other conscious beings and experiencing what they experience in the way they experience it. For, as Nagel says in a phrase that has entered the literature, an organism is conscious "only if there is something that it is like to be that organism" (Nagel 1974/1997, 519). The question is: How can we ever know what it is like to be anything or anyone other than ourselves? How can we experience the world from any point of view other than our own?
Even among ourselves, it is not possible for me to be absolutely sure that you are conscious and experience things as I do. You and I can both look at a red poppy, and if we both have tolerably good color vision we shall agree without any difficulty that it is indeed colored red. And unless we are especially asked to think about it, I will assume that you are having the same sensation when you see red as I am having when I see red, and vice versa. But how do we know that is really the case? I have a consistent experience of seeing red that enables me to say, on any number of different occasions, that a given poppy is red and not blue. The same is true for you. But how do we know that the experience I call "seeing red" is not the experience that you call "seeing blue"? This question is known to philosophers of mind as the problem of the inverted spectrum, and it has been around since the days of John Locke (1632–1704). It is part of the wider problem that centers on the private nature of conscious experience: only I can experience what I experience, and only you can experience what you experience. So the colors of the spectrum that I call (and experience as) red, orange, yellow, green, blue, indigo, and violet might be the colors that you also call red, orange, yellow, green, blue, indigo, and violet, but that you experience as what I call (and experience as) violet, indigo, blue, green, yellow, orange, and red.
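A small sketch may make the behavioral point vivid. The code below is my own illustration of the thought experiment, with the inversion mapping invented for the purpose: two observers attach opposite inner experiences to the same stimulus, yet their public reports never diverge.

```python
# Toy rendering of the inverted spectrum: the private "experience" is
# permuted for one observer, but both have learned to attach the public
# color word to whatever the stimulus causes, so reports always agree.
SPECTRUM = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]
INVERTED = dict(zip(SPECTRUM, reversed(SPECTRUM)))  # red <-> violet, etc.

stimulus = "red"  # the poppy both observers are looking at
for observer, quale in (("me", stimulus), ("you", INVERTED[stimulus])):
    # The inner quale differs; the spoken report does not.
    print(f"{observer}: inner experience = {quale!r}, spoken report = 'red'")
```

No behavioral test distinguishes the two observers, which is exactly why the problem is thought to resist third-person investigation.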
In practice, however, we all assume that each of us experiences red in much the same way and that we all suffer broadly comparable pains when we have a toothache. But even with other humans, there are limits to this commonality of experience. For example, it requires great imagination and empathy for me to enter into the world of someone blind from birth. That person's appreciation of the world must be very different from mine. Sighted people will automatically think of such a world negatively, as just being their own sighted world with one vital mode of access missing. There is no reason, however, why a person who has never seen anything should experience his or her world as being in any way depleted. (A sense of loss will of course be true of a sighted person recently blinded, but not of someone born blind.) The person blind from birth has as total a world as you or I. But it is difficult—probably to the point of impossibility—for a sighted person to experience that world. Now, Nagel says, if even such a comparatively small act of experiential imagination is nigh on impossible, just ask yourself what it is like to be a bat. The question is nicely poised. If he had asked what it is like to be a stick insect, our curiosity would not have been aroused. The world of such a creature is just too far removed from our own for us to be interested. If he had chosen a dog, we might have been fooled—because we so anthropomorphize our pets—into thinking we knew the answer. A bat is just right for the purpose he has in hand. It is a mammal and so has important features in common with us, and its system of echolocation suggests a sensory modality that is neither hearing nor seeing but somewhere between the two. The thought of what such a sense might be like engages us and teases us and ultimately baffles us, which is exactly what he wants. He wants to impress upon us the way in which qualia are tied to the point of view of the experiencer and so are ultimately resistant to objective study. He is not denying that a bat's conscious states are caused by its brain's neural activity, nor that they are closely bound up with its behavior. He is not even saying that a purely physical explanation of consciousness would necessarily be false; but, he says, "We do not at present have any conception of how it might be true" (Nagel 1974/1997, 524). That is because no amount of physical research will ever reveal to us a bat's qualia, and to that extent consciousness remains mysterious. We simply don't know what it is like to be a bat. Nagel's "bat" question is related to but needs to be distinguished from another aspect of the qualia debate known as "the knowledge argument." (My interpretation of the knowledge argument has been informed by Alter N.d.) This argument is directly deployed against the central claim of full-blown physicalism, which says not only that qualia are physically caused but that if we have a full physical description of the brain state associated with a given quale, then we know everything about it that there is to know. Opponents of this view say that even if it were possible to have such a complete physical description, there would still be something more to be known, something accessible only by subjective experience. The central figure in the knowledge argument is a fictitious neuroscientist called Mary, who was introduced into the literature by philosopher Frank Jackson in 1982. He posed the question like this: imagine a brilliant neuroscientist called Mary. She knows everything there is to know about the neurophysiology of color vision. But from birth she has lived and worked in a totally black-and-white environment. Now suppose that one day she is released from her black-and-white room and becomes able to see colors for the first time. The question is: Will she know something new, as a result of experiencing color, that she did not know before, from all her brilliant research about it? Jackson claimed that she obviously would learn something new and that this was enough to prove physicalism false. Here is the line of argument:
1. Mary, before her release, knows everything physical there is to know about seeing red (because that is stipulated in the way the story is set up). But
2. Mary, before her release, does not know everything there is to know about seeing red (because she will learn something new about it on her release). Therefore
3. There are some truths about seeing red that escape the physicalist account, and so
4. Full-blown physicalism is false, and qualia cannot be identified with physical properties.
Jackson’s physicalist opponents attacked the knowledge argument on two fronts, both depending upon certain distinctions within the broad meaning of the verb “to know.” David Lewis and Laurence Nemirow drew a distinction between “knowing that” and “knowing how.”What Mary now has, according to them, is not new knowledge (in the sense of knowing that something is the case that she did not know before) but a new skill; that is to say, she knows how to use a new (subjective) route for arriving at facts she already knew to be the case by another (objective) method (Lewis 1988). Jackson responded to them by accepting that she would acquire new abilities but insisting that she would gain new factual knowledge as well. The other defense of physicalism against the knowledge argument came from Paul Churchland, whose eliminativist views were studied in Chapter 5. He said that Jackson committed a logical fallacy by using the term “knowing about” in a slightly different way when talking about physical brain states from when he was talking about conscious experiences. In the former case, it related to what Churchland termed “knowledge by description” and in the latter case to what he labeled “knowledge by acquaintance” (Churchland 1985). In terms of the numbered outline of the knowledge argument given above, Churchland’s objection can be summarized as follows. The logical pattern of Jackson’s knowledge argument is this: 1. Mary knows everything physical about X. 2. Mary does not know everything about X, therefore 3. There is something about X that is not physical.
But if, as Churchland claims, there is an equivocation in the meaning of "know about," then premise 1 and premise 2 need to be amplified thus:
1a. Mary, before her release, knows everything physical there is to know by description about seeing red (because that is stipulated in the way the story is set up). But
2a. Mary, before her release, does not know everything there is to know by acquaintance about seeing red (because she will learn something new about it on her release).
The logical pattern has now changed to this:
1a. Mary knows everything physical about X.
2a. Mary does not know everything about Y.
Since X and Y are different things, no conclusion regarding them can be drawn from the two premises. Jackson could counter by denying the equivocation. He could insist that premises 1 and 2 both include "knowledge by acquaintance" in their use of "know about." But in that case, Churchland says, premise 1 can be true only if we assume in advance that knowledge-by-acquaintance is nonphysical (because, by the terms of Jackson's story, Mary has no knowledge-by-acquaintance of color prior to her release). Since Jackson cannot claim in advance the very thing he is trying to prove (that is, that knowledge-by-acquaintance is nonphysical), premise 1 is unwarranted, and the argument fails, even though its form is now valid. Churchland further argues that all this is more than a philosophical quibble because the description/acquaintance distinction relates to known aspects of brain physiology, differences related to the distinction between declarative and nondeclarative memory that were described in Chapter 8. Jackson himself conceded in 1998 that the knowledge argument does not refute physicalism, but there are others who still hold to it, such as William Robinson at Iowa State University, who says that admitting "knowledge by acquaintance" has physical causes does not rule out its containing "a constituent that is physically caused but not itself physical" (Jackson 1998, 77; Robinson 2001).
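The equivocation can be made explicit in elementary logical notation (my own schematic rendering, not Churchland's own symbolism). With a single, univocal "knows," Jackson's pattern is valid:

For all p: if Physical(p), then Knows(Mary, p).
There is some q such that not Knows(Mary, q).
Therefore, there is some q such that not Physical(q).

But once the two senses are distinguished, the first premise speaks of Knows-by-description and the second of Knows-by-acquaintance. Because these are now different relations, the modus tollens step from the unknowable q back to its nonphysicality is blocked, and nothing follows about whether q is physical.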
Searle, Dennett, and Chalmers
It will be helpful at this point to link up the question of qualia with that of the mind-body problem, introduced in Chapter 5. We may do so by considering three well-known philosophers of mind whose names have already been mentioned in this book. They had a sharp public exchange in the New York Review of Books in the late 1990s. One of them—John Searle—had attacked the other two—Daniel Dennett and David Chalmers—in critical reviews of their books, and the journal then
published rejoinders from each of them, together with a further response from Searle (Searle 1997). It was a very lively debate. All three start from a broadly physicalist position, so they all face the question: If qualia cannot be attributed to Descartes's "mind stuff," where are they to be located? Dennett is the most thoroughgoing physicalist among them. For him, the only things that really exist are those that can be described by objective, scientific, third-person methods; in other words, those things that make up the physical world. Since he accepts the Cartesian view that mental states in general—and qualia in particular—are not part of that physical world, he is forced to conclude that they do not exist at all. According to him, our subjective first-person experiences only seem to exist; they are illusions born of our mistaken judgments about the physical functions of our brain and nervous system. Searle, like most people, thinks that to say this is plain daft. Consequently he refers to Dennett's best-known book Consciousness Explained (1991) as Consciousness Denied. Other opponents regularly parody the title as Consciousness Explained Away. David Chalmers, in his book The Conscious Mind (1996), agrees with Dennett that conscious experience is not part of the physical world, but he is unwilling to deny its existence altogether. So he is forced into the position of claiming that it is a nonphysical feature of the world, an additional fundamental property of the universe, alongside mass and electric charge and space-time. This is a desperate remedy. In the first place, no philosopher likes defying Occam's Razor by multiplying the number of basic entities in the world. As we saw earlier, it is a generally accepted principle in philosophy that when any problem arises, a solution that avoids adding new fundamental features is always considered more elegant and more likely to be true than one that requires novel additions. Second, Chalmers's suggestion here implies panpsychism, the idea that everything in the universe is (at least potentially) conscious, and—as you may imagine—this idea is not exactly top of the philosophical pops. Indeed, it is a view that meets with ridicule in many quarters, despite having a long and honorable philosophical pedigree. Chalmers faces this head on. He says that he does not see panpsychism as an inevitable consequence of his views, but that if it should turn out that way, then he could live with it. After all, he writes, "perhaps a thermostat, a maximally simple information processing structure, might have a maximally simple experience?" (Chalmers 1995, 217). In picking on the thermostat, Chalmers is adopting a favorite example at the fringes of claims about consciousness. For instance, John Searle tells elsewhere how John McCarthy, the inventor of the term
"artificial intelligence," once told him that his thermostat had beliefs, three of them, to be exact: it's too hot in here, it's too cold in here, and it's just right in here. Searle finds the views of both Chalmers and McCarthy ridiculous (see Searle 1984, 30). The third problem for Chalmers is that the solution he offers is, by his own admission, a variety of dualism. It requires "bridging principles" to link conscious experiences to the physical processes of the brain and nervous system with which they are associated, but of which, by definition, they are not a part. The need for these rules governing the correlation between mental and physical states is a familiar awkwardness in all dualistic accounts of the mind-body relation. Searle pillories Chalmers's whole edifice as managing to combine the worst of two worlds, neither of which he likes and which are normally regarded as being mutually exclusive. These two are dualism, which posits a radical difference between the mental and the physical, and functionalism (the view outlined in Chapter 6 that says systems with identical functional organization will have identical conscious states). So Chalmers is almost impossible to categorize. When he relates conscious states to function, he looks like a standard moderate physicalist; but his flirting with dualism appears to take him out of the physicalist camp altogether. He is one of the rising stars among the philosophers of consciousness, a full generation or more younger than the old archrivals Dennett and Searle. I think it is his refusal to let go of any potentially valuable insights, even if they do seem contradictory, that makes his contribution so valuable and keeps it central to the debate. When Searle leaves off criticizing others and comes to state his own position, it is arguable that he fares no better than his rivals. First, he says the physical brain causes conscious experience and that consciousness is in turn "realized" in the brain. The two are inseparable, just like the property of liquidity and the substance water, which both produces it and also is the medium in which it is realized. That makes Searle a physicalist. But he also insists that consciousness cannot be "reduced" to its physical substrate (that is, you cannot say consciousness is "nothing but" brain cells firing, as Dennett claims). That means he is not a full-blooded physicalist. But neither will he allow consciousness to be treated as an additional nonphysical feature of the world (as Chalmers suggests). Searle is in a cleft stick. To avoid Chalmers's dualism, he relies on the assertion that consciousness has what he calls "a first-person ontology" (Searle 1997, 212), which he explains as meaning that it "only exists when it is experienced" (Searle 1997, 213). But against
Dennett’s reductionism he has to maintain that consciousness is one of “those real features of the world that exist independently of observers” (Searle 1997, 211). It seems to me that he is trying to have his cake and eat it too: if consciousness only exists when it is experienced, then it cannot exist independently of being observed, since to experience something is to observe it. But Searle seeks to avoid this contradiction by drawing a distinction between experience and observation, so that the experiencer of qualia does not count as an observer of them. I have challenged him on this distinction and he has tried to explain to me why he is not contradicting himself. But I could not follow his argument, and I am not alone in that. It is not possible to talk or read about qualia and the philosophy of consciousness and avoid any mention of “zombies.” They are not the Haitian living dead but imaginary characters dreamed up by philosophers of mind to help them tease out the issues involved in their discussions. Try to imagine, they say, a creature that is physically identical to you in every way, goes through exactly the same actions, and says exactly the same things as you but that lacks conscious experience. Such a creature would be your “zombie twin” (for the earliest account of zombies I can find, see Kirk 1974). Zombies are a kind of litmus test for philosophies of mind. If you can imagine a creature that is physically, behaviorally, and functionally identical with you and yet without consciousness, then—so the argument goes—you cannot be a full-blooded physicalist. Daniel Dennett, as full-blooded a physicalist as you could wish to meet, certainly cannot imagine such a creature and has written of “the unimagined preposterousness of zombies” (Dennett 1995). But most philosophers, including both John Searle and David Chalmers, do find it possible at least to imagine a zombie, and some—Chalmers in particular—have made a lot of the “conceivability of zombies” in arguing for the reality of qualia (Botterell 2001). Nonphilosophers are liable to retort that being able to imagine a zombie has nothing to do with anything. After all, it is quite easy to imagine something that does not exist, from a flying pig to a blue banana, or even a wooden puppet that is fully conscious. Surely physicalism is at risk only if zombies can actually be shown to exist in addition to being imagined. But conceivability is a sophisticated concept in philosophy and cannot simply be equated with imagination, as ordinarily understood. Another question that divides philosophers is whether qualia (as opposed to the things that cause them) can exist apart from when they are being experienced. Searle, for example, would say that although light of a particular wavelength may exist unobserved, the quality of, 226 • What Is It Like to Be Conscious?
let us say, redness, which is normally associated with that wavelength, does not exist unless the light is actually being seen by a color-visioned person. It can be argued, however, that qualia do exist "unperceived," and one of the philosophers who maintains this position is Michael Lockwood of Oxford University (see Feser 1998; Lockwood 1998). If he is right, it has major implications. If phenomenal qualities (as he calls qualia) can exist unperceived, some believe that a way would then be open to a general position that is neither physicalist nor dualist but still accepts the reality of the physical world. Here is an indication of how this rather complicated argument goes. We have seen that physicalism holds that matter alone exists and that mental phenomena are just a special class of material phenomena. (Its mirror image, "idealism," says that mind alone exists and that so-called material objects are simply the products of our imagination.) Dualism, of course, takes mind and matter to be equally real and fundamental. Lockwood bypasses all these options. In the view he defends, qualia belong to a more basic category than either the mental or the physical; they are the intrinsic qualities of all the objects that make up the world, whether mental or physical. Such a position holds (against dualism) that there is only one basic kind of "stuff" and that it is neither mental nor physical but something neutral between them, out of which both mind and matter are constructed. As I mentioned when discussing the mind-body problem in Chapter 5, a distinguished recent supporter of this "neutral monism," as he called it, was Bertrand Russell, and an earlier proponent was the seventeenth-century philosopher Baruch Spinoza. Lockwood's ideas do have some similarities with Chalmers's proposal that consciousness is a "fundamental non-physical feature of the world," but Chalmers regards qualia as the nonphysical aspect of an underlying "something," which also has a physical aspect (he suggests, somewhat implausibly, that this something is "information"). Lockwood, by contrast, says that qualia are themselves the underlying "something" that we experience under the two aspects of the physical and the mental. This view has led to Lockwood, like Chalmers, being accused of setting out on the road to panpsychism, but it is a destination that Lockwood seems more eager than Chalmers to avoid. For completeness, it is appropriate to recall here the robustly physical understanding of qualia held by New York neuroscientist Rodolfo Llinás (see Chapter 7). He is an out-and-out physicalist—"Neuronal activity and sensation are one and the same event"—but unlike Dennett, he rates qualia as being absolutely real and of fundamental importance (Llinás 2001, 218). For Llinás, qualia are properties even of
single-cell animals, an essential half of their combined sensory and motor system that enables them, like us, to interact with their environment. He draws attention to the view, widely held among philosophers, that animals either have no conscious experience or that, if they have it, they have no use for it, and he asks whether in view of this we should deny that qualia exist. His answer is emphatic: “Qualia, from the perspective of the workings of the brain, constitute the ultimate bottom line. Qualia are that part of the self that relates (back) to us! It is a fantastic trick! One cannot operate without qualia; it is a property of mind of monumental importance” (Llinás 2001, 221).
The Hard Problem of Consciousness

The time has come to begin drawing together the various threads of debate covered in this volume and, in so doing, to return to the paradox with which we opened: the perceived impossibility of studying first-person subjective experience using the tools of third-person objective science. It is a problem that troubles philosophers more than scientists. We saw in the last paragraph that a neuroscientist like Llinás can conduct his research on the basis of an identity theory of brain and conscious mind. He is apparently untroubled by the way philosophers have almost entirely abandoned the straightforward mind-brain identity theory associated with Ullin Place and others in the middle of the twentieth century (see Chapter 5). But the philosophical problem will not go away by being ignored, because it is not just a problem about methods of researching consciousness but about the very nature of consciousness itself and its relation to the rest of the world.

There are various ways of formulating the difficulty, such as the talk, introduced in the early 1980s by Joseph Levine of Ohio State University, of an “explanatory gap” between physical brain states and mental states (Levine 1983). But the most commonly used expression is one popularized by David Chalmers, when he drew a distinction between the many “easy problems” associated with consciousness research and the one “hard problem.” The hard problem is how and why in the physical universe there should be any such thing as conscious experience at all (see Chalmers 1995; Document 9, this volume).

Chalmers characterizes the so-called easy problems as being concerned with cognitive abilities and functions and how we might explain them. A satisfactory explanation of these easy problems need do no more than specify a mechanism that can perform the function. Examples he gives include the ability to discriminate, categorize, and react to environmental stimuli; to integrate information; and to
access and report on internal states. The mechanisms in some of these cases are already sufficiently well understood for them to be simulated on computers and put to practical use; for instance, in face and character recognition. And even if some mechanisms still elude us (Chalmers suggests that it might take a couple of centuries to complete the empirical work), at least we know in principle how to set about discovering them.

By contrast, the additional question, “Why in humans is the performance of these functions accompanied by conscious experience?” belongs to a quite different order. That is the hard problem. The fact that we can produce examples of humans functioning without conscious awareness of the performance involved—cases of blindsight, for instance, or tying one’s shoelaces—only makes the question harder. Such cases are taken by some people to imply that consciousness is not only hard to explain but not really even necessary.

Reactions to Chalmers’s posing of the hard problem fall into a number of fairly clear categories. One response is to deny the hard/easy distinction altogether. It will come as no surprise that one of those who takes this line is Daniel Dennett. Far from being a useful contribution to research, the attempt to sort the easy problems of consciousness from the really hard one is, he says, “a major misdirection of attention, an illusion-generator” (Dennett 1996, 4). He flatly denies Chalmers’s claim that a full explanation of functions does not suffice for the explanation of experience. Their disagreement here is all of a piece with their opposing attitudes toward zombies. Dennett finds the very idea of zombies preposterous (because to him consciousness is nothing over and above function), whereas Chalmers finds them conceivable, precisely because he does not see how a functional explanation alone can engage the question of a conscious accompaniment to the process.

Valerie Hardcastle, a philosopher at Virginia Polytechnic Institute and State University in Blacksburg, sees no hope of this particular gap—between explanations acceptable to those who approach the question from Dennett’s perspective and those who take Chalmers’s position—ever being bridged. This, she says, is because explanations are what she calls “social creatures.” They are designed to satisfy particular people asking particular questions in particular historical and philosophical contexts. She and Dennett are both materialists, trying to explain to each other what consciousness is within an agreed reductionist late-twentieth-century scientific framework. Only those who, in her words, “antecedently buy into this project,” stand any chance of being satisfied by the explanations it offers (Hardcastle 1996, 13).
Hardcastle is not saying that all reductionists will agree with any given materialist explanation of consciousness, but at least they will agree that it is the right kind of thing to count as an explanation. Those like Chalmers, however, who incline to a dualist or nonreductive approach, will never be able to accept as complete any description of consciousness that excludes the possibility of something existing over and above the low-level physical components of a system. Equally, a Chalmerian explanation—precisely because it will have to have some added element in order to satisfy his own nonreductionist convictions—will never be acceptable to her or Dennett. No amount of new evidence or more painstaking argument will make the slightest difference. As she implies, something more akin to a religious conversion from one approach to the other is the only thing that will bring about a change of heart. Chalmers himself describes Hardcastle’s assessment of the situation as “far too bleak” and challenges her view that there can be no useful debate between the reductionist and nonreductionist camps (Chalmers 1997, 15).

Not all those who question Chalmers’s hard/easy division do so from the same standpoint. A collection of essays on this topic in the mid-1990s contained one contribution titled “There Is No Hard Problem of Consciousness” and another whose title proclaimed “There Are No Easy Problems of Consciousness.” The first title showed that one need not be as unsympathetic as Dennett to the hard problem to argue for a concentration of resources on the more practical scientific questions that Chalmers labels as easy. It may well turn out that when the easy problems are solved, the hard one will be found to have dissolved (O’Hara and Scutt 1996). A quite different line was taken by philosopher Jonathan Lowe at Durham University in England, who wrote the second title. In his view, the range of mental abilities and functions that Chalmers allowed to be explicable in terms of computational or neural categories (that is, his easy problems) already represented a sellout to reductive materialism. In Lowe’s opinion, there is such an intimate intertwining of experience and thought that to try to isolate any cognitive function from our capacity for phenomenal consciousness is a doomed exercise. Consequently, none of the alleged easy questions will turn out to be genuinely independent of the hard problem (Lowe 1995).

I have begun this survey of reactions to the hard/easy split proposed by David Chalmers with those who, from whichever side they come, find fault with it. But they are in a minority within the consciousness research community. Among the majority who broadly welcome the distinction as a helpful clarification, there are two
further kinds of responses to the challenge of the hard problem. On the one hand are those researchers whose instinct is to back away from trying to know something that they regard as being beyond our comprehension; on the other are those who are determined to rise to the occasion by devising a genuinely scientific approach to the study of subjective experience.

The first group has been dubbed “the new mysterians” by Owen Flanagan of Duke University, a critic of their position. Their best-known representative is the English philosopher Colin McGinn, who has been professor of philosophy at Rutgers University in New Jersey since moving from Oxford University in 1988. Flanagan’s tag “mysterian” should not be taken to imply that McGinn himself, or others like Nagel who share his pessimism about ever cracking the hard problem, have a mystical or religious view of consciousness. They do not, and neither do they doubt that consciousness has a natural physical explanation. McGinn specifically says in one place that “consciousness cannot arise by magic; it must have some basis in matter” (McGinn 1999, 99). It is the capacity of the human mind to understand its own origin and workings that they question.

It might be thought—since he does not believe in any mystical sources for human consciousness—that McGinn’s conviction of its being beyond our understanding must arise from a belief that it is terribly complex. But that is not so either. In his book The Mysterious Flame (1999), he is at pains to distance himself from the idea that consciousness is some kind of evolutionary pinnacle or the most impressive piece of organism design to date. On the contrary, he writes, “Consciousness, I believe, is biologically primitive and simple, comparatively speaking” (McGinn 1999, 62). There is more than a hint here of Rodolfo Llinás’s scientific influence.

So if it is not mystical and not overly complicated, why is he so sure that consciousness is beyond our understanding? What is it that drives McGinn, in the same book, to say on the one hand that having a brain is what enables us to have a mental life and that the brain is the “seat of consciousness” (McGinn 1999, 4) and on the other hand that the bond between the mind and the brain is a deep mystery, “a mystery that human intelligence will never unravel”? (McGinn 1999, 5). The answer lies in something he calls “cognitive closure” (McGinn 2002, 182; my discussion of McGinn is informed by Ross 2002).

To understand the point he is making here, it is necessary to distinguish between two different reasons why someone might fail to understand something. I cannot read a word of the Chinese language or make any sense of the equations of higher mathematics, not because the human brain in general is not competent to handle such things,
but because it just so happens that I have never learned Chinese, and I have forgotten all I was ever taught at school regarding mathematics. I am ignorant in both these fields, but there is nothing inherently impossible in my mastering either or both of them. That is not the kind of ignorance that McGinn has in mind when he speaks of cognitive closure. He is referring to a more deep-seated inability, something in the very makeup of the human brain that means “the objective nature of the brain is not exhausted by our conception of it” (McGinn 1999, 66). This inability to grasp the true nature of the brain and therefore of consciousness is not like the ignorance of a child, who will learn as it gets older, or the failure of the ancient Romans to grasp the essentials of atomic physics, a failing overcome by their biologically identical descendants 2,000 years later. It is more like the inability of my cat to learn a foreign language or to grasp the basic principles of differential calculus. In cases like these, there is—claims McGinn—something about the equipment that is just not up to the job, and no amount of time or effort will make the slightest difference. That is cognitive closure. It is because of a failure of this kind, the new mysterians say, that humans will never crack the hard problem of consciousness.

McGinn does not see this as a reason to throw up our hands in despair. There is no reason why human beings, biologically constituted as we are, should be able to understand the mystery of consciousness. It is nothing to be ashamed of or upset by. It is just a fact of life. He tells us that this truth came to him as he lay in bed turning these things over in his mind. “It was one of those flashes of insight,” he says, “that I had read about in other people’s memoirs. Maybe the reason we are having so much trouble solving the mind–body problem is that reality contains an ingredient that we cannot know” (McGinn 2002, 182).

The trouble with sudden revelations is that they can be very difficult to get other people to take seriously. Critics ask why it is, even if we accept the concept of human cognitive closure with respect to certain topics, that consciousness should be one of its victims. One way McGinn tries to answer this is by exploring the concept of space. He starts by tackling the Cartesian argument that consciousness does not, on the face of it, have a place in the ordinary spatial world. This view can, of course, be challenged. A visual experience, E, may be correlated with a set of neural structures and events, N, which are locatable in a certain region of the brain and take up a particular space in the skull. Being less specific, I may associate my own consciousness with the region of space occupied by my own body rather than that
occupied by yours, and within my own body, I locate my thoughts closer to my head than to my feet. But, says McGinn, we may grant such elementary points and still be a long way from “undermining the intrinsic non-spatiality of the mental” (McGinn 1995, 221).

In the first place, we do not make judgments about the location of conscious events by direct perception of them but by association with the physical objects or events that we think of as causing them. If at this moment, I am inclined to locate my thoughts about this paragraph in the physical region of the desk at which I am writing, that is because my eyes and ears and brain are at this place. As McGinn puts it, mental location is derivative, or parasitic upon physical location. And in the second place, insofar as I can be said to perceive my mental states anywhere, it tends to be other than in my head. At this moment, I instinctively locate my visual awareness of the clouds as being up in the sky and my awareness of an itch on my arm as being a few inches above my elbow. But a moment’s further thought reminds me that conscious states do not actually occupy the space where we perceive them to be, as shown by the “phantom limb” effect, in which pain is felt in a part of the body that has been amputated or maybe never grew in the first place. From these and other considerations, McGinn concludes—in a somewhat quaint phrase—that “consciousness is not spatially well-behaved” and that consequently Descartes’s view that it has no location in space remains well founded (McGinn 1995, 223).

That is a decidedly awkward conclusion for someone like McGinn, who as a materialist is committed to the view that the mental must have its origin and basis in matter. Faced with this situation, classical philosophy of mind has insisted that something has to give. Either we keep the nonspatial character of the mental and abandon the claim to its material origins (that is what the Cartesian dualist does), or else, if we must cling to the assertion that the mind emerges from physical matter, then we have to abandon its apparently nonspatial character. This second view is the standard materialist one. It draws an analogy with our perception of objects like wood and stone, which appear in ordinary circumstances to be solid but are shown by experiments in atomic physics to be made up chiefly of empty space. In a parallel way—it is argued—our perception that consciousness is nonspatial is an illusion.

McGinn refuses to adopt either of these traditional expedients. Instead he identifies a third option, which is nothing less than a root-and-branch overhaul of physics. There must, he says, be a radical incompleteness in our current view of physical reality, since that view is unable to explain how physical reality has given rise to consciousness, with its nonspatial character.
McGinn claims that this failure can only be accounted for if there are aspects of the brain that are unrepresented in our current physical science. Given that mind has emerged from matter, the matter from which it has emerged must have properties (currently hidden from us) that make it possible for the nonspatial to develop from the spatial. Consequently, if we are ever to explain consciousness, we need to dig down to a more fundamental physical level than that of neurons and brain cells. In fact, says McGinn, the only way to solve the mind-body problem is to come up with a new conception of space itself.

This in itself need not be thought of as an insuperable problem. Just such a radical overhaul happened in the seventeenth century with the discoveries of Isaac Newton and again in the twentieth century with the relativity theories of Albert Einstein. But there is in the present situation a crucial new factor that, according to McGinn, puts this next reshaping of the concept of space into a class all of its own. A generation ago it was convincingly shown by the philosopher Peter Strawson that the entire structure of human thought is based upon a conception of space in which objects are separately and individually arrayed. Thus the only way in which we can even think about something nonspatial (such as consciousness) is to impose upon it an alien “conceptual grid” provided by our idea of matter in space. Unlike the revolutions of Newton and Einstein, the proposed McGinn revolution in our understanding of space would involve removing that conceptual grid itself, thus making thought about it impossible. We would, in effect, have sawed off the branch we were sitting on. That is what makes McGinn so certain that the problem of consciousness can never be solved by human minds. It is the cause of cognitive closure.

We turn finally to a group of consciousness researchers who accept David Chalmers’s identification of the hard problem of consciousness but do not accept that it is utterly beyond our grasp to solve it. Here I take as my first representative the biologist Francisco Varela (1946–2001), who until his untimely death was senior researcher with the National Center for Scientific Research in Paris, France. His commitment to the notion of embodied consciousness was noted in Chapter 7, and it was his firm conviction that only a rigorous method of exploration and analysis of first-person experience could ever unlock the secrets of consciousness. The basic discipline of his method was supplied by the German philosopher Edmund Husserl (1859–1938) and was characterized by Varela as “a style of thinking” (Varela 1996; see also Varela and Shear 1999). Husserl’s method is called “phenomenology,” and Varela named his adaptation of it “neurophenomenology.” This rather ungainly word was
deliberately chosen as something of a European counterblast to the coining by Patricia Churchland of the term “neurophilosophy,” a word that for Varela summed up the arid Anglo-American approach to the philosophy of mind. Varela felt passionately the need to take “lived, first-hand experience” as the proper “field of phenomena, irreducible to anything else,” that alone could yield fruitful results. If such a gentle and generous scholar could ever be said to be scathing, it was when he spoke of the “theoretical fix” or “extra ingredient” that others might seek to provide the bridge between mind and body (Varela 1996, 330).

Some critics saw his Husserlian method of careful description of experienced phenomena as a resurrection of the old and discredited introspectionism of Edward Titchener and Wilhelm Wundt (see Chapter 1), but it had an unbroken tradition as old as and parallel to theirs, including many honored names, of which that of the Frenchman Maurice Merleau-Ponty (1908–1961) is perhaps the most significant. Detailed accounts of the application of the technique of epoche, or “bracketing” of experience, to access and record with precision our phenomenal awareness defy brief description. Varela himself summed up the immensity of the task when he reframed the hard problem in two senses: “(1) It is hard to train and stabilize new methods to explore experience, (2) it is hard to change the habits of science in order for it to accept that new tools are needed for the transformation of what it means to conduct research on mind and for the training of succeeding generations” (Varela 1996, 347). Among those who are persevering with this arduous work are psychiatrist Jean Naudin in Marseilles, France, and Varela’s former colleagues in Paris, psychologist Pierre Vermersch and Natalie Depraz.

A further example of the disciplined first-person approach to discovering “what it is like to be conscious” is provided by the use of spiritual and meditative techniques, especially those of Buddhism and the Hindu or Vedic traditions. When set alongside older “scientific” introspectionism and also continental phenomenology, these subjective methods display certain common features. First, to be of use as tools for the study of consciousness, they all require what has been called “a moment of suspension and redirection,” when the attention moves from the content of the experience to the mental process that is taking place. Second, they depend upon specific training to pursue this initial suspension into a fuller content, and here the role of a second person to engage with the experiencer is important. One might say that the skill of the “mediator” is as important as that of the “meditator” in applying these methods. And third, in all these approaches there needs to be meticulous recording, discussing, and validating if they are to be truly scientific investigations.

[Figure: A dhyani Buddha figure seated in a meditative attitude, illustrating the meditative first-person approach. (Library of Congress)]
In Place of a Conclusion

In a much-quoted entry in The International Dictionary of Psychology (1989), Professor Stuart Sutherland dismissed consciousness in the following single paragraph:

Consciousness: The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be
aware of the external world. Consciousness is a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.
I trust that the reader who has persevered with me through the present volume will demur at least from the final sentence. But in other respects there has been much in the story told here to account for Sutherland’s dismissive remarks, even if not to justify them. My intention has been to help newcomers to this field of study to find their bearings and be stimulated to follow the unfolding science of consciousness.

Given the pace of consciousness studies and the way in which new emphases can be detected in research programs almost weekly, it would be impossible—and very foolish—to attempt any kind of conclusion. The subject is very far from concluded, which is well demonstrated by the way that this final chapter (without my designing it as such) has naturally picked up as still open questions many of the issues set out in the historical overview that made up the first chapter. But insofar as it is possible to foresee the shape of future developments, then it seems likely that the continuing growing together of many disciplines and approaches will be the hallmark of consciousness research over the next decade, with philosophers and scientists continuing to stretch and challenge each other in creative and cooperative rivalry.
References
Alter, T. N.d. Knowledge argument. In A Field Guide to the Philosophy of Mind, ed. M. Nani and M. Marraffa. http://host.uniroma3.it/progetti/kant/field/ka.htm. (cited 23 April, 2003).
Armstrong, D. M. 1968. A Materialist Theory of Mind. London: Routledge and Kegan Paul.
Atkinson, R. L., and R. M. Shiffrin. 1968. Human memory: A proposed system and its control processes. In The Psychology of Learning and Motivation, ed. K. W. Spence and J. T. Spence. Vol. 2. London: Academic Press.
Baars, B. J. 1988. A Cognitive Theory of Consciousness. Cambridge: Cambridge University Press. http://www.nsi.edu. (cited 23 April, 2003).
———. 1997. In the theatre of consciousness: Global workspace theory, a rigorous scientific theory of consciousness. Journal of Consciousness Studies 4 (4): 292–309.
———. 1997a. In the Theater of Consciousness. New York: Oxford University Press.
———. 2001. There are no known differences in brain mechanisms of sensory consciousness between humans and other mammals. Animal Welfare 10: S31–40.
———. 2002. The conscious access hypothesis: Origins and recent evidence. Trends in Cognitive Sciences 6: 47–52.
Baddeley, A. D. 1966. The influence of acoustic and semantic similarity on long-term memory for word sequences. Quarterly Journal of Experimental Psychology 18: 302–309.
———. 1990. Human Memory: Theory and Practice. Oxford: Oxford University Press.
———. 2000. The episodic buffer: A new component of working memory? Trends in Cognitive Sciences 4: 417–423.
Baddeley, A. D., and G. J. Hitch. 1974. Working memory. In The Psychology of Learning and Motivation, ed. G. A. Bower. New York: Academic Press.
Baddeley, A. D., N. Thompson, and M. Buchanan. 1975. Word length and the structure of memory. Journal of Verbal Learning and Verbal Behaviour 14: 575–589.
Bechtel, W., P. Mandik, J. Mundale, and R. S. Stufflebeam, eds. 2001. Philosophy and the Neurosciences: A Reader. Malden and Oxford: Blackwell.
Begley, S. 2001. Religion and the brain. Newsweek, May 7.
Bell, J. 1964. On the Einstein Podolsky Rosen paradox. Physics 1 (3): 195.
Bennett, M. R. 1997. The Idea of Consciousness: Synapses and the Mind. Amsterdam: Harwood Academic Publishers.
Blackmore, S. 2003. Consciousness: An Introduction. London: Arnold.
Blakemore, C. 1973. The language of vision. New Scientist 58: 674–677.
———. 1988. The Mind Machine. London: BBC Publications.
Bliss, T. V. P., and T. Lømo. 1973. Long-lasting potentiation of synaptic transmission in the dentate area of the anaesthetized rabbit following stimulation of the perforant path. Journal of Physiology 232: 331–356.
Block, N., O. Flanagan, and G. Güzeldere, eds. 1997. The Nature of Consciousness: Philosophical Debates. Cambridge: MIT Press.
Bogen, J. 1995. On the neurophysiology of consciousness, I and II. Consciousness and Cognition 4: 52–62, 137–158.
Bohr, N. 1934. Atomic Physics and Human Knowledge. Cambridge: Cambridge University Press.
Boring, E. G. 1929/1950. A History of Experimental Psychology. New York: Appleton-Century-Crofts.
Born, M., ed. 1971. The Born-Einstein Letters. London: Macmillan.
Botterell, A. 2001. Conceiving what is not there. Journal of Consciousness Studies 8 (8): 21–42.
Bradshaw, R. H. 1998. Consciousness in non-human animals. Journal of Consciousness Studies 5 (1): 108–114.
Buckley, K. W. 1989. Mechanical Man: John Broadus Watson and the Beginnings of Behaviorism. New York: Guilford Press.
Cajal, R. 1989. Recollections of My Life, trans. E. Horne and J. Cano. Cambridge: MIT Press.
Cardena, E., S. Jay, and S. Krippner, eds. 2000. Varieties of Anomalous Experience: Examining the Scientific Evidence. Washington, DC: American Psychological Association.
Chalmers, D. J. 1995. Facing up to the problem of consciousness. Journal of Consciousness Studies 2 (3): 200–219.
———. 1996. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
———. 1997. Moving forward on the problem of consciousness. Journal of Consciousness Studies 4 (1): 3–46.
———. 2000. What is a neural correlate of consciousness? In Neural Correlates of Consciousness: Empirical and Conceptual Questions, ed. T. Metzinger. Cambridge: MIT Press. http://www.u.arizona.edu/~chalmers/papers/ncc2.html. (cited 23 April, 2003).
Churchland, P. M. 1981. Eliminative materialism and the propositional attitudes. Journal of Philosophy 78: 67–90.
———. 1985. Reduction, qualia, and the direct introspection of brain states. Journal of Philosophy 82: 8–28.
Churchland, P. S. 1986. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge: MIT Press.
Churchland, P. M., and P. S. Churchland. 1991. Intertheoretic reduction: A neuroscientist’s field guide. Seminars in the Neurosciences 2: 249–256.
Cohen, J. 2002. The grand grand illusion illusion. Journal of Consciousness Studies 9 (5–6): 141–157.
Cotterill, R. M. J. 1995. On the unity of conscious experience. Journal of Consciousness Studies 2 (4): 290–312.
———. 1998. Enchanted Looms: Conscious Networks in Brains and Computers. Cambridge: Cambridge University Press.
———. 2001. Evolution, cognition, and consciousness. Journal of Consciousness Studies 8 (2): 3–17.
———. 2003. Cyberchild. Journal of Consciousness Studies 10 (4–5): 31–45.
Cottingham, J. G., R. Stoothoff, A. Kenny, and D. Murdoch, eds. 1985–1991. The Philosophical Writings of Descartes. Cambridge: Cambridge University Press.
Craik, F. I. M., and R. Lockhart. 1972. Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behaviour 11: 671–684.
Crick, F. 1994. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Simon and Schuster.
———. 1996. Visual perception: Rivalry and consciousness. Nature 379: 485–486.
Crick, F., and J. Clark. 1994. Interview. Journal of Consciousness Studies 1 (1): 10–17.
Crick, F., and C. Koch. 1990. Towards a neurobiological theory of consciousness. Seminars in the Neurosciences 2: 263–275.
Damasio, A. R. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Grosset/Putnam.
———. 1999. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace.
d’Aquili, E. G., and A. B. Newberg. 1999. The Mystical Mind: Probing the Biology of Religious Experience. Minneapolis: Fortress Press.
Davidson, D. 1970. Mental events. In Experience and Theory, ed. L. Foster and J. W. Swanson. London: Duckworth.
Dennett, D. C. 1984. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge: MIT Press.
———. 1991. Consciousness Explained. New York: Little, Brown.
———. 1995. The unimagined preposterousness of zombies. Journal of Consciousness Studies 2 (4): 322–326.
———. 1996. Facing backwards on the problem of consciousness. Journal of Consciousness Studies 3 (1): 4–6.
Dennett, D. C., and M. Kinsbourne. 1991. Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences 15: 183–247.
Di Pellegrino, G., L. Fadiga, L. Fogassi, V. Gallese, and G. Rizzolatti. 1992. Understanding motor events: A neurophysiological study. Experimental Brain Research 91: 176–180.
Doty, R. W. 1998. Five mysteries of the mind and their consequences. In Views of the Brain: A Tribute to Roger W. Sperry, ed. A. Puente. Washington, DC: American Psychological Association.
Eccles, J. C. 1994. How the Self Controls Its Brain. New York: Springer.
Eckhorn, R., R. Bauer, W. Jordan, M. Brosch, W. Kruse, M. Munk, and H. J. Reitboeck. 1988. Coherent oscillations: A mechanism of feature linking in the visual cortex? Biological Cybernetics 60: 121–130.
Edelman, G. 1987. Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books.
———. 1992. Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic Books.
Edelman, G., and S. Levy. 1994. Interview: Dr. Edelman’s brain. The New Yorker, May 2: 62–73.
Einstein, A., B. Podolsky, and N. Rosen. 1935. Can quantum-mechanical description of physical reality be considered complete? Physical Review 47: 777. Reprinted in 1983 in Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek. Princeton: Princeton University Press.
Elster, J. 1999. Alchemies of the Mind: Rationality and the Emotions. Cambridge: Cambridge University Press.
Feser, E. 1998. Can phenomenal qualities exist unperceived? Journal of Consciousness Studies 5 (4): 405–414.
Feyerabend, P. 1963. Materialism and the mind-body problem. Review of Metaphysics 17: 49–66.
Fodor, J. 1981. The mind-body problem. Scientific American 244 (1).
———. 1983. The Modularity of Mind. Cambridge: MIT Press.
———. 2000. The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. Cambridge: MIT Press.
Forman, R. K. C. 1998. What does mysticism have to teach us about consciousness? Journal of Consciousness Studies 5 (2): 185–201.
Freeman, A. 1998. Good old-fashioned sin: A neglected area of consciousness studies. Paper delivered at the “Toward a Science of Consciousness” conference, April 27–May 2, Tucson.
Freeman, W. J. 1988. Nonlinear neural dynamics in olfaction as a model for cognition. In Dynamics of Sensory and Cognitive Processing by the Brain, ed. E. Basar. Berlin: Springer.
———. 1995. Societies of Brains: A Study in the Neuroscience of Love and Hate. Hillsdale: Lawrence Erlbaum Associates.
Freeman, W. J., and J. Burns. 1996. Interview: Societies of brains. Journal of Consciousness Studies 3 (2): 172–180.
Frith, C. 2001. Commentary on Revonsuo. Journal of Consciousness Studies 8 (3): 30.
Frith, C., and S. Gallagher. 2002. Models of the pathological mind. Journal of Consciousness Studies 9 (4): 57–80.
Gage, F. H., and A. Bjorklund. 1986. Cholinergic septal grafts into the hippocampal formation improve spatial learning and memory in aged rats by an atropine-sensitive mechanism. Journal of Neuroscience 6 (10): 2837–2847.
Gallese, V., L. Fadiga, L. Fogassi, and G. Rizzolatti. 1996. Action recognition in the premotor cortex. Brain 119: 593–609.
Gibson, J. J. 1950. The Perception of the Visual World. Boston: Houghton Mifflin.
———. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.
Gilbert, W. S. 1926. The Savoy Operas. London: Macmillan.
Gray, C., and W. Singer. 1989. Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proceedings of the National Academy of Sciences USA 86: 1698–1702.
Greenfield, S. 2000. Brain Story. London: BBC Worldwide.
Greenfield, S. A. 1995. Journey to the Centers of the Mind. New York: W. H. Freeman.
Grush, R., and P. S. Churchland. 1995. Gaps in Penrose’s toilings. Journal of Consciousness Studies 2 (1): 10–29.
Güzeldere, G. 1999. There is no neural correlate of consciousness. Paper delivered at the “Toward a Science of Consciousness” conference, May 25–28, Tokyo.
Haggard, P., and B. Libet. 2001. Conscious intention and brain activity. Journal of Consciousness Studies 8 (11): 47–63.
Hameroff, S. R. 1987. Ultimate Computing. New York: Elsevier.
———. 1994. Quantum coherence in microtubules: A neural basis for emergent consciousness? Journal of Consciousness Studies 1 (1): 91–118.
Hameroff, S. R., A. W. Kaszniak, and A. C. Scott, eds. 1998. Toward a Science of Consciousness II. Cambridge: MIT Press.
Hardcastle, V. G. 1996. The why of consciousness: A non-issue for materialists. Journal of Consciousness Studies 3 (1): 7–13.
Herbert, N. 1985. Quantum Reality: Beyond the New Physics. New York: Anchor Books.
Hobson, J. A. 2001. The Dream Drugstore: Chemically Altered States of Consciousness. Cambridge: MIT Press.
Hodgson, D. 1998. Folk psychology, science, and the criminal law. In Toward a Science of Consciousness II, ed. S. R. Hameroff et al. Cambridge: MIT Press.
———. 2002. Three tricks of consciousness. Journal of Consciousness Studies 9 (12): 65–88.
Honderich, T. 1993. How Free Are You? Oxford: Oxford University Press.
———, ed. 1995. The Oxford Companion to Philosophy. Oxford: Oxford University Press.
Hubel, D. 1988. Eye, Brain, and Vision. New York: W. H. Freeman.
Hume, D. 1748/1999. An Enquiry Concerning Human Understanding, ed. T. L. Beauchamp. Oxford: Oxford University Press.
Humphrey, N. 2000. How to Solve the Mind-Body Problem. Exeter: Imprint Academic.
Hurley, S. L. 1998. Consciousness in Action. Cambridge: Harvard University Press.
Huxley, A. 1972. The Doors of Perception and Heaven and Hell. London: Chatto and Windus.
Institute of Neurology. 2002. Volunteer’s Guide to a PET Scan. London: Leopold Muller Functional Imaging Laboratory.
Ione, A. 2000. Perceptual beauty as the basis for genuine judgments of beauty. Journal of Consciousness Studies 7 (8–9): 21–27.
Jackson, F. 1982. Epiphenomenal qualia. Philosophical Quarterly 32: 127–136.
———. 1998. Postscript on qualia. In Mind, Method, and Conditionals. London: Routledge.
James, W. 1890. The Principles of Psychology. New York: Dover Publications.
———. 1904. Does “consciousness” exist? Journal of Philosophy, Psychology, and Scientific Methods 1: 477–491. Reprinted in 1943 in Essays in Radical Empiricism and a Pluralistic Universe. London: Longmans, Green, and Co., to which page citations refer.
Johnson, G. 1997. Conventional wisdom says machines cannot think. New York Times, May 9.
Jones, E. 1999. Golgi, Cajal, and the neuron doctrine. Journal of the History of the Neurosciences 8 (2): 170–178.
Kane, R. 1996. The Significance of Free Will. New York: Oxford University Press.
Kant, I. 1788/1992. Critique of Practical Reason, ed. and trans. L. W. Beck. London: Macmillan.
Kaszniak, A., ed. 2001. Emotions, Qualia, and Consciousness. Singapore: World Scientific.
Kensicki, Linda Jean. 2003. “Dr. John Watson.” http://uts.cc.utexas.edu/~kensicki/watson-pers.html. (cited 23 April, 2003).
Kim, J. 1998. Mind in a Physical World. Cambridge: MIT Press.
Kirk, R. 1974. Sentience and behavior. Mind 83: 43–60.
Kolers, P. A., and M. Grünau. 1976. Shape and color in apparent motion. Vision Research 16: 329–335.
Kosslyn, S. M. 1980. Image and Mind. Cambridge: Harvard University Press.
Kosslyn, S. M., N. M. Alpert, W. L. Thompson, V. Maljkovic, S. B. Weise, C. F. Chabris, S. E. Hamilton, S. L. Rauch, and F. S. Buonanno. 1993. Visual mental imagery activates topographically organized visual cortex: PET investigations. Journal of Cognitive Neuroscience 5: 263–287.
Krippner, S. 2000. The epistemology and technologies of shamanic states of consciousness. Journal of Consciousness Studies 7 (11–12): 93–118.
LaBerge, S. 2000. Lucid dreaming: Evidence and methodology. Behavioral and Brain Sciences 23: 962–963.
LaBerge, S., and W. Dement. 1982. Voluntary control of respiration during REM sleep. Sleep Research 11: 107.
LeDoux, J. 1996. The Emotional Brain. New York: Simon and Schuster.
Leitch, A. 1978. A Princeton Companion. Princeton: Princeton University Press.
Leopold, D., and N. Logothetis. 1996. Activity changes in early visual cortex reflect monkeys’ percepts during binocular rivalry. Nature 379: 549–553.
Levine, J. 1983. Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly 64: 354–361.
Lewis, C. I. 1929. Mind and the World-Order: Outline of a Theory of Knowledge. New York: Charles Scribner’s Sons.
Lewis, D. 1988. What experience teaches. In Proceedings of the Russellian Society. Sydney: University of Sydney.
Libet, B. 1965. Cortical activation in conscious and unconscious experience. Perspectives in Biology and Medicine 9: 77–86.
———. 1985. Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences 8: 529–566.
———. 1994. A testable field theory of mind-brain interaction. Journal of Consciousness Studies 1 (1): 119–126.
———. 1999. Do we have free will? Journal of Consciousness Studies 6 (8–9): 47–57.
Libet, B., W. W. Alberts, E. W. Wright, L. D. Delattre, G. Levin, and B. Feinstein. 1964. Production of threshold levels of conscious sensation by electrical stimulation of human somatosensory cortex. Journal of Neurophysiology 27: 546–578.
Libet, B., W. W. Alberts, E. W. Wright, and B. Feinstein. 1972. Cortical and thalamic activation in conscious sensory experience. In Neurophysiology Studied in Man, ed. G. Somjen. Amsterdam: Excerpta Medica.
Libet, B., E. W. Wright, and C. A. Gleason. 1982. Readiness potentials preceding unrestricted spontaneous and preplanned voluntary acts. Electroencephalography and Clinical Neurophysiology 54: 322–325.
Llinás, R. 2001. I of the Vortex: From Neurons to Self. Cambridge: MIT Press.
Llinás, R., and U. Ribary. 1993. Coherent 40-Hz oscillation characterizes dream state in humans. Proceedings of the National Academy of Sciences USA 90: 2078–2081.
Lockwood, M. 1998. Unsensed phenomenal qualities: A defence. Journal of Consciousness Studies 5 (4): 415–418.
Logothetis, N. K. 1999. Vision: A window on consciousness. Scientific American, November.
Logothetis, N., and J. Schall. 1989. Neuronal correlates of subjective visual perception. Science 245: 761–763.
Lowe, E. J. 1995. There are no easy problems of consciousness. Journal of Consciousness Studies 2 (3): 266–271.
Luna, L. E., and P. Amaringo. 1993. Ayahuasca Visions. Berkeley: North Atlantic Books.
Lyons, W. 2001. Matters of the Mind. Edinburgh: Edinburgh University Press.
Macmillan, M. 2000. An Odd Kind of Fame: Stories of Phineas Gage. Cambridge: MIT Press.
Magee, B. 1988. The Great Philosophers. Oxford: Oxford University Press.
Marshall, J. C. 1992. See me, feel me. Review of Bright Air, Brilliant Fire: On the Matter of the Mind by G. Edelman. Times Literary Supplement, September 4, 8.
McCrone, J. 1999. Going Inside: A Tour Round a Single Moment of Consciousness. London: Faber and Faber.
McCulloch, W. S., and W. H. Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115–133.
McGinn, C. 1991. The Problem of Consciousness: Essays towards a Resolution. Oxford: Blackwell.
———. 1995. Consciousness and space. Journal of Consciousness Studies 2 (3): 220–230.
———. 1999. The Mysterious Flame: Conscious Minds in a Material World. New York: Basic Books.
———. 2002. The Making of a Philosopher: My Journey through Twentieth-Century Philosophy. New York: HarperCollins.
Metzinger, T., ed. 2000. Neural Correlates of Consciousness: Empirical and Conceptual Questions. Cambridge: MIT Press.
Milner, B., S. Corkin, and H. L. Teuber. 1968. Further analysis of the hippocampal amnesic syndrome: 14-year follow-up study of H.M. Neuropsychologia 6: 215–234.
Milner, D., and M. Goodale. 1995. The Visual Brain in Action. Oxford: Oxford University Press.
Nagel, T. 1974/1997. What is it like to be a bat? Philosophical Review 83 (4): 435–450. Reprinted in 1997 in The Nature of Consciousness: Philosophical Debates, ed. N. Block, O. Flanagan, and G. Güzeldere. Cambridge: MIT Press.
———. 1995. Qualia. In The Oxford Companion to Philosophy, ed. T. Honderich. Oxford: Oxford University Press.
Newman, J. 1997. Putting the puzzle together, I and II. Journal of Consciousness Studies 4 (1): 47–66; 4 (2): 100–121.
Noë, A., ed. 2002. Is the Visual World a Grand Illusion? Exeter: Imprint Academic.
Noë, A., L. Pessoa, and E. Thompson. 2000. Beyond the grand illusion: What change blindness really teaches us about vision. Visual Cognition 7: 93–106.
Noë, A., and E. Thompson. In press. Are there neural correlates of consciousness? Journal of Consciousness Studies.
O’Hara, K., and T. Scutt. 1996. There is no hard problem of consciousness. Journal of Consciousness Studies 3 (4): 290–303.
O’Regan, J. K. 1992. Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology 46: 461–488.
O’Regan, J. K., and A. Noë. 2001. What it is like to see: A sensorimotor theory of perceptual experience. Synthese 129 (1): 79–103.
Panksepp, J. 1998. Affective Neuroscience. Oxford: Oxford University Press.
Penfield, W. 1958. The Excitable Cortex in Conscious Man. Springfield: Thomas.
Penfield, W., and T. Rasmussen. 1950. The Cerebral Cortex of Man: A Clinical Study of Localization of Function. New York: Macmillan.
Penrose, R. 1989. The Emperor’s New Mind. Oxford: Oxford University Press.
———. 1994a. Shadows of the Mind. Oxford: Oxford University Press.
———. 1994b. Mechanisms, microtubules, and the mind. Journal of Consciousness Studies 1 (2): 241–249.
Pinker, S. 1997a. How the Mind Works. New York: W. W. Norton.
———. 1997b. Organs of computation. Edge 3: January 11. http://www.edge.org/documents/archive/edge3.html (cited 23 April, 2003).
Place, U. T. 1956. Is consciousness a brain process? British Journal of Psychology 47: 44–50.
Popper, K., and J. C. Eccles. 1977. The Self and Its Brain. New York: Springer.
Posner, M. I. 1993. Seeing the mind. Science 262: 673–674.
Posner, M. I., and S. W. Keele. 1967. Decay of visual information from a single letter. Science 158: 137–139.
Putnam, H. 1960. Minds and machines. Reprinted in 1975 in Mind, Language, and Reality by H. Putnam. Cambridge: Cambridge University Press.
———. 1975. Mind, Language, and Reality. Cambridge: Cambridge University Press.
Quine, W. V. 1961. From a Logical Point of View. New York: Harper and Row.
Ramachandran, V. S., and S. Blakeslee. 1998. Phantoms in the Brain: Human Nature and the Architecture of the Mind. London: Fourth Estate.
Ramachandran, V. S., and W. Hirstein. 1999. The science of art: A neurological theory of aesthetic experience. Journal of Consciousness Studies 6 (6–7): 15–51.
Revonsuo, A. 2001. Can functional brain imaging discover consciousness? Journal of Consciousness Studies 8 (3): 3–23.
Richardson, A., and J. Bowden, eds. 1983. A New Dictionary of Christian Theology. London: SCM Press.
Rizzolatti, G., L. Fadiga, V. Gallese, and L. Fogassi. 1996. Premotor cortex and the recognition of motor actions. Cognitive Brain Research 3: 131–141.
Robinson, W. S. 2001. Qualia realism. In A Field Guide to the Philosophy of Mind, ed. M. Nani and M. Marraffa. http://host.uniroma3.it/progetti/kant/field/qr.htm. (cited 23 April, 2003).
Rose, D. 1996. Guest editorial: Some reflections on (or by?) grandmother cells. Perception 25 (8).
Rose, S. 1993. The Making of Memory. New York: Bantam Books.
———, ed. 1998. From Brains to Consciousness. London: Penguin Press.
Rosenthal, D. 1997. A theory of consciousness. In The Nature of Consciousness: Philosophical Debates, ed. N. Block, O. Flanagan, and G. Güzeldere. Cambridge: MIT Press, 729–753.
Ross, J. A. 2002. Review of Philosopher: A Kind of Life by Ted Honderich and The Making of a Philosopher: My Journey through Twentieth-Century Philosophy by Colin McGinn. Journal of Consciousness Studies 9 (7): 55–82.
Ryle, G. 1949. The Concept of Mind. London: Hutchinson.
Sacks, O. 1994. Interview with Anthony Freeman. Journal of Consciousness Studies 1 (2): 234–240.
Salzman, M. 2000. Lying Awake. New York: Knopf.
Schacter, D. L., and E. Tulving, eds. 1994. Memory Systems 1994. Cambridge: MIT Press.
Schulman, A. 1974. Memory for words recently classified. Memory and Cognition 2: 47–52.
Searle, J. R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3: 417–457.
———. 1984. Minds, Brains, and Science. Cambridge: Harvard University Press.
———. 1992. The Rediscovery of the Mind. Cambridge: MIT Press.
———. 1997. The Mystery of Consciousness. New York: New York Review of Books.
———. 1998. How to study consciousness scientifically. Philosophical Transactions of the Royal Society of London B 353: 1935–1942.
———. 2002. Why I am not a property dualist. Journal of Consciousness Studies 9 (12): 57–64.
Searle, J. R., and W. J. Freeman. 1998. Do we understand consciousness? Journal of Consciousness Studies 5 (5–6): 718–733.
Sellars, W. 1963. Science, Perception, and Reality. London: Routledge and Kegan Paul.
Shallice, T. 1982. Specific impairments of planning. Philosophical Transactions of the Royal Society of London B 298: 199–209.
Shanon, B. 2003. The Antipodes of the Mind: Charting the Phenomenology of Ayahuasca Experience. Oxford: Oxford University Press.
Sherrington, C. S. 1947. The Integrative Action of the Nervous System. Cambridge: Cambridge University Press.
Silberstein, M. 2001. Converging on emergence: Consciousness, causation, and explanation. In The Emergence of Consciousness, ed. A. Freeman. Exeter: Imprint Academic.
Simons, D., and D. Levin. 1998. Failure to detect changes to people during real-world interaction. Psychonomic Bulletin and Review 4: 644.
Singer, W. 1998. Processes in sensory systems. Paper presented at the symposium “Dreaming and Consciousness,” Vienna, June 10–12.
Smart, J. J. C. 1959. Sensations and brain processes. Philosophical Review 68: 141–156.
Smith, H. 2000. Cleansing the Doors of Perception: The Religious Significance of Entheogenic Plants and Substances. New York: Tarcher/Putnam.
Spence, K. W., and J. T. Spence, eds. 1968. The Psychology of Learning and Motivation. Vol. 2. London: Academic Press.
Sperling, G. 1960. The information available in brief visual presentations. Psychological Monographs 74.
Sperry, R. W. 1976. Mental phenomena as causal determinants in brain function. In Consciousness and the Brain, ed. G. G. Globus, G. Maxwell, and I. Savodnik. New York: Plenum.
Squire, L. R. 1998. Memory and brain systems. In From Brains to Consciousness, ed. S. Rose. London: Penguin, 53–72.
Stapp, H. P. 1993. Mind, Matter, and Quantum Mechanics. New York: Springer.
———. 1996. The hard problem: A quantum approach. Journal of Consciousness Studies 3 (3): 194–210.
Stewart, I. 1990. Does God Play Dice? The Mathematics of Chaos. London: Penguin.
Sutherland, S., ed. 1989. The International Dictionary of Psychology. London: Continuum International Publishing Group.
Tallis, R. 1994. Psycho-Electronics. London: Ferrington.
Toates, F. 2001. Biological Psychology: An Integrative Approach. Harlow: Pearson Education.
Turing, A. 1950. Computing machinery and intelligence. Mind 59: 433–460.
Ungerleider, L. G., and M. Mishkin. 1982. Two cortical visual systems. In Analysis of Visual Behavior, ed. D. Ingle, M. A. Goodale, and R. J. W. Mansfield. Cambridge: MIT Press.
Varela, F. J. 1996. Neurophenomenology. Journal of Consciousness Studies 3 (4): 330–349.
Varela, F. J., and J. Shear, eds. 1999. The View from Within: First-Person Approaches to the Study of Consciousness. Exeter: Imprint Academic.
Varela, F. J., E. Thompson, and E. Rosch. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge: MIT Press.
von der Malsburg, C. 1981. The correlation theory of brain function. Internal Report 81–82, MPI Biophysical Chemistry.
von Neumann, J. 1932/1955. Mathematical Foundations of Quantum Mechanics. Princeton: Princeton University Press.
Wallen, R. 1999. Response to Ramachandran and Hirstein. Journal of Consciousness Studies 6 (6–7): 68–72.
Watson, J. B. 1913. Psychology as the behaviorist views it. Psychological Review 20: 158–177.
Watt, D. G. 1999. Consciousness and emotion. Journal of Consciousness Studies 6 (6–7): 191–200.
———. 2000. Emotion and consciousness, Part 2. Journal of Consciousness Studies 7 (3): 72–84.
Waugh, N., and D. Norman. 1965. Primary memory. Psychological Review 72: 89–104.
Weber, B. 1996. Mean chess-playing computer tears at meaning of thought. New York Times, February 19.
Weiner, P. P., ed. 1951. Leibniz: Selections. New York: Scribners.
Weiskrantz, L. 1986. Blindsight: A Case Study and Implications. Oxford: Oxford University Press.
Wheeler, J. A., and W. H. Zurek, eds. 1983. Quantum Theory and Measurement. Princeton: Princeton University Press.
Wheelwell, D. 2000. Against the reduction of art to galvanic skin response. Journal of Consciousness Studies 7 (8–9): 37–42.
Wick, D. 1996. The Infamous Boundary: Seven Decades of Heresy in Quantum Physics. New York: Copernicus.
Wigner, E. 1961/1983. Remarks on the mind-body question. Reprinted in Quantum Theory and Measurement, ed. J. A. Wheeler and W. H. Zurek. Princeton: Princeton University Press.
Wilkinson, R. 2000. Minds and Bodies: An Introduction with Readings. London: Routledge.
Winkelman, M. 2000. Shamanism: The Neural Ecology of Consciousness and Healing. Westport: Bergin and Garvey/Greenwood.
Zeki, S. 1993. A Vision of the Brain. Oxford: Blackwell.
———. 1999. Art and the brain. Journal of Consciousness Studies 6 (6–7): 76–96.
Zurek, W. H. 1991. Decoherence and the transition from quantum to classical. Physics Today 44 (10): 36–44.
Further Reading
General Books on Consciousness

Baars, Bernard J. In the Theater of Consciousness: The Workspace of the Mind. New York: Oxford University Press, 1997.
Carter, Rita. Consciousness. London: Weidenfeld and Nicolson, 2002.
Cotterill, Rodney. Enchanted Looms: Conscious Networks in Brains and Computers. Cambridge: Cambridge University Press, 1998.
Crick, Francis. The Astonishing Hypothesis: The Scientific Search for the Soul. New York: Simon and Schuster, 1994.
Dennett, Daniel. Consciousness Explained. New York: Little, Brown, 1991.
Edelman, Gerald. Bright Air, Brilliant Fire: On the Matter of the Mind. New York: Basic Books, 1992.
Greenfield, Susan. Brain Story: Unlocking Our Inner World of Emotions, Memories, Ideas, and Desires. London: BBC Worldwide, 2000.
Humphrey, Nicholas. A History of the Mind. London: Chatto and Windus, 1992.
Lyons, William. Matters of the Mind. Edinburgh: Edinburgh University Press, 2001.
McCrone, John. Going Inside: A Tour Round a Single Moment of Consciousness. London: Faber and Faber, 1999.
McGinn, Colin. The Mysterious Flame: Conscious Minds in a Material World. New York: Basic Books, 1999.
Penrose, Roger. The Emperor’s New Mind. Oxford: Oxford University Press, 1989.
Searle, John R. Minds, Brains and Science. Cambridge, MA: Harvard University Press, 1984.
———. The Mystery of Consciousness. New York: New York Review of Books, 1997.

All these authors are accessible to the nonspecialist reader. Carter and McCrone are journalists and read particularly easily. Dennett, Lyons, McGinn, and Searle are academic philosophers who write in a clear and interesting fashion. On the science side, Baars and Humphrey are cognitive psychologists, whereas Cotterill, Crick, Edelman, and Greenfield lean more to the physiological side of neuroscience. Penrose is the toughest read of this selection, but as a mathematical physicist, he is the only one in the group to write authoritatively on the quantum aspects of consciousness.
Collections of Original Texts

William Bechtel et al., Philosophy and the Neurosciences: A Reader. Blackwell, 2001.
Ned Block et al., The Nature of Consciousness: Philosophical Debates. MIT Press, 1997.

These edited collections both give an opportunity to read original scholarly work not otherwise readily available to the general reader. Bechtel covers both philosophy and science; Block is limited to philosophical issues and a little psychology, but covers the full range of contemporary philosophy of mind.
Journals

The Journal of Consciousness Studies. Editor-in-chief Joseph Goguen. Published by Imprint Academic, PO Box 200, Exeter, EX5 5YX, UK. http://www.imprint-academic.com.
Trends in Cognitive Sciences. Editor Dominic Palmer-Brown. Published by Elsevier Science London, 84 Theobalds Road, London, WC1X 8RR, UK. [email protected].

For readers willing to tackle something a little more demanding and also wishing to keep up to date with new developments, there are several international journals available. These are the two best suited to the nonspecialist; both are published monthly.
Chronology
B.C.E.
440
Leucippus of Miletus. The first person to suggest that the physical world is made up of tiny particles (atoms), an influential concept in consciousness studies.
460–361
Democritus. A champion of atomic theory whose views clashed with those of Aristotle.
428–347
Plato. He believed in a nonmaterial soul or mind that was only temporarily associated with a physical body.
384–322
Aristotle. He thought of the soul as an organizing principle, not composed of matter, but only able to function in relation to matter. He also envisaged three different kinds of souls: for plants, animals, and humans.
C.E.
354–430
St. Augustine. He developed a Christian understanding of human nature based on Plato’s idea of an immortal soul that survived the death of the body.
1225–1274
St. Thomas Aquinas. Drawing on Aristotle, Aquinas reconciled Augustine’s belief in an immortal soul with the biblical doctrine of the resurrection of the body.
1564–1642
Galileo. By illustrating mistakes in Aristotle’s ideas, Galileo helped pave the way for the new thinking of the Enlightenment and a fresh approach to consciousness.
1637
René Descartes publishes Discourse on Method.
1641
René Descartes publishes Meditations.
1642
Blaise Pascal builds the first mechanical calculator.
1802
Franz Gall is banned from lecturing on phrenology in Vienna.
1805
John Dalton gives the first clear description of the atomic theory of elements.
1848
Phineas Gage survives an accident that removes much of his prefrontal cortex.
1861–1866
Paul Broca publishes papers linking speech production to a specific area of the left cerebral hemisphere.
1873
Camillo Golgi publishes “On the Structure of the Brain Gray Matter.”
1874
Carl Wernicke publishes work on speech perception confirming the association of language with the left cerebral hemisphere.
1879
Wilhelm Wundt establishes Institute of Experimental Psychology at Leipzig.
1890
William James publishes The Principles of Psychology.
1897
J. J. Thomson discovers the electron.
1905
Albert Einstein publishes the equation E = mc².
1906
The Nobel Prize in physiology or medicine is awarded jointly to Camillo Golgi and Santiago Ramón y Cajal.
1908
William McDougall publishes An Introduction to Social Psychology.
1913
On 24 February at Columbia University, John B. Watson gives an invited lecture on psychology that will become known as the Behaviorist Manifesto.
1925
Erwin Schrödinger devises his equation describing the wavelike behavior of subatomic particles.
1924
Watson and McDougall debate behaviorism at the Psychology Club in Washington, D.C.
1926
Schrödinger’s theory of quantum wave mechanics is published.
1930s
The electroencephalogram (EEG) is developed to measure the brain’s electrical activity.
1931
Kurt Gödel’s incompleteness theorem is published.
1932
John von Neumann publishes The Mathematical Foundations of Quantum Mechanics. Aldous Huxley publishes Brave New World.
1933
E. G. Boring publishes The Physical Dimensions of Consciousness.
1935
Albert Einstein, Boris Podolsky, and Nathan Rosen claim that quantum theory is incomplete.
1943
Warren McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity.”
1945
Maurice Merleau-Ponty publishes The Phenomenology of Perception.
1949
Gilbert Ryle attacks Cartesian dualism in The Concept of Mind. Donald Hebb publishes The Organization of Behavior. George Orwell publishes Nineteen Eighty-Four.
1950
Alan Turing publishes “Computing Machinery and Intelligence,” which includes the “Turing test” for machine intelligence. Wilder Penfield and Theodore Rasmussen publish The Cerebral Cortex of Man: A Clinical Study of Localization of Function. J. J. Gibson publishes The Perception of the Visual World.
1953
“Split-brain” work on cats by Ronald Myers and Roger Sperry establishes hemispheric independence. Rapid eye movement (REM) sleep is identified as a distinctive form of sleep.
1956
Ullin Place publishes “Is Consciousness a Brain Process?” George Miller publishes “The Magical Number Seven, Plus or Minus Two” on the limits of memory.
1958
David Hubel and Torsten Wiesel discover that neurons in the primary visual cortex (V1) respond to lines and edges.
1960
Hilary Putnam publishes “Minds and Machines.”
1964
John Bell shows that reality must be “nonlocal.”
1965
Benjamin Libet publishes “Cortical Activation in Conscious and Unconscious Experience.”
1969
“Grandmother cell” coined by Jerry Lettvin.
1973
Timothy Bliss and Terje Lømo discover long-term potentiation.
1974
Thomas Nagel publishes “What Is It Like to Be a Bat?”
1975
Paul Kolers and Michael von Grünau publish work on the “color phi” effect.
1978
Gerald Edelman first sketches his proposal for “neural Darwinism.”
1979
“Entheogens” coined as a term for psychoactive substances.
1980
John Searle first introduces his “Chinese Room” thought experiment in the article “Minds, Brains, and Programs.” Positron emission tomography (PET) scanning is described by Marcus Raichle in Scientific American. Stephen Kosslyn publishes Image and Mind. Lawrence Weiskrantz reports the phenomenon of blindsight.
1981
Christoph von der Malsburg proposes synchronized firing to explain perceptual binding. The Nobel Prize in Physiology or Medicine is awarded jointly to Roger Sperry, David Hubel, and Torsten Wiesel.
1982
Leslie Ungerleider and Mortimer Mishkin propose the idea of a divided visual pathway in the brain. Benjamin Libet and colleagues announce the nonconscious cortical origin of volitional actions.
1989
Roger Penrose publishes The Emperor’s New Mind.
1990
Francis Crick and Christof Koch propose 40-hertz oscillation as the neural correlate of visual awareness. Henry Stapp publishes his quantum dynamical account of consciousness.
1991
Francisco Varela, Evan Thompson, and Eleanor Rosch publish The Embodied Mind. Daniel Dennett publishes Consciousness Explained and (with Marcel Kinsbourne) the article “Time and the Observer.” Colin McGinn publishes The Problem of Consciousness.
1992
John Searle publishes The Rediscovery of the Mind. Functional magnetic resonance imaging (fMRI) scanning is reported in the Proceedings of the National Academy of Sciences.
1993
Semir Zeki publishes A Vision of the Brain. Magnetoencephalogram (MEG) scanning is reported in New Scientist. Stephen Kosslyn’s PET research is finally published.
1994
The first “Toward a Science of Consciousness” conference is held in Tucson, Arizona. The Association for the Scientific Study of Consciousness is formed. Francis Crick publishes The Astonishing Hypothesis. Antonio Damasio publishes Descartes’ Error. The first issue of the Journal of Consciousness Studies comes out.
1995
David Chalmers introduces the hard problem in his article “Facing Up to the Problem of Consciousness.” David Milner and Melvyn Goodale publish The Visual Brain in Action.
1996
The Wellcome scanner laboratory opens at the Institute of Neurology, London. Giacomo Rizzolatti and colleagues report finding “mirror neurons” in the premotor cortex. David Chalmers publishes The Conscious Mind.
1997
Chess-playing computer Deep Blue beats world champion Garry Kasparov. Daniel Simons and Daniel Levin write on “change blindness” in Trends in Cognitive Sciences. Steven Pinker publishes How the Mind Works.
1998
Vilayanur Ramachandran and Sandra Blakeslee publish Phantoms in the Brain.
2000
Jerry Fodor publishes The Mind Doesn’t Work That Way, attacking Steven Pinker and distancing himself from his earlier support for functionalism. The journal Consciousness and Emotion is first published.
2001
Neurotheology makes the cover of Newsweek International on May 7, with Sharon Begley’s article “Religion and the Brain.”
2003
Benny Shanon publishes The Antipodes of the Mind: Charting the Phenomenology of the Ayahuasca Experience. Susan Blackmore publishes Consciousness: An Introduction, the first textbook dedicated to the subject.
Glossary
Note: Items marked with an asterisk (*) are featured in the illustration of the brain on page 25.
ACETYLCHOLINE: A neurotransmitter found in many parts of the brain; it has an excitatory or arousing effect on cells.
ACTION POTENTIAL: The electrochemical signal passed along the axon when a neuron “fires.”
ADRENALINE: A neurotransmitter associated with the brain’s alert “fight or flight” state.
AFFECTIVE: Associated with the emotions.
AGNOSIA: A state in which the sensory organs are working normally, but brain malfunction results in a loss of recognition.
ALGORITHM: A sequence of mathematical steps to accomplish a task.
AMNESIA: Loss of memory.
AMYGDALA*: Part of the brain’s limbic system, associated with emotion.
ANATOMY: The study and description of the structure of the body.
ARTIFICIAL INTELLIGENCE (AI): A computer program that replicates (strong AI) or simulates (weak AI) mental thought processes.
ARTIFICIAL NEURAL NET (ANN): A computer simulation of the brain’s network of neurons, designed to enable the computer program to change itself in response to its inputs and outputs, that is, to “learn.”
ATOMS: The supposed smallest particles of matter, from which, according to atomic theory, all physical objects are ultimately composed.
ATTENTION: The brain’s focusing on a particular external stimulus or internal event.
AURAL: Associated with the sensory mode of hearing.
AXON: The single fiber (called a process) that carries the neuron’s output signal; it branches at the end, and each branch has a synapse that enables it to communicate with a neighboring cell.
BASAL GANGLIA*: A group of subcortical neurons deep in the brain associated with the control of movement and posture.
BEHAVIORISM: The view in psychology and philosophy of mind that mental states can be defined exclusively as the link between the senses (input) and behavior (output).
BINDING PROBLEM: The need to explain how different types of sensory information (sight, sound, shape, color, etc.) come together to give a single perception.
BLINDSIGHT: The ability to display knowledge of visual information without consciously seeing it.
BRAIN*: The part of the central nervous system contained in the skull.
BRAIN MAPPING: Correlating specific parts of the brain with particular mental states or processes.
BRAINSTEM*: The evolutionarily oldest part of the brain, where it meets the spinal cord.
BRIDGING PRINCIPLES: The rules governing the correlation between mental and physical states in dualistic accounts of the mind-body relation.
BROCA’S AREA*: An area of the brain’s left frontal cortex associated with speech production.
BRUTE FORCE: In computers, the use of calculating speed alone to solve problems such as chess moves.
BUTTERFLY EFFECT: Name given to a characteristic of certain nonlinear dynamical systems, that a very small alteration in initial conditions can make a huge difference to the final outcome.
CARTESIAN: Deriving from or associated with René Descartes.
CAUSAL MECHANISM: A change in one state that brings about a change in another state.
CELL: A basic physical unit of living systems.
CEREBELLUM*: The structure low down at the back of the brain, surrounding the rear of the brainstem.
CHANGE BLINDNESS: The failure to register changes in the visual field; an example of inattentional blindness.
CHAOS THEORY: The popular though misleading name for the description of nonlinear dynamic systems, like those responsible for the weather.
CLASSICAL PHYSICS: The description of the world of classical objects, that is, those that obey Isaac Newton’s laws of motion. See also QUANTUM PHYSICS.
CODING (VISUAL, ACOUSTIC, SEMANTIC): See NEURAL CODING.
COGNITION: The process or state of thinking.
COGNITIVE SCIENCE: The study of cognition; a multidisciplinary field drawing on psychology, philosophy, neuroscience, and computer science.
COLLAPSE OF THE WAVE FUNCTION: The term in one version of quantum theory for the point at which a range of “quantum possibilities” becomes a single “classical” outcome.
COMPATIBILISM: The philosophical view that physical determinism is compatible with a degree of free choice. See also DETERMINISM.
COMPUTATIONAL: Capable of being expressed fully and without remainder in one or more algorithms.
COMPUTER: A person or machine that computes, or carries out algorithms.
CONSCIOUSNESS: A state of awareness, such that “there is something it is like” to be in that state.
CONTINGENT: Dependent upon circumstances; could have been otherwise.
COPENHAGEN INTERPRETATION: The standard interpretation of quantum mechanics, which says there is no unequivocal reality at the quantum level of description but only at the observable level of classical physics.
CORPUS CALLOSUM*: A structure of some 200 million nerve fibers that links the left and right hemispheres of the brain’s cortex.
CORRELATION: The situation in which two states occur together, such that if one varies, so does the other.
CORTEX (CEREBRAL CORTEX)*: The outermost and evolutionarily latest layer of the brain, especially well-developed in humans, comprising heavily wrinkled sheets of neurons associated with the sensorimotor system and cognition. Different areas of the cortex have been named according to their position (e.g. prefrontal, posterior parietal) and their presumed function (e.g. visual, motor, premotor).
CORTICAL: Pertaining to the cortex, as in “cortical pathway,” which defines a route taken by signals passing through the neurons that make up the cortex. “Subcortical” refers to evolutionarily older parts of the brain situated beneath the cortex.
DENDRITES: The fibers (called processes) that carry the neuron’s input signals from the synapses with neighboring neurons to the cell body.
DETERMINISM: The philosophical belief that all physical states inevitably follow from earlier states and events. See also COMPATIBILISM and LIBERTARIANISM.
DORSAL (UPPER) STREAM: One of two proposed cortical pathways in the brain’s visual system, the one associated with position and movement.
DUALISM: The view in philosophy of mind that both the mental and the physical are really existing realms.
ELECTRODE: An electrical device used either to measure or to stimulate brain activity.
ELECTROENCEPHALOGRAPH (EEG): Recording of the brain’s electrical activity.
ELECTRON: A subatomic particle carrying a single negative electric charge.
ELIMINATIVE MATERIALISM (ELIMINATIVISM): The view in philosophy of mind that mental states do not really exist, but are misrepresentations of purely physical or material states.
EMERGENTISM: The view in philosophy of mind that mental states really do exist but are necessarily associated with physical states, from which they emerge at a sufficient level of complexity.
EMPIRICAL: Observable; subject to scientific investigation.
ENLIGHTENMENT: In the West, a movement in European thought and culture in the seventeenth and eighteenth centuries that heralded the modern world; in the East, a spiritual state associated with “pure consciousness,” attained by rigorous meditation and personal discipline.
ENTHEOGEN: The name given to psychoactive substances by those who use them for religious purposes.
EPIPHENOMENALISM: The view in philosophy of mind that the mental is a mere by-product (or epiphenomenon) of the physical and has no causal power.
ERTAS: Acronym for the “extended reticular-thalamic activation system,” an interconnected group of structures in the brain proposed by Bernard Baars and James Newman as the seat of consciousness.
FEEDBACK: In electronic or neuronal circuits, the mechanism whereby the output signal from a cell or unit can influence an earlier point in the circuit and so influence its own subsequent input.
FIRING (CELL): The popular name for the situation in which a neuron’s electrochemical state reaches the critical point that causes an output signal to travel down the axon and transmit to neighboring cells.
FIRST-PERSON: See SUBJECTIVE.
FOLK PSYCHOLOGY: The commonsense interpretation of mental experience, regarded as unreliable by many professional scientists and philosophers (especially eliminative materialists).
FREE WILL (VOLITION): The exercise of mental choice over an action, such that one could, in identical circumstances, have acted otherwise had one so chosen.
FUNCTIONAL MAGNETIC RESONANCE IMAGING (fMRI): A technique for deducing which neuronal areas are active by measuring the variation in oxygen levels in the blood serving different parts of the brain.
FUNCTIONALISM: The view in philosophy of mind that mental states are determined solely by their relations to sensory input, other inner states, and behavioral output.
GANGLION CELLS: The neurons that make up the optic nerve, transmitting signals from the retina to the LATERAL GENICULATE NUCLEUS (LGN).
GHOST IN THE MACHINE: Gilbert Ryle’s scornful description of substance dualism, the mind-body relationship as depicted by René Descartes.
GRANDMOTHER CELL: The hypothetical single neuron at the end of an information-processing path, which on a hierarchical interpretation of perception would cause a complete image to enter consciousness.
HARD-WIRED: A description, based on an electrical analogy, of permanent links between certain neurons.
HEMISPHERE*: The whole of either the left or right half of the cerebral cortex, the two being joined by the corpus callosum.
HIERARCHICAL MODEL: A pyramidal picture of how perception works, with many originally isolated pieces of information at the bottom contributing to a single unified image at the top.
HIPPOCAMPUS*: A centrally placed and evolutionarily old cortical structure in the brain, associated with memory.
HYPOTHALAMUS*: A small structure at the base of the brain behind the eyes, involved in controlling the body’s hormone system to regulate temperature, hunger, and so on.
IDEALISM: The view in the philosophy of mind that physical objects are ultimately mental creations.
IDENTITY THEORY: The view in the philosophy of mind that mental states are identical with certain physical brain states. See also TOKEN IDENTITY and TYPE IDENTITY.
INATTENTIONAL BLINDNESS: The failure to register a visual stimulus in consciousness because one is not attending to it.
INFERIOR TEMPORAL AREA (IT)*: Cortical region associated with the visual system and memory.
INFORMATION PROCESSING: The dominant model in cognitive science, which treats the brain as a device for receiving information from the senses (like the input to a computer from the mouse or keyboard or inserted disk), which is then manipulated by the networks of neurons to produce an appropriate output in some form of thought, word, or deed.
INPUT: See INFORMATION PROCESSING and FUNCTIONALISM.
INSTANTIATION: See REALIZATION.
INTERACTION PROBLEM: In dualist accounts of the mind-body relation, the problem of how two unlike realms can causally influence each other.
INTRALAMINAR NUCLEUS (ILN)*: A group of neurons at the center of the thalamus, connected to many regions of cortex and possibly playing a major role in generating consciousness.
INTROSPECTIONISM: The view in psychology and philosophy of mind that mental states can be accessed only by self-inspection on the part of the owner of those states.
INVASIVE: Investigative methods that involve putting external objects (electrodes, surgical instruments, chemicals, radiation, etc.) into the body.
KNOWLEDGE ARGUMENT: In philosophy of mind, an argument against physicalism that claims human beings can know things subjectively that elude objective investigation.
LATERAL GENICULATE NUCLEUS (LGN)*: A group of neurons in the thalamus that acts as a junction box in the visual system, passing signals from the retina to the visual cortex, from which it receives signals in return.
LESION: An area of physical damage to the brain or other part of the body.
LIBERTARIANISM: The philosophical view that physical determinism is incompatible with human free will, and that determinism is consequently false. See also DETERMINISM.
LIMBIC SYSTEM: A group of structures in the midbrain, both physically and evolutionarily between the brainstem and the cortex, associated with emotion; they include the hippocampus, amygdala, hypothalamus, and septum.
LINEAR: One thing following another in a set order; a serially organized scheme, as opposed to parallel systems, feedback loops, and so on.
LOBES*: The four divisions of the cerebral cortex: frontal, parietal (middle top), temporal (middle side), and occipital (back).
LONG-TERM POTENTIATION (LTP): An increase in the reactivity of a neuron in the hippocampus following high-frequency stimulation of a neuron outputting to that neuron; the effect lasts several days and is probably a mechanism for learning and memory.
LOOP: In neuronal circuits, a feedback arrangement whereby the output signal from a cell influences an earlier point in the circuit, thus contributing to its own subsequent input.
LUCID DREAMING: A sleeping state in which the dreamer has conscious control over the dream and has a limited ability to communicate with the outside world.
MAGNETOENCEPHALOGRAPH (MEG): Recording of changes in the brain’s magnetic field as a way of measuring neuronal activity.
MATERIALISM: The view in the philosophy of mind that the mind is ultimately physical.
MATTER: The “stuff” of which the physical world is composed.
MEMORY: The ability to recall past experiences, either consciously (when it is called declarative or explicit memory) or nonconsciously (when it is called nondeclarative, implicit, or procedural memory). Explicit memory may be episodic (relating to personal events) or semantic (relating to detached facts). A distinction once made between short- and long-term memory has largely been displaced by the notion of working memory, which is the portion of stored memory to which one has current access.
MENTAL: Pertaining to the mind.
MENTAL STATE: A general term covering any state of mind, which might be a thought, a belief, a pain, an emotion, and so on.
MICROTUBULE: A microscopic component of many cells, including neurons; believed by Roger Penrose and Stuart Hameroff to provide suitable conditions for the collapse of the quantum wave function at body temperature, and so generate consciousness.
MIND: The seat of experience and conscious memory.
MIND-BODY PROBLEM: The need to account for the intimate relation between two such apparently different entities as the mind and the body, both of which we think of, at different times, as being “me.”
MONISM: A philosophy of mind that treats the mental and the physical realms as ultimately one, for example, physicalism (the mind is ultimately physical), idealism (physical objects are ultimately mental creations), and neutral or dual-aspect monism (the mental and the physical are two ways of experiencing a single reality).
MOTOR SYSTEM: The arrangement of neurons (called motor neurons) that transmit signals from the brain to the muscles that implement motion (i.e., “motor action”).
MULTIPLE DRAFTS MODEL (MDM): Daniel Dennett’s concept of consciousness as an ever-changing and never-finalized succession of representations within the brain of conditions in the environment.
NEURAL ACTIVITY: The electrochemical changes in the brain’s nerve cells that give rise to changes in the physical state of the brain, which appear also to be correlated with changes of mental state.
NEURAL ARCHITECTURE: The patterns of connections between neurons, which are evident from their sequences of firing and provide clues to the workings of the brain and the mind.
NEURAL CODING: The means by which the brain holds information in memory; possible mechanisms relate to images (visual coding), sounds (acoustic coding), and meaning (semantic coding).
NEURAL CORRELATES OF CONSCIOUSNESS (NCCs): The physical state of the neurons associated with a particular conscious mental state.
NEURAL DARWINISM: The popular name for Gerald Edelman’s theory of neural group selection (TNGS), which holds that the brain is born with neurons already connected into groups. Those groups that perform best in response to stimuli are selected and retained by having their intercellular links strengthened; the links holding together the less successful groups weaken and disappear.
NEUROBIOLOGY: The study of the structure and function of the nervous system, including the brain.
NEUROMODULATORS: Chemicals such as serotonin and acetylcholine, produced when neurons are activated, that modify the behavior of neighboring neurons, either by exciting or inhibiting them. Formerly called neurotransmitters.
NEURON: A nerve cell. Neurons are generally regarded as the basic unit of the nervous system, receiving and passing on electrochemical signals.
NEURONAL CIRCUITRY: A way of speaking of the pathways linking neurons, along which electrochemical signals are passed.
NEUROSCIENCE: An ill-defined expression applied jointly to various branches of study, including anatomy, physiology, and biochemistry, that focus on the brain and nervous system.
NEUROTRANSMITTER: See NEUROMODULATORS.
NONLOCALITY: The condition that means objects cannot always be treated as separate, even if they appear to be observed at different positions in space.
NONREDUCTIVE PHYSICALISM: The view in the philosophy of mind that mental states are dependent on physical states but have features over and above those contained in a purely physical description.
OBJECTIVE (THIRD-PERSON): Independently verifiable; not reliant on one person’s inner experience or conviction.
OCCAM’S RAZOR: The philosophical principle that simple explanations are preferable to complicated ones, in particular that the number of elements or “entities” in an explanation should be kept to a minimum. Named after the medieval scholar William of Occam.
OLFACTORY: Associated with the sensory mode of smell.
ONTOLOGY: The study of the way things are, as distinct from the way we can know about them.
OUTPUT: See INFORMATION PROCESSING and FUNCTIONALISM.
PANPSYCHISM: The philosophical viewpoint that all physical objects have some degree of mentality or consciousness.
PARALLEL PATHWAYS: A system allowing simultaneous processes to occur, as opposed to a linear or serially organized procedure.
PERCEPTION: The brain’s capacity to recognize and respond to stimuli in the environment.
PERSON: An organism with a point of view.
PHANTOM LIMB: The feeling of sensations in an amputated limb, caused by activity in the region of cortex formerly linked with that limb.
PHENOMENOLOGY: Broadly speaking, the phenomenology of an event is the way it appears to the person experiencing it; more narrowly, phenomenology is a philosophical approach to conscious experience based on the teaching of Edmund Husserl.
PHI EFFECT: The apparent movement of a single light when two neighboring lights are rapidly and alternately switched on and off.
PHILOSOPHY: The critical study of the nature of things and human knowledge of them.
PHOTON: A particle of light.
PHRENOLOGY: The study of psychological character by examining the external shape and features of the skull.
PHYSICAL: That aspect of the world that is able to be observed and measured objectively.
PHYSICALISM: See MATERIALISM.
PHYSIOLOGY: The study and description of the way the body functions.
POSITIVISM: A nineteenth-century philosophical movement founded by Auguste Comte that portrayed human thought as evolving from religion through philosophy to reach its highest point in science.
POSITRON EMISSION TOMOGRAPHY (PET): A technique for deducing which neuronal areas are active by assessing the blood flow to different parts of the brain.
PROSOPAGNOSIA: A state in which one is unable to recognize faces.
PSYCHOACTIVE SUBSTANCES: Chemicals that, having entered the bloodstream, act as neuromodulators and disturb the normal chemical balance of the brain, with consequent alterations in conscious mental states.
PSYCHOLOGY: The study of mental states, their causes and consequences.
QUALIA (singular QUALE): The experienced qualities of conscious states, such as the redness of a rose or the smell of freshly ground coffee.
QUANTUM MECHANICS (QM): The mathematical laws and equations that predict observations made on entities that do not follow Newtonian mechanics.
QUANTUM PHYSICS: The study and account of the world at the level of very small entities that do not obey Newton’s laws of motion. See also CLASSICAL PHYSICS.
QUANTUM THEORY: The attempt to explain the reality underlying quantum mechanics.
RADIO(ACTIVE) ISOTOPE: A mildly radioactive form of a common element (often hydrogen or oxygen) used as a harmless “label” to trace the flow of blood or the uptake of oxygen in the brain during experiments.
RAPID EYE MOVEMENT (REM) SLEEP: The part of the sleep cycle associated with dreaming.
READINESS POTENTIAL: Electrical activity preparing neurons to initiate a “voluntary” action before the conscious decision is made to act.
REALIZATION: A mental state is said to be realized (or instantiated) in a physical brain state when it is correlated with it in a causal or necessary manner.
RECEPTIVE FIELD: The portion of the visual field in which a stimulus is detectable by a given neuron in some part of the visual system remote from the retina.
RECEPTOR CELL: A sensory neuron that converts an external stimulus, such as a light, into an electrochemical signal.
REDUCTIVE PHYSICALISM: The view in the philosophy of mind that mental states have no features over and above those contained in a purely physical description.
REENTRY: A kind of neuronal feedback that is central to Edelman’s concept of neural Darwinism.
REFERENT: In philosophy, the referent is the thing to which an expression refers, as distinct from what the expression means. The distinction is significant in debates on the identity theory of mind.
REFLEX: An action in which a signal passes more or less directly from the receptor cell to a motor neuron.
REPLICATION: A form of copying in which the same outcome is achieved from the same input and by the same means, as in the original. See also SIMULATION.
RES COGITANS: Descartes’s term for the mental, “stuff that thinks.”
RES EXTENSA: Descartes’s term for the physical, “stuff that takes up space.”
RETINA: The light-sensitive layers of receptor cells (rods and cones) and ganglion cells at the back of the eye.
SACCADE: A rapid change in direction of the eyes’ gaze; the human eye makes about four saccades per second.
SCHRÖDINGER’S CAT: The subject of a thought experiment designed by Erwin Schrödinger to discredit the quantum theory of physics he had helped to produce.
SEMANTIC: Associated with meaning.
SENSORIMOTOR SYSTEM: The motor and sensory systems taken together; that part of the nervous system which controls the complex interaction between sensory stimuli and bodily movement.
SENSORY MODALITY: Any of the five faculties of seeing, hearing, touching, tasting, and smelling.
SENSORY STIMULUS: Anything that is detectable by any of the sensory modes.
SENSORY SYSTEM: The arrangement of neurons that transmit and process signals from the sense organs to the brain.
SEPTUM: A group of neurons in the limbic system whose stimulation of the neighboring hippocampus seems to be essential for memory.
SERIALLY ORGANIZED SYSTEM: See LINEAR; PARALLEL PATHWAYS.
SEROTONIN: A neurotransmitter involved in modulating sleep patterns.
SHAMANISM: The practice of techniques for entering the “spirit world” to gain information used in counseling and healing.
SIGNAL: A term borrowed from the world of communications to describe the electrochemical pulses produced by the network of neurons in the brain.
SIMULATION: A form of copying in which the same outcome is achieved from the same input as in the original, but by different means. See also REPLICATION.
SPATIAL RESOLUTION: In the context of consciousness research, the degree of accuracy with which the position of a brain event can be pinpointed by a measuring device.
STATE: The full description of a given system (such as a brain, a mind, or an experimental setup) at a particular time.
SUBATOMIC: Smaller than an atom.
SUBCORTICAL: See CORTEX.
SUBJECTIVE (FIRST-PERSON): Based on one person’s inner experience or conviction; not independently verifiable.
SUBLIMINAL: Not entering into conscious awareness.
SUBSTANCE DUALISM: The view in philosophy of mind that the mental and the physical constitute two independent realms and are composed of two quite different kinds of “stuff,” named by Descartes as res cogitans and res extensa.
SUPERVENIENCE: In philosophy, an asymmetric relation between two entities, in which the supervenient one is dependent on the subvenient one but not vice versa. Some philosophers of mind have suggested that mental states supervene on physical brain states.
SYNAPSE: The junction between two adjacent neurons, consisting of a synaptic gap across which a neurotransmitter released by the presynaptic neuron can effect an electrochemical change in the target cell.
SYNTAX: Rules of procedure, in a language or in computer science, that in themselves are devoid of meaning.
TEMPORAL RESOLUTION: In the context of consciousness research, the degree of accuracy with which the timing of a brain event can be pinpointed by a measuring device.
THALAMUS*: A composite structure in the limbic system, relaying signals from the sense organs to the appropriate region of the sensory cortex and receiving downward signals from the cortex. See also ILN and LGN.
THIRD-PERSON: See OBJECTIVE.
TOKEN IDENTITY: A weaker version of mind-brain identity theory, in which it is possible for two identical mental states each to be identical with some physical brain state, without those two physical states being identical with each other. See also TYPE IDENTITY.
TRANSDUCTION: The translation by a receptor cell of a sensory stimulus, such as light, into an electrochemical signal detectable by other neurons.
TURING MACHINE: Not a physical machine, but a description of the minimum set of step-by-step mathematical instructions out of which any computational task can be constructed.
TURING TEST: A setup in which a person carries on two separate conversations with two unseen correspondents, all replies being typed out; if the person cannot deduce from the replies that one of the correspondents is in fact a computer, then that machine is deemed to be intelligent.
TYPE IDENTITY: A stronger version of mind-brain identity theory that requires that two identical mental states must be identified with two identical physical brain states. See also TOKEN IDENTITY.
VENTRAL (LOWER) STREAM: One of two proposed cortical pathways in the brain’s visual system, the one associated with recognition.
VISUAL: Associated with the sensory mode of sight.
VISUAL FIELD: The section of the environment that can make a visual impact, for a given direction of gaze.
VISUAL SYSTEM: The arrangement of neurons that transmit and process signals related to vision; the most studied of all the sensory systems.
VOLITION: See FREE WILL.
WAVE EQUATION: The fundamental mathematical expression of quantum mechanics, derived by Erwin Schrödinger.
WERNICKE’S AREA*: An area of the brain’s left temporal cortex associated with the meaning of words.
ZOMBIE: A fictional creature defined as being physically, functionally, and behaviorally identical with a human being and yet without any conscious experience. Whether philosophers of mind can conceive of such a creature indicates their position on the dualist-physicalist spectrum.
Documents
1. Meditations on First Philosophy, René Descartes (1641)
2. Nobel Presentation Speech to Camillo Golgi and Santiago Ramón y Cajal, for Work on the Anatomy of the Nervous System, Count K. A. H. Mörner (1906)
3. The Battle of Behaviorism: An Exposition and an Exposure, John B. Watson and William McDougall (1924)
4. The Magical Number Seven, Plus or Minus Two, George A. Miller (1956)
5. What Is It Like to Be a Bat? Thomas Nagel (1974)
6. Minds, Brains, and Programs, John R. Searle (1980)
7. Nobel Presentation Speech to Roger Sperry, David Hubel, and Torsten Wiesel, for Neuroscientific Research, David Ottoson (1981)
8. Time and the Observer, Daniel Dennett and Marcel Kinsbourne (1991)
9. Facing Up to the Problem of Consciousness, David J. Chalmers (1995)
10. Lucid Dreaming, Stephen LaBerge (2000)
Document 1

The first document is a translation of a pair of extracts from Descartes’s Meditations on First Philosophy, published in Latin in 1641. First philosophy is a term derived from Aristotle that refers to the foundations of human knowledge on which all other thinking is built. The first piece is the opening six paragraphs of Meditation 2, to which Descartes gave the heading “Of the Nature of the Human Mind; and That It Is More Easily Known Than the Body.” In it he undertakes a thoroughly skeptical approach to his investigation, determining to reject any proposal that admits of the slightest doubt (paragraph 1). He concludes that at least the proposition “I exist” is necessarily true at the time it is uttered (paragraph 3) and that it must be the case that this “I” is “a thinking thing” (paragraph 6). This is the famous affirmation better known in the form he uses elsewhere, “I think, therefore I am.” The Latin for this is cogito ergo sum, and the idea is often referred to simply as “the cogito.”

The second of the two extracts is a group of paragraphs taken from the middle of Meditation 6, titled “Of the Existence of Material Things, and of the Real Distinction between the Mind and Body of Man.” It contains the assertion central to Cartesian dualism that the essential “I” is a nonextended (that is, nonphysical) thinking thing that possesses, but may exist without, an extended (that is, physical) body (paragraph 9). But he then partially contradicts himself by saying that “a pilot in a vessel” is an inadequate simile for the mind-body relation because the latter is more intimate (paragraph 13).

Meditations on First Philosophy
René Descartes

From Meditation 2, “Of the Nature of the Human Mind; and That It Is More Easily Known Than the Body.”

1. The Meditation of yesterday has filled my mind with so many doubts, that it is no longer in my power to forget them. Nor do I see, meanwhile, any principle on which they can be resolved; and, just as if I had fallen all of a sudden into very deep water, I am so greatly disconcerted as to be unable either to plant my feet firmly on the bottom or sustain myself by swimming on the surface. I will, nevertheless, make an effort, and try anew the same path on which I had entered yesterday, that is, proceed by casting aside all that admits of the slightest doubt, not less than if I had discovered it to be absolutely false; and I will continue always in this track until I shall find something that is certain, or at least, if I can do nothing more, until I shall know with certainty that there is nothing certain. Archimedes, that he might transport the entire globe from the place it occupied to another, demanded only a point that was firm and immovable; so, also, I shall be entitled to entertain the highest expectations, if I am fortunate enough to discover only one thing that is certain and indubitable.

2. I suppose, accordingly, that all the things which I see are false; I believe that none of those objects which my fallacious memory represents ever existed; I suppose that I possess no senses; I believe that body, figure, extension, motion, and place are merely fictions of my mind. What is there, then, that can be esteemed true? Perhaps this only, that there is absolutely nothing certain.

3. But how do I know that there is not something different altogether from the objects I have now enumerated, of which it is impossible to entertain the slightest doubt? Is there not a God, or some being, by whatever name I may designate him, who causes these thoughts to arise in my mind? But why suppose such a being, for it may be I myself am capable of producing them? Am I, then, at least not something? But I before denied that I possessed senses or a body; I hesitate, however, for what follows from that? Am I so dependent on the body and the senses that without these I cannot exist? But I had the persuasion that there was absolutely nothing in the world, that there was no sky and no earth, neither minds nor bodies; was I not, therefore, at the same time, persuaded that I did not exist? Far from it; I assuredly existed, since I was persuaded. But there is I know not what being, who is possessed at once of the highest power and the deepest cunning, who is constantly employing all his ingenuity in deceiving me. Doubtless, then, I exist, since I am deceived; and, let him deceive me as he may, he can never bring it about that I am nothing, so long as I shall be conscious that I am something. So that it must, in fine, be maintained, all things being maturely and carefully considered, that this proposition I am, I exist, is necessarily true each time it is expressed by me, or conceived in my mind.

4. But I do not yet know with sufficient clearness what I am, though assured that I am; and hence, in the next place, I must take care, lest perchance I inconsiderately substitute some other object in room of what is properly myself, and thus wander from truth, even in that knowledge which I hold to be of all others the most certain and evident. For this reason, I will now consider anew what I formerly believed myself to be, before I entered on the present train of thought; and of my previous opinion I will retrench all that can in the least be invalidated by the grounds of doubt I have adduced, in order that there may at length remain nothing but what is certain and indubitable.

5. What then did I formerly think I was? Undoubtedly I judged that I was a man. But what is a man? Shall I say a rational animal? Assuredly not; for it would be necessary forthwith to inquire into what is meant by animal, and what by rational, and thus, from a single question, I should insensibly glide into others, and these more difficult than the first; nor do I now possess enough of leisure to warrant me in wasting my time amid subtleties of this sort. I prefer here to attend to the thoughts that sprung up of themselves in my mind, and were inspired by my own nature alone, when I applied myself to the consideration of what I was. In the first place, then, I thought that I possessed a countenance, hands, arms, and all the fabric of members that appears in a corpse, and which I called by the name of body. It further occurred to me that I was nourished, that I walked, perceived, and thought, and all those actions I referred to the soul; but what the soul itself was I either did not stay to consider, or, if I did, I imagined that it was something extremely rare and subtile, like wind, or flame, or ether, spread through my grosser parts. As regarded the body, I did not even doubt of its nature, but thought I distinctly knew it, and if I had wished to describe it according to the notions I then entertained, I should have explained myself in this manner: By body I understand all that can be terminated by a certain figure; that can be comprised in a certain place, and so fill a certain space as therefrom to exclude every other body; that can be perceived either by touch, sight, hearing, taste, or smell; that can be moved in different ways, not indeed of itself, but by something foreign to it by which it is touched and from which it receives the impression; for the power of self-motion, as likewise that of perceiving and thinking, I held as by no means pertaining to the nature of body; on the contrary, I was somewhat astonished to find such faculties existing in some bodies.

6. But as to myself, what can I now say that I am, since I suppose there exists an extremely powerful, and, if I may so speak, malignant being, whose whole endeavors are directed toward deceiving me? Can I affirm that I possess any one of all those attributes of which I have lately spoken as belonging to the nature of body? After attentively considering them in my own mind, I find none of them that can properly be said to belong to myself. To recount them were idle and tedious. Let us pass, then, to the attributes of the soul. The first mentioned were the powers of nutrition and walking; but, if it be true that I have no body, it is true likewise that I am capable neither of walking nor of being nourished. Perception is another attribute of the soul; but perception too is impossible without the body; besides, I have frequently, during sleep, believed that I perceived objects which I afterward observed I did not in reality perceive. Thinking is another attribute of the soul; and here I discover what properly belongs to myself. This alone is inseparable from me. I am—I exist: this is certain; but how often? As often as I think; for perhaps it would even happen, if I should wholly cease to think, that I should at the same time altogether cease to be. I now admit nothing that is not necessarily true. I am therefore, precisely speaking, only a thinking thing, that is, a mind, understanding, or reason, terms whose signification was before unknown to me. I am, however, a real thing, and really existent; but what thing? The answer was, a thinking thing.

From Meditation 6, “Of the Existence of Material Things, and of the Real Distinction between the Mind and Body of Man.”

8. But now that I begin to know myself better, and to discover more clearly the author of my being, I do not, indeed, think that I ought rashly to admit all which the senses seem to teach, nor, on the other hand, is it my conviction that I ought to doubt in general of their teachings.

9. And, firstly, because I know that all which I clearly and distinctly conceive can be produced by God exactly as I conceive it, it is sufficient that I am able clearly and distinctly to conceive one thing apart from another, in order to be certain that the one is different from the other, seeing they may at least be made to exist separately, by the omnipotence of God; and it matters not by what power this separation is made, in order to be compelled to judge them different; and, therefore, merely because I know with certitude that I exist, and because, in the meantime, I do not observe that aught necessarily belongs to my nature or essence beyond my being a thinking thing, I rightly conclude that my essence consists only in my being a thinking thing or a substance whose whole essence or nature is merely thinking. And although I may, or rather, as I will shortly say, although I certainly do possess a body with which I am very closely conjoined; nevertheless, because, on the one hand, I have a clear and distinct idea of myself, in as far as I am only a thinking and unextended thing, and as, on the other hand, I possess a distinct idea of body, in as far as it is only an extended and unthinking thing, it is certain that I, that is, my mind, by which I am what I am, is entirely and truly distinct from my body, and may exist without it.

10. Moreover, I find in myself diverse faculties of thinking that have each their special mode: for example, I find I possess the faculties of imagining and perceiving, without which I can indeed clearly and distinctly conceive myself as entire, but I cannot reciprocally conceive them without conceiving myself, that is to say, without an intelligent substance in which they reside, for in their formal concept, they comprise some sort of intellection; whence I perceive that they are distinct from myself as modes are from things. I remark likewise certain other faculties, as the power of changing place, of assuming diverse figures, and the like, that cannot be conceived and cannot therefore exist, any more than the preceding, apart from a substance in which they inhere. It is very evident, however, that these faculties, if they really exist, must belong to some corporeal or extended substance, since in their clear and distinct concept there is contained some sort of extension, but no intellection at all. Further, I cannot doubt but that there is in me a certain passive faculty of perception, that is, of receiving and taking knowledge of the ideas of sensible things; but this would be useless to me, if there did not also exist in me, or in some other thing, another active faculty capable of forming and producing those ideas. But this active faculty cannot be in me, in as far as I am but a thinking thing, seeing that it does not presuppose thought, and also that those ideas are frequently produced in my mind without my contributing to it in any way, and even frequently contrary to my will. This faculty must therefore exist in some substance different from me, in which all the objective reality of the ideas that are produced by this faculty is contained formally or eminently, as I before remarked; and this substance is either a body, that is to say, a corporeal nature in which is contained formally all that is objectively and by representation in those ideas; or it is God himself, or some other creature, of a rank superior to body, in which the same is contained eminently. But as God is no deceiver, it is manifest that he does not of himself and immediately communicate those ideas to me, nor even by the intervention of any creature in which their objective reality is not formally, but only eminently, contained. For as he has given me no faculty whereby I can discover this to be the case, but, on the contrary, a very strong inclination to believe that those ideas arise from corporeal objects, I do not see how he could be vindicated from the charge of deceit, if in truth they proceeded from any other source, or were produced by other causes than corporeal things: and accordingly it must be concluded, that corporeal objects exist. Nevertheless, they are not perhaps exactly such as we perceive by the senses, for their comprehension by the senses is, in many instances, very obscure and confused; but it is at least necessary to admit that all which I clearly and distinctly conceive as in them, that is, generally speaking all that is comprehended in the object of speculative geometry, really exists external to me.

11. But with respect to other things which are either only particular, as, for example, that the sun is of such a size and figure, etc., or are conceived with less clearness and distinctness, as light, sound, pain, and the like, although they are highly dubious and uncertain, nevertheless on the ground alone that God is no deceiver, and that consequently he has permitted no falsity in my opinions which he has not likewise given me a faculty of correcting, I think I may with safety conclude that I possess in myself the means of arriving at the truth. And, in the first place, it cannot be doubted that in each of the dictates of nature there is some truth: for by nature, considered in general, I now understand nothing more than God himself, or the order and disposition established by God in created things; and by my nature in particular I understand the assemblage of all that God has given me.

12. But there is nothing which that nature teaches me more expressly or more sensibly than that I have a body which is ill affected when I feel pain, and stands in need of food and drink when I experience the sensations of hunger and thirst, etc. And therefore I ought not to doubt but that there is some truth in these informations.

13. Nature likewise teaches me by these sensations of pain, hunger, thirst, etc., that I am not only lodged in my body as a pilot in a vessel, but that I am besides so intimately conjoined, and as it were intermixed with it, that my mind and body compose a certain unity. For if this were not the case, I should not feel pain when my body is hurt, seeing I am merely a thinking thing, but should perceive the wound by the understanding alone, just as a pilot perceives by sight when any part of his vessel is damaged; and when my body has need of food or drink, I should have a clear knowledge of this, and not be made aware of it by the confused sensations of hunger and thirst: for, in truth, all these sensations of hunger, thirst, pain, etc., are nothing more than certain confused modes of thinking, arising from the union and apparent fusion of mind and body.

Source: Descartes, René. 1901. Meditations on the First Philosophy, trans. J. Veitch, available in various print and electronic formats, including Descartes’ Meditations: A Trilingual HTML Edition, edited by David B. Manley and Charles S. Taylor, http://philos.wright.edu/Descartes/Meditations.html.
Document 2 The second document is the presentation speech made by Professor the Count K. A. H. Mörner, rector of the Royal Karolinska Institute, to Camillo Golgi and Santiago Ramón y Cajal, when they were awarded the Nobel Prize in Physiology or Medicine in the year 1906 in recognition of their work on the anatomy of the nervous system. It provides a neat summary of their achievements as they were regarded at that time and is notable for being the first occasion upon which a Nobel Prize was awarded jointly. Nobel Presentation Speech to Camillo Golgi and Santiago Ramón y Cajal, for Work on the Anatomy of the Nervous System Count K.A. H. Mörner Your Majesty, Your Royal Highnesses, Ladies and Gentlemen. This year’s Nobel Prize for Physiology or Medicine is presented for work accomplished in the field of anatomy. It has been awarded to Professors Camillo Golgi of Pavia and Ramón y Cajal of Madrid in recognition of their work on the anatomy of the nervous system. Documents
• 283
It is not possible on the present occasion to give a detailed account of this work. The importance of the field that they have undertaken to explore is obvious, since it concerns the nervous system, an organic structure of such paramount importance to the most delicately organized of all living creatures. It is this system that brings us into relation with the outside world, whether we receive impressions from it that act on our sensory organs and are transmitted from there to the nervous centres, or whether by movements or other forms of activity we intervene in environmental phenomena. This same organic structure provides the basis and instrument for the highest form of activity of all, intellectual work.

The different parts of the nervous system are all structurally complex, to a greater or lesser degree. The peripheral nerves, which act as transmitters—they may be compared to telegraph wires—are relatively simple as regards structure and pattern. On the other hand, the central nervous system, which includes the brain and spinal cord, has an extremely complicated structure. The central nervous system is connected to the different parts of the body by a mass of fibres emanating from the central organ and following the pathways of the nerves which originate from this organ. These fibres may, however, be divided into several groups according to their specific functions. One group of fibres transmits the impulses which produce muscular contraction. Another group enables the nervous system to control the activity of other organs such as those used in digestion. Still another group transmits to the central organ of the nervous system exterior stimuli registered by the sensory organs, or stimuli resulting from changes occurring in the organs of the body itself. Even when we are not considering the central nervous system itself, it is often extremely difficult to discover the exact pathways of these different groups of fibres and to study each one separately. Within the central system the task is naturally even more difficult, since the nerve fibres are dispersed throughout the system and the fibres corresponding to the different parts of the body intermingle with those which link up the different parts of the central nervous system; moreover, some of these nerves have a long tract and others a shorter tract within the central organ.

I should like to give an example of the way in which the nervous system functions in order to demonstrate how complicated this is. Let us suppose that a part of the skin at one of the extremities has suffered a lesion produced by an exterior agent; corresponding nerve endings receive the stimulus. Through the nerve trunk to which the nerve endings belong, the irritation spreads and is transmitted by the dorsal roots of the nerves to the area of the spinal cord known as the dorsal horns. Should transmission of the impulse be interrupted at this point, the sensation will not be consciously registered. It can nevertheless give rise to a movement which is described as a reflex action. This proves that communicating pathways must exist by which the impulse is transmitted to cells in the ventral horns of the spinal cord which specifically control muscular activity. The resulting movement appears to be to some extent appropriate to the environmental circumstances, which denotes the existence of some mechanism which coordinates the activity of these motor cells. Even a relatively simple example such as this demonstrates a fairly complex mechanism. But a far greater complexity appears if the impulse continues to be transmitted and reaches the centres of consciousness. The impulse progresses along nerve tracts which follow complex pathways until it reaches the surface of the brain, i.e., the cerebral cortex. For consciousness—in man at least—is exclusively located in this area. Until it reaches this area, the transmission of the impulse must remain isolated; otherwise, if other pathways corresponding to other parts of the skin become involved, the site of the injury may be incorrectly located. If a painful sensation is eventually perceived, limited to the irritated area of skin, this sensation may in its turn give rise to a number of different activities within the central nervous system. It can give rise to thought and action. In this case, the painful sensation can be linked with memory traces from earlier experiences, obtained in various ways and stored in various areas of the brain. This process presupposes a system of connections between different parts of the cerebrum. Finally, stimulation may occur of certain cells in the cerebral cortex which control voluntary and conscious muscular activity. When this occurs, these cells produce impulses which provoke muscular reactions appropriate to the circumstances. This mechanism of transmission, which we have briefly outlined and correlated with functional phenomena, will, I trust, demonstrate the complexity required for the functioning of the nervous system.

Our present knowledge of this mechanism has been acquired in a number of different ways: by research in the field of comparative anatomy, by studying the development of the nervous system, by physiological experimentation, etc. The way which would appear to lead most directly to better knowledge, i.e., direct anatomical observation, remained impracticable for many years. It had been shown that the nervous system contained, apart from blood vessels and the like, a "supporting substance," composed of cells and fibrillar structures, and the nervous elements proper, also composed of filaments and cells, which at different places showed a different appearance. The nerve cells, which for good reasons were considered as stages and foci of the nervous pathways, were found to be concentrated in those areas of the central nervous system which are characterized by grey pigmentation. It was often difficult, however, to distinguish between real nerve cells and cells which made up the supporting substance. It was also known that many nerve cells gave off cellular processes, in varying numbers, among which one in particular, by reason of its special appearance, was believed to give rise to the true nerve fibre. Unfortunately, it was not possible to follow this process for any great distance along its pathway. As for the other cellular processes, which ramified very quickly, they were the object of guesses rather than direct observation. Our knowledge of nerve fibres was also to a great extent incomplete. In the white areas of the central nervous system, grouped nerve fibres were seen, similar in appearance to the peripheral nerve fibres. But to what extent did those of the first group prolong themselves into those of the second group, or link up different centres in the central nervous system? Did these fibres produce ramifications or not? Did they communicate or not with other nerve fibres? Such were the questions which required answers. It should be remembered in particular that almost nothing was known for certain of the relationship between nerve fibres and nerve cells.

The central nervous system appeared as a confused mass of filaments, each as fine as the thread of a spider's web, and of microscopic cells armed with cellular processes. It was impossible to isolate the individual components of tissue specimens. Nor was it possible to resort to known staining methods by which, for example, a single nerve cell with its processes could be distinguished as an entity. For these reasons Golgi's method of silver impregnation, which met these requirements, must be considered a fundamental discovery in the field of nerve anatomy. Using his original method, Golgi was also able to demonstrate a number of essential points of the architecture of the central nervous system, as well as many important structural details. It was only after many years, however, that attention was paid to his work and its importance recognized. When at last this happened, many scientists began to work in the field of action which Golgi had opened up. One could mention the names of a number of eminent scientists from far and near who, by their important contributions in the field of original studies of the anatomy of the nervous system, have done a great deal for science. First among these we must place someone whose extraordinarily active and successful work in this field has revealed both fundamental factors of great importance and many essential details, and who therefore, more than anyone else, has contributed to the recent extensive development of this branch of science. I refer to Mr. Ramón y Cajal.

By their achievements, which have been briefly described here, Professors Camillo Golgi and Ramón y Cajal must be considered the principal representatives and standard bearers of the modern science of neurology, which is proving so fertile in results. In recognition of their achievements in this field, the Staff of Professors of the Caroline Institute has decided to award to them this year's Nobel Prize for Medicine.

Professor Golgi. The Staff of Professors of the Caroline Institute, deeming you to be the pioneer of modern research into the nervous system, wishes therefore, in the annual award of the Nobel Prize for Medicine, to pay tribute to your outstanding ability and in such fashion to assist in perpetuating a name which by your discoveries you have written indelibly into the history of anatomy.

Señor Don Santiago Ramón y Cajal. By reason of your numerous discoveries and learned investigations, you have given the study of the nervous system the form that it has taken at the present day, and by means of the rich material which your work has given to the study of neuroanatomy, you have laid down a firm foundation for the further development of this branch of science. The Staff of Professors of the Caroline Institute is pleased to honour such meritorious work by conferring upon you this year's Nobel Prize.

Source: The Nobel Foundation 1906. Reprinted with permission. http://www.nobel.se/medicine/laureates/1906/press.html.
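The reflex arc that the speech describes can be summarized as a small piece of branching logic. The sketch below is purely an editorial illustration in Python (the function and its flag are invented for this purpose, not taken from the laureates' work): it encodes the speech's point that an impulse reaching the dorsal horns can trigger a reflex through the ventral horn motor cells even when onward transmission to the cerebral cortex, and hence conscious registration, is cut off.

```python
def process_stimulus(cortical_pathway_intact: bool) -> dict:
    """Trace a skin stimulus through the toy reflex arc described above."""
    # The impulse travels from the nerve endings, via the dorsal roots,
    # to the dorsal horns of the spinal cord.
    reaches_dorsal_horns = True
    # A communicating pathway to motor cells in the ventral horns can
    # produce a reflex movement whether or not the cortex is ever reached.
    reflex_movement = reaches_dorsal_horns
    # Conscious sensation requires onward transmission to the cerebral cortex.
    conscious_sensation = reaches_dorsal_horns and cortical_pathway_intact
    return {"reflex": reflex_movement, "conscious": conscious_sensation}

print(process_stimulus(cortical_pathway_intact=False))  # reflex, no sensation
print(process_stimulus(cortical_pathway_intact=True))   # reflex and sensation
```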
Document 3
The third document consists of the opening salvos from the two protagonists, John B. Watson and William McDougall, in the famous behaviorist debate before the Psychological Club of Washington, D.C., on February 5, 1924. The text is taken from a version of the confrontation published five years later, in 1929, but retains all the freshness and vigor of the live encounter. The personal as well as professional dislike of the two men for each other is very evident, and one can feel the sarcasm in Watson's pretended fear of "Professor McDougall's forensic ability" and McDougall's reference to the "incalculable advantage" afforded to Watson "by his attractive and forceful personality." It is also interesting to see how dated some of the material seems. Academics today could hardly rely on the Old Testament and Greek mythology as part of a common stock of background knowledge suitable for pulling into the somewhat knockabout remarks used to warm up their audience before getting into the serious stuff. These passages have not been chosen, however, just for the sense of time and atmosphere that they create. They also supply a very succinct and informative account of the way behaviorism was viewed by the two sides at the moment in history when it was set to dominate psychology and much of the philosophy of mind for the next half century.

The Battle of Behaviorism: An Exposition and an Exposure
[The substance of remarks made in a debate before the Psychological Club of Washington, D.C., February 5, 1924]
John B. Watson and William McDougall
Behaviorism—the Modern Note in Psychology
John B. Watson

Introduction
When I innocently committed myself to meet Professor McDougall in debate, I understood that all that was required of me was to give a brief account of the new Behavioristic movement in psychology, now rapidly forging to the front. Had I known that my presentation was expected to take the present form, I fear timidity would have overcome me. Professor McDougall's forensic ability is too well known, and my own shortcomings in that direction are too well known, for me knowingly to offer him combat. So I think the only self-protective plan is to disregard all controversial developments and attempt to give here a brief résumé of Behaviorism—the modern note in psychology—and to tell why it will work and why the current introspective psychology of Professor McDougall will not work.

What is the Behavioristic note in psychology? Psychology is as old as the human race. The tempting of Eve by the serpent is our first biblical record of the use of psychological methods. May I call attention to the fact, though, that the serpent, when he tempted Eve, did not ask her to introspect, to look into her mind to see what was going on. No, he handed her the apple and she bit into it. We have a similar example of the Behavioristic psychology in Grecian mythology, when the golden apple labeled "For the Fairest" was tossed into a crowd of society women, and again when Hippomenes, in order to win the race from Atalanta, threw golden apples in front of her, knowing full well that she would check her swift flight to pick them up.
One can go through history and show that early psychology was Behavioristic—it grew up around the notion that if you place a certain thing in front of an individual or a group of individuals, the individual or group will act, will do something. Behaviorism is a return to early common-sense. The keynote is: Given a certain object or situation, what will the individual do when confronted with it? Or the reverse of this formulation: Seeing an individual doing something, to be able to predict what object or situation is calling forth that act. Behavioristic psychology, then, strives to learn something about the nature of human behavior. To get the individual to follow a certain line, to do certain things, what situation shall I set up? Or, seeing the crowd in action, or the individual in action, to know enough about behavior to predict what the situation is that leads to that action. This all sounds real; one might say it seems to be just common-sense. How can any one object to this formulation? And yet, full of common-sense as it is, this Behavioristic formulation of the problem of psychology has been a veritable battleground since 1912. To understand why this is so, let us examine the more conservative type of psychology which is represented by Professor McDougall. But to understand at all adequately the type of psychology which he represents, we must take one little peep at the way superstitious responses have grown up and become a part of our very nature.

Religious Background of Introspective Psychology
No one knows just how the idea of the supernatural started. It probably had its origin in the general laziness of mankind. Certain individuals who in primitive society declined to work with their hands, to go out hunting, to make flints, to dig for roots, became Behavioristic psychologists—observers of human nature. They found that breaking boughs, thunder, and other sound-producing phenomena would throw the primitive individual from his very birth into a panicky state (meaning by that: stopping the chase, crying, hiding, and the like), and that in this state it was easy to impose upon him. These lazy but good observers began to speculate on how wonderful it would be if they could get some device by which they could at will throw individuals into this fearsome attitude and in general control their behavior. The colored nurses down south have gained control over the children by telling them that there is some one ready to grab them in the dark; that when it is thundering there is a fearsome power which can be appeased by their being good boys and girls. Medicine men flourished—a good medicine man had the best of everything and, best of all, he didn't have to work. These individuals were called medicine men, soothsayers, dream interpreters, prophets—deities in modern times. Skill in bringing about these emotional conditionings of the people increased; organization among medicine men took place, and we began to have religions of one kind or another, and churches, temples, cathedrals, and the like, each presided over by a medicine man. I think an examination of the psychological history of people will show that their behavior is much more easily controlled by fear stimuli than by love. If the fear element were dropped out of any religion, that religion would not survive a year.

An Examination of Consciousness
From the time of Wundt on, consciousness becomes the keynote of psychology. It is the keynote to-day. It has never been seen, touched, smelled, tasted, or moved. It is a plain assumption, just as unprovable as the old concept of the soul. And to the Behaviorist the two terms are essentially identical, so far as their metaphysical implications are concerned. To show how unscientific is the concept, look for a moment at William James' definition of psychology: "Psychology is the description and explanation of states of consciousness as such." Starting with a definition which assumes what he starts out to prove, he escapes his difficulty by an argumentum ad hominem: "Consciousness—oh, yes, everybody must know what this 'consciousness' is." When we have a sensation of red, a perception, a thought, when we will to do something, or when we purpose to do something, or when we desire to do something, we are being conscious. In other words, they do not tell us what consciousness is, but merely begin to put things into it by assumption, and then when they come to analyze consciousness, naturally they find in it just what they put into it. Consequently, in the analysis of consciousness made by certain of the psychologists you find, as elements, sensations and their ghosts, the images. With others you find not only sensations, but so-called affective elements; in still others you will find such elements as will—the so-called conative element in consciousness. With some psychologists you will find many hundreds of sensations of a certain type; others will maintain that only a few of that type exist. And so it goes. Literally, millions of printed pages have been published on the minute analysis of this intangible something called "consciousness." And how do we begin work upon it? Not by analyzing it as we would a chemical compound, or the way a plant grows. No, those things are material things. This thing we call consciousness can be analyzed only by self-introspection, turning around and looking at what goes on inside.
In other words, instead of gazing at woods and trees and brooks and things, we must gaze at this undefined and undefinable something we call consciousness. As a result of this major assumption that there is such a thing as consciousness, and that we can analyze it by introspection, we find as many analyses as there are individual psychologists. There is no element of control. There is no way of experimentally attacking and solving psychological problems and standardizing methods.

The Advent of the Behaviorists
In 1912 the Behaviorists reached the conclusion that they could no longer be content to work with the intangibles. They saw their brother scientists making progress in medicine, in chemistry, in physics. Every new discovery in those fields was of prime importance; every new element isolated in one laboratory could be isolated in some other laboratory; each new element was immediately taken up in the warp and woof of science as a whole. May I call your attention to radium, to wireless, to insulin, to thyroxin, and hundreds of others? Elements so isolated and methods so formulated immediately began to function in human achievement. Not so with psychology, as we have pointed out. One has to agree with Professor Warner Fite that there has never been a discovery in subjective psychology; there has been only medieval speculation.

The Behaviorist began his own formulation of the problem of psychology by sweeping aside all medieval conceptions. He dropped from his scientific vocabulary all subjective terms such as sensation, perception, image, desire, purpose, and even thinking and emotion as they were originally defined. What has he set up in their place? The Behaviorist asks: Why don't we make what we can observe the real field of psychology? Let us limit ourselves to things that can be observed, and formulate laws concerning only the observed things. Now what can we observe? Well, we can observe behavior—what the organism does or says. And let me make this fundamental point at once: that saying is doing—that is, behaving. Speaking overtly or silently is just as objective a type of behavior as baseball. The Behaviorist puts the human organism in front of him and says: What can it do? When does it start to do these things? If it doesn't do these things by reason of its original nature, what can it be taught to do? What methods shall society use in teaching it to do these things? Again, having taught it to do these things, how long will that organism be able to do them without practice? With this as subject matter, psychology connects up immediately with life.
Fundamentals of Psychology—Behaviorism Examined
William McDougall

Dr. Watson and I have been invited to debate upon the fundamentals of psychology because we are regarded as holding extremely different views; yet there is much in common between us. I wish to emphasize this common ground no less than our differences. I would begin by confessing that in this discussion I have an initial advantage over Dr. Watson, an advantage which I feel to be so great as to be unfair; namely, all persons of common-sense will of necessity be on my side from the outset, or at least as soon as they understand the issue. On the other hand, Dr. Watson also can claim certain initial advantages; all these together constitute a considerable asset that partially redresses the balance.

First, there is a considerable number of persons so constituted that they are attracted by whatever is bizarre, paradoxical, preposterous, and outrageous, whatever is "agin the government," whatever is unorthodox and opposed to accepted principles. All these will inevitably be on Dr. Watson's side.

Secondly, Dr. Watson's views are attractive to many persons, and especially to many young persons, by reason of the fact that these views simplify so greatly the problems that lie before the student of psychology: they abolish at one stroke many tough problems with which the greatest intellects have struggled with only very partial success for more than two thousand years; and they do this by the bold and simple expedient of inviting the student to shut his eyes to them, to turn resolutely away from them, and to forget that they exist. This naturally inspires in the breast of many young people, especially perhaps those who still have examinations to pass, a feeling of profound gratitude to Dr. Watson. He appears to them as the great liberator, the man who sets free the slave of the lamp, who emancipates vast numbers of his unfortunate fellow creatures from the task of struggling with problems which they do not comprehend and which they cannot hope to solve. In short, Dr. Watson's views are attractive to those who are born tired, no less than to those who are born Bolshevists.

Thirdly, Dr. Watson's views not only have the air of attractive simplicity, but also they claim to bring, and they have the air of bringing, psychology into line with the other natural sciences and of rendering it strictly scientific.

Fourthly, Dr. Watson's cause has, on this occasion, the incalculable advantage of being presented by his attractive and forceful personality.
Fifthly, Watsonian Behaviorism is a peculiarly American product. It may even be claimed that it bears very clearly the marks of the national genius for seeking short cuts to great results. And if no European psychologist can be brought to regard it seriously, that may be accepted as merely another evidence of the effeteness of European civilization and the obtuseness of the European intellect, beclouded by the mists of two thousand years of culture and tradition. Here, in this great and beautiful city, the capital of the proudest and most powerful nation in all the earth, this patriotic consideration can hardly fail to carry weight.

Lastly, Dr. Watson has the advantage of being in a position that must excite pity for him in the minds of those who understand the situation. And I will frankly confess that I share this feeling. I am sorry for Dr. Watson; and I am sorry about him. For I regard Dr. Watson as a good man gone wrong. I regard him as a bold pioneer whose enthusiasm, in the cause of reform in psychology, has carried him too far in the path of reform; one whose impetus, increased by the plaudits of a throng of youthful admirers, has caused him to overshoot the mark and to land in a ditch, a false position from which he has not yet summoned up the moral courage to retreat. And so long as his followers continue to jump into the ditch after him, shouting loud songs of triumph as they go, he does need great moral courage in order to climb back and brush off the mud; for such retreat might even seem to be a betrayal of those faithful followers.

Now, though I am sorry for Dr. Watson, I mean to be entirely frank about his position. If he were an ordinary human being, I should feel obliged to exercise a certain reserve, for fear of hurting his feelings. We all know that Dr. Watson has his feelings, like the rest of us. But I am at liberty to trample on his feelings in the most ruthless manner; for Dr. Watson has assured us (and it is the very essence of his peculiar doctrine) that he does not care a cent about feelings, whether his own or those of any other person.

After these preliminary observations, I will point out that Dr. Watson has shown serious misunderstanding of my position, and does me grave injustice in certain respects. Namely, he suspects me of being a sort of priest in disguise, a wolf in sheep's clothing, a believer in conventional morality, an upholder of exploded dogmas. He has announced in large headlines that "McDougall Returns to Religion." I cannot stop to refute these dreadful charges. I must be content to assert flatly that I am a hard-boiled scientist, as hard-boiled as Dr. Watson himself and perhaps more so. In all this psychology business, my aim is purely and solely to approximate towards the truth, that is to say, to achieve such understanding of human nature as will promote for each of us our power of controlling it, both in ourselves and in others.

In spite of the clarity of Dr. Watson's exposition, I do not believe that he has made quite clear the nature of the issues between us. There are really two main questions in dispute, two fundamentals on which we disagree. These may be shortly defined as, first, Dr. Watson's Behaviorism; secondly, his acceptance of the mechanistic dogma. The second is the more important. I will say a few words about each of these topics in the order named.

There are, as I understand it, three chief forms of "Behaviorism," as the word is commonly used. First, there is "Metaphysical Behaviorism," which also goes by the name of "Neo-Realism." This is an inversion of subjective idealism. While the idealist says: "What you call the things or objects of the physical world are really your thoughts or phases of your thinking," the neo-realist says: "What you call your thoughts, or phases of your thinking and feeling, are really things or processes of the physical world." I need not trouble you by dwelling further upon this strange doctrine, for it is not the form of Behaviorism expounded by Dr. Watson. I will only say of it that it is the latest and presumably the last (because the only remaining) possible formulation of that most elusive of all relations, the relation of the mental to the physical. As a novelty (which we owe to a suggestion from the extraordinarily fertile mind of William James) it deserves and is enjoying a certain vogue.

Secondly, there is the true or original Watsonian Behaviorism. There is no "metaphysical nonsense" about this. In fact, it is its principal distinction, the principal virtue claimed for it, that it extradites from the province of psychology every question that may be suspected of being metaphysical, and so purges the fold of the true believers, leaving them in intellectual peace forevermore. The essence of this form of Behaviorism is that it refuses to have any dealings with introspectively observable facts, resolutely refuses to attempt to state them, describe them, interpret them, make use of them, or take account of them in any way. All such facts as feelings, feelings of pleasure and pain or distress; emotional experiences, those we denote by such terms as anger, fear, disgust, pity, disappointment, sorrow, and so forth; all experiences of desiring, longing, striving, making an effort, choosing; all experiences of recollecting, imagining, dreaming, of fantasy, of anticipation, of planning or foreseeing; all these and all other experiences are to be resolutely ignored by this weird new psychology. The psychologist is to rely upon data of one kind only, the data or facts of observation obtainable by observing the movements and other bodily changes exhibited by human and other organisms.

Thirdly, there is sane Behaviorism, or that kind of psychology which, while making use of all introspectively observable facts or data, does not neglect the observation of behavior, does not fail to make full use of all the facts which are the exclusive data of Watsonian Behaviorism. This sane Behaviorism is the kind of psychology that is referred to approvingly, by many contemporary writers in other fields, as "Behavioristic Psychology." And now, trampling ruthlessly on Dr. Watson's feelings, I make the impudent claim to be the chief begetter and exponent of this sane Behaviorism or Behavioristic Psychology, as distinct from the other two forms of Behaviorism. I claim in fact that, as regards the Behaviorism which is approvingly referred to by many contemporary writers other than technical psychologists, I, rather than Dr. Watson, am the Arch-Behaviorist.

Up to the end of the last century and beyond it, psychologists did in the main concentrate their attention upon the introspectively observable facts, unduly neglecting the facts of human action or behavior, and ignoring the need for some adequate theory of behavior and of character (of which behavior or conduct is the outward expression). This neglect is implied in the definition of psychology commonly accepted at that time, namely, the "science of consciousness," and it may be well illustrated by reference to two leading psychologists, one of the middle, the other of the end, of the nineteenth century. John Stuart Mill, after expending much labor in the endeavor to patch up the hopelessly inadequate psychology of his father, James Mill, and of the other British Associationists, seems to have realized that the psychology he had achieved by this patching process had little or no bearing upon the facts of conduct and of character; for he set to work to construct a completely new science, a science different from and independent of psychology, a science of behavior, of conduct, and of character, for which he proposed the name of "Ethology." At the end of the century, or a little later, my lamented friend, Dr. Charles Mercier, repeated this significant attempt. He was an ardent disciple of Herbert Spencer, and had written several well-known and forcible expositions of Spencerian psychology. Then, seemingly in blissful ignorance of J. S. Mill's proposal, he also, realizing that his psychology threw little or no light upon human action, conduct, or behavior, proposed to construct a new science of behavior. This time the name given to the new science was "Praxiology."

It was at this time that I was beginning to struggle with the fundamentals of psychology. And it seemed to me that both Mill and Mercier were in error; that what was needed was not a new science of behavior under a new Greek name, but rather a reform of psychology, consisting in a greater attention to the facts of behavior or conduct, and in the formulation of some theory of human action less inadequate than the hedonism of Mill and Bain, the ideo-motor theory of the intellectualists, or the mechanical reflex-theory of the Spencerian psychologists. I gave expression to this view in my first book, by proposing to define psychology as the positive science of conduct. I further defended this definition and expounded the need of this reform in my "Introduction to Social Psychology" (1908). And in 1912 I published my little book entitled "Psychology, the Study of Behavior." I also proposed that distinction between psychology and physiology which Dr. Watson accepts, namely, that physiology studies the processes of organs and tissues, while psychology studies the total activities of the organism. Further, in the year 1901, I had begun to practice strictly behavioristic experiment upon infants, making a strictly objective or behavioristic study of the development of color discrimination in my children; by this means I was able to demonstrate for the first time the capacity for color discrimination as early as the second half-year after birth. That is to say, I practiced with good results, as early as the year 1901, the principles which Dr. Watson began to expound and apply some ten years later.

Source: Classics in the History of Psychology, an Internet resource developed by Christopher D. Green, York University, Ontario. http://psychclassics.yorku.ca/Watson/Battle.
Document 4
Our fourth document is part of psychologist George Miller's delightfully titled paper "The Magical Number Seven, Plus or Minus Two," published in 1956, the same year as philosopher Ullin Place's landmark article, "Is Consciousness a Brain Process?" It is never easy to extract a coherent and intelligible piece from a technical scientific text, but in this case the effort is worthwhile in order to present, in the author's own words, his important distinction between bits (that is, total amounts of information) and chunks (that is, the groupings we impose on bits of information in order to make them more memorable). He starts the article with a playful confession: "My problem is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals. This number assumes a variety of disguises, being sometimes a little larger and sometimes a little smaller than usual, but never changing so much as to be unrecognizable. The persistence with which this number plagues me is far more than a random accident. There is, to quote a famous senator, a design behind it, some pattern governing its appearances. Either there really is something unusual about the number or else I am suffering from delusions of persecution." There immediately follows some experimental data on memory that it would be inappropriate to reproduce here. We begin our extract with his summary of this first section.

The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information
George A. Miller
The Span of Immediate Memory
Let me summarize the situation in this way. There is a clear and definite limit to the accuracy with which we can identify absolutely the magnitude of a unidimensional stimulus variable. I would propose to call this limit the span of absolute judgment, and I maintain that for unidimensional judgments this span is usually somewhere in the neighborhood of seven. We are not completely at the mercy of this limited span, however, because we have a variety of techniques for getting around it and increasing the accuracy of our judgments. The three most important of these devices are (a) to make relative rather than absolute judgments; or, if that is not possible, (b) to increase the number of dimensions along which the stimuli can differ; or (c) to arrange the task in such a way that we make a sequence of several absolute judgments in a row. . . .

In spite of the coincidence that the magical number seven appears in both places, the span of absolute judgment and the span of immediate memory are quite different kinds of limitations that are imposed on our ability to process information. Absolute judgment is limited by the amount of information. Immediate memory is limited by the number of items. In order to capture this distinction in somewhat picturesque terms, I have fallen into the custom of distinguishing between bits of information and chunks of information. Then I can say that the number of bits of information is constant for absolute judgment and the number of chunks of information is constant for immediate memory. The span of immediate memory seems to be almost independent of the number of bits per chunk, at least over the range that has been examined to date.

The contrast of the terms bit and chunk also serves to highlight the fact that we are not very definite about what constitutes a chunk of information. For example, the memory span of five words that Hayes obtained when each word was drawn at random from a set of 1000 English monosyllables might just as appropriately have been called a memory span of 15 phonemes, since each word had about three phonemes in it. Intuitively, it is clear that the subjects were recalling five words, not 15 phonemes, but the logical distinction is not immediately apparent. We are dealing here with a process of organizing or grouping the input into familiar units or chunks, and a great deal of learning has gone into the formation of these familiar units.
Recoding
In order to speak more precisely, therefore, we must recognize the importance of grouping or organizing the input sequence into units or chunks. Since the memory span is a fixed number of chunks, we can increase the number of bits of information that it contains simply by building larger and larger chunks, each chunk containing more information than before. A man just beginning to learn radiotelegraphic code hears each dit and dah as a separate chunk. Soon he is able to organize these sounds into letters and then he can deal with the letters as chunks. Then the letters organize themselves as words, which are still larger chunks, and he begins to hear whole phrases. I do not mean that each step is a discrete process, or that plateaus must appear in his learning curve, for surely the levels of organization are achieved at different rates and overlap each other during the learning process. I am simply pointing to the obvious fact that the dits and dahs are organized by learning into patterns and that as these larger chunks emerge the amount of message that the operator can remember increases correspondingly. In the terms I am proposing to use, the operator learns to increase the bits per chunk.

In communication theory, this process would be called recoding. The input is given in a code that contains many chunks with few bits per chunk. The operator recodes the input into another code that contains fewer chunks with more bits per chunk. There are many ways to do this recoding, but probably the simplest is to group the input events, apply a new name to the group, and then remember the new name rather than the original input events. Since I am convinced that this process is a very general and important one for psychology, I want to tell you about a demonstration experiment that should make perfectly explicit what I am talking about. This experiment was conducted by Sidney Smith and was reported by him before the Eastern Psychological Association in 1954. . . .

It is a little dramatic to watch a person get 40 binary digits in a row and then repeat them back without error. However, if you think of this merely as a mnemonic trick for extending the memory span, you will miss the more important point that is implicit in nearly all such mnemonic devices. The point is that recoding is an extremely powerful weapon for increasing the amount of information that we can deal with. In one form or another we use recoding constantly in our daily behavior. . . .
Summary
I have come to the end of the data that I wanted to present, so I would like now to make some summarizing remarks. First, the span of absolute judgment and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process, and remember. By organizing the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break (or at least stretch) this informational bottleneck.

Second, the process of recoding is a very important one in human psychology and deserves much more explicit attention than it has received. In particular, the kind of linguistic recoding that people do seems to me to be the very lifeblood of the thought processes. Recoding procedures are a constant concern to clinicians, social psychologists, linguists, and anthropologists, and yet, probably because recoding is less accessible to experimental manipulation than nonsense syllables or T mazes, the traditional experimental psychologist has contributed little or nothing to their analysis. Nevertheless, experimental techniques can be used, methods of recoding can be specified, behavioral indicants can be found. And I anticipate that we will find a very orderly set of relations describing what now seems an uncharted wilderness of individual differences.

Third, the concepts and measures provided by the theory of information provide a quantitative way of getting at some of these questions. The theory provides us with a yardstick for calibrating our stimulus materials and for measuring the performance of our subjects. In the interests of communication I have suppressed the technical details of information measurement and have tried to express the ideas in more familiar terms; I hope this paraphrase will not lead you to think they are not useful in research. Informational concepts have already proved valuable in the study of discrimination and of language; they promise a great deal in the study of learning and memory; and it has even been proposed that they can be useful in the study of concept formation. A lot of questions that seemed fruitless twenty or thirty years ago may now be worth another look. In fact, I feel that my story here must stop just as it begins to get really interesting.

And finally, what about the magical number seven? What about the seven wonders of the world, the seven seas, the seven deadly sins, the seven daughters of Atlas in the Pleiades, the seven ages of man, the seven levels of hell, the seven primary colors, the seven notes of the musical scale, and the seven days of the week? What about the seven-point rating scale, the seven categories for absolute judgment, the seven objects in the span of attention, and the seven digits in the span of immediate memory? For the present I propose to withhold judgment. Perhaps there is something deep and profound behind all these sevens, something just calling out for us to discover it. But I suspect that it is only a pernicious, Pythagorean coincidence.

Source: Miller, George A. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63: 81–97.
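Miller's arithmetic of bits and chunks, and the recoding trick behind Smith's demonstration, lend themselves to a short worked example. The Python sketch below is my own illustration rather than anything in Miller's paper (the sample digit string is invented): it first computes the information carried by the Hayes word-span task quoted above, then regroups a binary string into octal chunks, the kind of renaming that lets a practiced subject report back long runs of binary digits.

```python
import math

# Hayes's task: a span of five words, each drawn at random from a
# set of 1000 English monosyllables.
vocabulary_size = 1000
span_in_chunks = 5
bits_per_chunk = math.log2(vocabulary_size)      # about 10 bits per word
total_bits = span_in_chunks * bits_per_chunk     # about 50 bits retained
print(f"{bits_per_chunk:.1f} bits per chunk, {total_bits:.0f} bits in all")

# Recoding in Smith's sense: group binary digits in threes and rename
# each group with a single octal digit, so the same information
# occupies fewer, richer chunks.
binary = "101000100111001110"   # 18 chunks of 1 bit each
octal = "".join(str(int(binary[i:i + 3], 2)) for i in range(0, len(binary), 3))
print(octal)                     # "504716": 6 chunks of 3 bits each
```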
Document 5
The next two documents bring us to the heart of the philosophical debate over consciousness in the last quarter of the twentieth century. Thomas Nagel's classic text, "What Is It Like to Be a Bat?" first appeared in the Philosophical Review in October 1974 and has probably been reprinted and quoted more than any other comparable text in the thirty years since. If any of our documents deserves to be set out in full, Nagel's article must surely be the one, but lack of space can be turned to advantage by allowing us to focus on the core of the argument. It is so lucid and makes its points so eloquently that any further introduction would be superfluous.

What Is It Like to Be a Bat?
Thomas Nagel

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism. There may be further implications about the form of the experience; there may even (though I doubt it) be implications about the behavior of the organism. But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism. . . .

Let me first try to state the issue somewhat more fully than by referring to the relation between the subjective and the objective, or between the pour-soi and the en-soi. This is far from easy. Facts about what it is like to be an X are very peculiar, so that some may be inclined to doubt their reality, or the significance of claims about them. To illustrate the connection between subjectivity and a point of view, and to make evident the importance of subjective features, it will help to explore the matter in relation to an example that brings out clearly the divergence between the two types of conception, subjective and objective.

I assume we all believe that bats have experience. After all, they are mammals, and there is no more doubt that they have experience than that mice or pigeons or whales have experience. I have chosen bats instead of wasps or flounders because if one travels too far down the phylogenetic tree, people gradually shed their faith that there is experience there at all. Bats, although more closely related to us than those other species, nevertheless present a range of activity and a sensory apparatus so different from ours that the problem I want to pose is exceptionally vivid (though it certainly could be raised with other species). Even without the benefit of philosophical reflection, anyone who has spent some time in an enclosed space with an excited bat knows what it is to encounter a fundamentally alien form of life.

I have said that the essence of the belief that bats have experience is that there is something that it is like to be a bat. Now we know that most bats (the microchiroptera, to be precise) perceive the external world primarily by sonar, or echolocation, detecting the reflections, from objects within range, of their own rapid, subtly modulated, high-frequency shrieks. Their brains are designed to correlate the outgoing impulses with the subsequent echoes, and the information thus acquired enables bats to make precise discriminations of distance, size, shape, motion, and texture comparable to those we make by vision. But bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.
This appears to create difficulties for the notion of what it is like to be a bat. We must consider whether any method will permit us to extrapolate to the inner life of the bat from our own case, and if not, what alternative methods there may be for understanding the notion. Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.

To the extent that I could look and behave like a wasp or a bat without changing my fundamental structure, my experiences would not be anything like the experiences of those animals. On the other hand, it is doubtful that any meaning can be attached to the supposition that I should possess the internal neurophysiological constitution of a bat. Even if I could by gradual degrees be transformed into a bat, nothing in my present constitution enables me to imagine what the experiences of such a future stage of myself thus metamorphosed would be like. The best evidence would come from the experiences of bats, if we only knew what they were like.

So if extrapolation from our own case is involved in the idea of what it is like to be a bat, the extrapolation must be incompletable. We cannot form more than a schematic conception of what it is like. For example, we may ascribe general types of experience on the basis of the animal's structure and behavior. Thus we describe bat sonar as a form of three-dimensional forward perception; we believe that bats feel some versions of pain, fear, hunger, and lust, and that they have other, more familiar types of perception besides sonar. But we believe that these experiences also have in each case a specific subjective character, which it is beyond our ability to conceive. And if there is conscious life elsewhere in the universe, it is likely that some of it will not be describable even in the most general experiential terms available to us. (The problem is not confined to exotic cases, however, for it exists between one person and another. The subjective character of the experience of a person deaf and blind from birth is not accessible to me, for example, nor presumably is mine to him. This does not prevent us each from believing that the other's experience has such a subjective character.)

If anyone is inclined to deny that we can believe in the existence of facts like this whose exact nature we cannot possibly conceive, he should reflect that in contemplating the bats we are in much the same position that intelligent bats or Martians would occupy if they tried to form a conception of what it was like to be us. The structure of their own minds might make it impossible for them to succeed, but we know they would be wrong to conclude that there is not anything precise that it is like to be us: that only certain general types of mental state could be ascribed to us (perhaps perception and appetite would be concepts common to us both; perhaps not). We know they would be wrong to draw such a skeptical conclusion because we know what it is like to be us. And we know that while it includes an enormous amount of variation and complexity, and while we do not possess the vocabulary to describe it adequately, its subjective character is highly specific, and in some respects describable in terms that can be understood only by creatures like us. The fact that we cannot expect ever to accommodate in our language a detailed description of Martian or bat phenomenology should not lead us to dismiss as meaningless the claim that bats and Martians have experiences fully comparable in richness of detail to our own. It would be fine if someone were to develop concepts and a theory that enabled us to think about those things; but such an understanding may be permanently denied to us by the limits of our nature. And to deny the reality or logical significance of what we can never describe or understand is the crudest form of cognitive dissonance.

This brings us to the edge of a topic that requires much more discussion than I can give it here: namely, the relation between facts on the one hand and conceptual schemes or systems of representation on the other. My realism about the subjective domain in all its forms implies a belief in the existence of facts beyond the reach of human concepts. Certainly it is possible for a human being to believe that there are facts which humans never will possess the requisite concepts to represent or comprehend. Indeed, it would be foolish to doubt this, given the finiteness of humanity's expectations. After all, there would have been transfinite numbers even if everyone had been wiped out by the Black Death before Cantor discovered them. But one might also believe that there are facts which could not ever be represented or comprehended by human beings, even if the species lasted forever—simply because our structure does not permit us to operate with concepts of the requisite type. This impossibility might even be observed by other beings, but it is not clear that the existence of such beings, or the possibility of their existence, is a precondition of the significance of the hypothesis that there are humanly inaccessible facts. (After all, the nature of beings with access to humanly inaccessible facts is presumably itself a humanly inaccessible fact.) Reflection on what it is like to be a bat seems to lead us, therefore, to the conclusion that there are facts that do not consist in the truth of propositions expressible in a human language. We can be compelled to recognize the existence of such facts without being able to state or comprehend them.

Source: Nagel, Thomas. 1974. What is it like to be a bat? Philosophical Review 83 (4): 435–450.
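The objective, third-person side of echolocation is easy to state, which is precisely Nagel's foil. A sonar system infers distance from the delay between an outgoing pulse and its returning echo, as the sketch below shows (an editorial illustration assuming the speed of sound in air, not anything from Nagel's text); on Nagel's argument, such a complete physical description still tells us nothing about what the perception is like for the bat.

```python
SPEED_OF_SOUND = 343.0  # metres per second in air at about 20 °C

def target_distance(echo_delay: float) -> float:
    """Distance implied by an echo delay: the pulse travels out and back."""
    return SPEED_OF_SOUND * echo_delay / 2

# An echo returning after 6 milliseconds places the target about a metre away.
print(f"{target_distance(0.006):.2f} m")  # 1.03 m
```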
Document 6
John Searle's "Chinese Room" ranks alongside Nagel's bat as one of the most discussed of the word-pictures that have helped focus philosophers' minds on the key issues of consciousness. With the bat, the question was about subjective experience and the personal point of view. With the Chinese Room, the question is whether meaning is computational. Our sixth document contains the first published account of the Chinese Room and is part of "Minds, Brains, and Programs," which acted as the target article for a symposium on the computability of mind in Behavioral and Brain Sciences in 1980. Since its first appearance, Searle's Gedankenexperiment ("thought experiment") has been retold many times and refined in response to criticisms, but I still find that its original version is a joy to return to for its uncluttered clarity.

Minds, Brains, and Programs
John R. Searle

What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.

I have no objection to the claims of weak AI, at least as far as this article is concerned. My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims. I will consider the work of Roger Schank and his colleagues at Yale, because I am more familiar with it than I am with any other similar claims, and because it provides a very clear example of the sort of work I wish to examine. But nothing that follows depends upon the details of Schank's programs. . . .

One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program."

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view—that is, from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view—from the point of view of someone reading my "answers"—the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.

1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding, since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same—or perhaps more of the same—as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested—though certainly not demonstrated—by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding. Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles—that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.

Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean.
• 307
Source: Searle, John R. 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3: 417–457. Reprinted with the permission of Cambridge University Press.
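For readers who come to these debates from computing, the sheer formality of Searle's rule-following is easy to demonstrate for oneself. The short Python sketch below is my own toy illustration, not anything in Searle's paper: the "rule book" is an invented lookup table, and the Chinese strings are arbitrary stand-ins. The point it makes, though, is Searle's: the procedure trades only in the shapes of symbols, and nothing in it requires, or supplies, any grasp of what they mean.

```python
# A toy "Chinese Room." The rule book is an invented lookup table
# (my assumption for the sketch, not Searle's actual example): it
# pairs uninterpreted input shapes with output shapes.
RULE_BOOK = {
    "你好吗？": "我很好。",
    "你叫什么名字？": "我没有名字。",
}

def operate_room(batch: str) -> str:
    """Follow the rules: match the incoming squiggle, hand back the
    squoggle the book dictates. Formal symbol manipulation is the
    whole procedure; no semantics enters at any point."""
    return RULE_BOOK.get(batch, "请再说一遍。")  # default: yet another squiggle

print(operate_room("你好吗？"))  # fluent-looking output, zero understanding
```

However large the rule book grows, the operator's position is unchanged, which is just Searle's case against the first claim of strong AI.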
Document 7

The seventh document is another Nobel Prize presentation speech, the one made by Professor David Ottoson of the Royal Caroline Institute, when Roger Sperry, David Hubel, and Torsten Wiesel won the prize in Physiology or Medicine in 1981. In some ways I am uneasy about using these particular texts as documents, but original scientific papers do not easily lend themselves to adaptation that would illustrate, for a general readership, their authors' importance in their chosen field. These Nobel citations, however, do encapsulate that significance in an appropriate manner, albeit in words other than those of the pioneering scientists themselves. In the present case, the inclusion of Roger Sperry in the same award as Hubel and Wiesel provides a way for me to pay tribute to a major researcher whose work was not given the room in the main text of this book that its importance deserves. Such are the pressures of space on a wide-ranging review of a large topic such as consciousness studies. These particular prizewinners also serve to remind readers how long a time twenty years is in scientific research into consciousness. Although nobody would question the importance of the work carried out by these three men, it is notable that the discoveries for which they were honored—hemispheric specialization and hierarchical processing in the visual system—have both been the subject of criticism and modification in the intervening years.
Nobel Presentation Speech to Roger Sperry, David Hubel, and Torsten Wiesel, for Neuroscientific Research
[Translation from the Swedish text]
David Ottoson

Your Majesties, Your Royal Highnesses, Ladies and Gentlemen.

One day in October 1649, René Descartes, the French philosopher and mathematician acknowledged as the greatest brain researcher of the period, arrived in Stockholm at the pressing invitation of Queen Christina. It was with much hesitation that Descartes went to Sweden, "the land of bears between rocks and ice," as he wrote. In the letters to his friends, he complained bitterly that he was obliged to present himself at the Royal Palace at five o'clock each morning to instruct the young queen in philosophy, so avid was she for knowledge. Modern brain research scientists and followers in the Cartesian footsteps are not faced with the same demands as winners of the Nobel Prize, but they are met with other tribulations—and expectations. Descartes, with the help of philosophy, sought to find the answer to his questions of the functions of the mind. Later research has had other means at its disposal and has tried to feel its way forward by other methods. Sperry has succeeded with sophisticated methods to extract from the brain some of its best guarded secrets and has allowed us to look into a world which until now has been nearly completely closed to us. Hubel and Wiesel have succeeded in breaking the code of the message which the eyes send to the brain and have thereby given us insight into the neuronal processes underlying our visual experiences.

The brain consists of two halves, hemispheres, which are structurally identical. Does this mean that we have two brains or that the two hemispheres have different tasks? The answer to this question can appear impossible to find because the brain halves are united by millions of nerve threads and, therefore, work in a complete functional harmony. However, it has been known for more than a hundred years that despite their similarity and close linkage the two hemispheres have in part different tasks to fulfill. The left hemisphere is specialized for speech and has, therefore, been considered absolutely superior to the right hemisphere. For the right hemisphere it has been difficult to find a role, and it has generally been regarded as a "sleeping partner" of its left companion. In a way the roles of the two hemispheres were somewhat like those of man and wife in an old-time marriage.

In the beginning of the 1960s Sperry had the occasion to study some patients in whom the connections between the two hemispheres had been severed. The surgical intervention had been undertaken as a last resort to alleviate the epileptic seizures from which the patients suffered. In most of them an improvement occurred and there was a decrease in the frequency of their epileptic fits. Otherwise, the operation did not appear to be accompanied by any changes in the personality of the patients. However, Sperry was able, using brilliantly designed test methods, to demonstrate that the two hemispheres in these patients each had its own stream of conscious awareness, perceptions, thoughts, ideas and memories, all of which were cut off from the corresponding experiences in the opposite hemisphere. The left brain half is, as Sperry was able to show, superior to the right in abstract thinking, interpretation of symbolic relationships and in carrying out detailed analysis. It can speak, write, carry out mathematical calculations and in its general function is rather reminiscent of a computer. Furthermore, it is the leading hemisphere in the control of the motor system, the executive and in some respects the aggressive brain half. It is with this brain half that we communicate.

The right cerebral hemisphere on the other hand is mute and in essence lacks the possibility to reach the outside world. It cannot write and can only read and understand the meaning of simple words in noun form, and does not grasp the meaning of adjectives or verbs. It almost entirely lacks the ability to count and can only carry out simple additions up to 20. It completely lacks the ability to subtract, multiply and divide. Because of its muteness, the right brain half gives the impression of being inferior to the left. However, Sperry in his investigations was able to reveal that the right hemisphere in many ways is clearly superior to the left. Foremost, this concerns the capacity for concrete thinking, the apprehension and processing of spatial patterns, relations and transformations. It is superior to the left hemisphere in the perception of complex sounds and in the appreciation of music; it recognizes melodies more readily and also can accurately distinguish voices and tones. It is, too, absolutely superior to the left hemisphere in perception of nondescript patterns. It is with the right hemisphere we recognize the face of an acquaintance, the topography of a town or landscape earlier seen.

It is nearly 50 years since Pavlov, the great Russian physiologist, put forward the suggestion that mankind can be divided into thinkers and artists. Pavlov was perhaps not entirely wrong in making this proposal. Today we know from Sperry's work that the left hemisphere is cool and logical in its thinking, while the right hemisphere is the imaginative, artistically creative half of the brain. Perhaps it is so that in thinkers the left hemisphere is dominant whereas in artists it is the right.

Hubel and Wiesel came in the mid-50s to the laboratory of the neurophysiologist S. W. Kuffler in Baltimore. Kuffler had at this time completed a series of investigations marked by an extraordinary experimental elegance in which he demonstrated how the picture that falls into the eyes is processed by the cells of the retina. Kuffler, who passed away a year ago, had by his work indicated the lines on which to continue analysis of the information processing of the visual system. This is, therefore, a fitting occasion on which to pay tribute to the memory of Kuffler for his important contribution.

The signal message that the eye sends to the brain can be regarded as a secret code to which only the brain possesses the key and can interpret the message. Hubel and Wiesel have succeeded in breaking the code. This they have achieved by tapping the signals from the nerve cells in the various cell layers of the brain cortex. Thus, they have been able to show how the various components of the retinal image are read out and interpreted by the cortical cells in respect to contrast, linear patterns and movement of the picture over the retina. The cells are arranged in columns, the analysis takes place in a strictly ordered sequence from one nerve cell to another, and every nerve cell is responsible for one particular detail in the picture pattern.

Hubel and Wiesel in their investigations were also able to show that the ability of the cortical cells to interpret the code of the impulse message from the retina develops early during a period directly after birth. A prerequisite for this development to take place is that the eye is subjected to visual experiences. If during this period one eye is sutured even for a few days, this can result in permanently impaired vision because the capacity of the brain to interpret the picture has not developed normally. For this to take place it is not only essential that the eye is reached by light but also that a sharp image is formed on the retina and that the retinal image has a pattern of contours and contrasts. This discovery reveals that the brain has a high degree of plasticity at an early stage immediately after birth.

Hubel and Wiesel have disclosed one of the most well guarded secrets of the brain: the way by which its cells decode the message which the brain receives from the eyes. Thanks to Hubel and Wiesel we now begin to understand the inner language of the brain. Their discovery of the plasticity of the brain cortex during an early period of our life has implications reaching far beyond the field of visual physiology and proves the importance of a richly varied sensory input for the development of the higher functions of the brain.

Dr. Sperry, Dr. Hubel and Dr. Wiesel, you have with your discoveries written one of the most fascinating chapters in the history of brain research. You, Dr. Sperry, have given us more profound insights into the higher functions of the brain than all the knowledge acquired in the twentieth century. You, Dr. Hubel and Dr. Wiesel, have translated the symbolic calligraphy of the brain cortex. The deciphering of the hieroglyphic characters of the ancient Egyptians has been denoted as one of the greatest advances in the history of philology. By breaking the code of the enigmatic signals of the visual system you have made an achievement which for all time will stand out as one of the most important in the history of brain research. It is a privilege and pleasure for me to convey to you the warmest congratulations of the Nobel Assembly of the Karolinska Institute and to invite you to receive your Nobel Prize from the hands of His Majesty the King.

Source: The Nobel Foundation 1981. Reprinted with permission. http://www.nobel.se/medicine/laureates/1981/presentation-speech.html.
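The "breaking of the code" that Ottoson describes can be given a crude computational gloss. The Python sketch below is my own cartoon, not Hubel and Wiesel's model: it treats a single cortical "simple cell" as a small filter (the weights are invented for the example) that responds only when a bar of its preferred orientation falls on its patch of the retinal image. Feed it a vertical bar and it fires; rotate the bar and it falls silent.

```python
import numpy as np

# A cartoon "simple cell": a linear filter tuned to vertical bars.
# Real cortical cells are far richer; this shows only the principle
# that one cell can signal one particular detail of the image.
vertical_filter = np.array([
    [-1.0, 2.0, -1.0],   # excitatory centre column,
    [-1.0, 2.0, -1.0],   # inhibitory flanks
    [-1.0, 2.0, -1.0],
])

def cell_response(patch: np.ndarray) -> float:
    """Half-rectified dot product: respond only to the preferred pattern."""
    return max(0.0, float(np.sum(patch * vertical_filter)))

vertical_bar   = np.array([[0, 1, 0]] * 3, dtype=float)
horizontal_bar = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=float)

print(cell_response(vertical_bar))    # strong response: 6.0
print(cell_response(horizontal_bar))  # no response: 0.0
```

Stacking layers of such feature detectors, each reading the one below, is one simple way to picture the "strictly ordered sequence from one nerve cell to another" in the speech, though, as noted above, the strictly hierarchical picture has itself been qualified in the years since the award.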
Document 8

No representative selection of writings in the science of consciousness would be complete without a contribution from Daniel Dennett. His output has been so large, and his fields of interest so many, that the choice is difficult. So, as with John Searle, I have chosen to go for the original crisp presentation of an idea that has since evolved and been presented in other, more expansive works. The document is an extract from the target article he coauthored with Marcel Kinsbourne for the journal Behavioral and Brain Sciences in 1991. The paper focuses on the conscious experience of time and includes a discussion of Benjamin Libet's findings and phenomena such as the phi effect. As a preliminary, Dennett launches his assault on what he dubs the "Cartesian Theater" understanding of perception, and it is this opening section that is reproduced below. It also includes the presentation of his rival Multiple Drafts model of how the brain responds to sensory inputs.

Time and the Observer: The Where and When of Consciousness in the Brain
Daniel Dennett and Marcel Kinsbourne
Cartesian Materialism: Is There a "Central Observer" in the Brain?

Wherever there is a conscious mind, there is a point of view. A conscious mind is an observer, who takes in the information that is available at a particular (roughly) continuous sequence of times and places in the universe. A mind is thus a locus of subjectivity, a thing it is like something to be. What it is like to be that thing is partly determined by what is available to be observed or experienced along the trajectory through space-time of that moving point of view, which for most practical purposes is just that: a point. For instance, the startling dissociation of the sound and appearance of distant fireworks is explained by the different transmission speeds of sound and light, arriving at the observer (at that point) at different times, even though they left the source simultaneously.

But if we ask where precisely in the brain that point of view is located, the simple assumptions that work so well on larger scales of space and time break down. It is now quite clear that there is no single point in the brain where all information funnels in, and this fact has some far from obvious consequences. Light travels much faster than sound, as the fireworks example reminds us, but it takes longer for the brain to process visual stimuli than to process auditory stimuli. As Pöppel has pointed out, thanks to these counterbalancing differences, the "horizon of simultaneity" is about 10 meters: light and sound that leave the same point about 10 meters from the observer's sense organs produce neural responses that are "centrally available" at the same time.

Can we make this figure more precise? There is a problem. The problem is not just measuring the distances from the external event to the sense organs, or the transmission speeds in the various media, or allowing for individual differences. The more fundamental problem is deciding what to count as the "finish line" in the brain. Pöppel obtained his result by comparing behavioral measures: mean reaction times (button-pushing) to auditory and visual stimuli. The difference ranges between 30 and 40 msec, the time it takes sound to travel approximately 10 meters (the time it takes light to travel 10 meters is infinitesimally different from zero). Pöppel used a peripheral finish line—external behavior—but our natural intuition is that the experience of the light and sound happens between the time the vibrations strike our sense organs and the time we manage to push the button to signal that experience. And it happens somewhere centrally, somewhere in the brain on the excited paths between the sense organ and muscles that move the finger. It seems that if we could say exactly where, we could infer exactly when the experience happened. And vice versa: if we could say exactly when it happened, we could infer where in the brain conscious experience was located.

This picture of how conscious experience must sit in the brain is a natural extrapolation of the familiar and undeniable fact that for macroscopic time intervals, we can indeed order events into the categories "not yet observed" and "already observed" by locating the observer and plotting the motions of the vehicles of information relative to that point. But when we aspire to extend this method to explain phenomena involving very short time intervals, we encounter a logical difficulty: If the "point" of view of the observer is spread over a rather large volume in the observer's brain, the observer's own subjective sense of sequence and simultaneity must be determined by something other than a unique "order of arrival," since order of arrival is incompletely defined until we specify the relevant destination. If A beats B to one finish line but B beats A to another, which result fixes subjective sequence in consciousness? Which point or points of "central availability" would "count" as a determiner of experienced order, and why?

Consider the time course of normal visual information processing. Visual stimuli evoke trains of events in the cortex that gradually yield content of greater and greater specificity. At different times and different places, various "decisions" or "judgments" are made: more literally, parts of the brain are caused to go into states that differentially respond to different features, e.g., first mere onset of stimulus, then shape, later color (in a different pathway), motion, and eventually object recognition. It is tempting to suppose that there must be some place in the brain where "it all comes together" in a multi-modal representation or display that is definitive of the content of conscious experience in at least this sense: the temporal properties of the events that occur in that particular locus of representation determine the temporal properties—of sequence, simultaneity, and real-time onset, for instance—of the subjective "stream of consciousness." This is the error of thinking we intend to expose. "Where does it all come together?" The answer, we propose, is Nowhere. Some of the contentful states distributed around in the brain soon die out, leaving no traces. Others do leave traces, on subsequent verbal reports of experience and memory, on "semantic readiness" and other varieties of perceptual set, on emotional state, behavioral proclivities, and so forth. Some of these effects—for instance, influences on subsequent verbal reports—are at least symptomatic of consciousness. But there is no one place in the brain through which all these causal trains must pass in order to deposit their contents "in consciousness."

The brain must be able to "bind" or "correlate" and "compare" various separately discriminated contents, but the processes that accomplish these unifications are themselves distributed, not gathered at some central decision point, and as a result, the "point of view of the observer" is spatially smeared. If brains computed at near the speed of light, as computers do, this spatial smear would be negligible. But given the relatively slow transmission and computation speeds of neurons, the spatial distribution of processes creates significant temporal smear—ranging, as we shall see, up to several hundred milliseconds—within which range the normal common sense assumptions about timing and arrival at the observer need to be replaced. For many tasks, the human capacity to make conscious discriminations of temporal order drops to chance when the difference in onset is on the order of 50 msec (depending on stimulus conditions), but, as we shall see, this variable threshold is the result of complex interactions, not a basic limit on the brain's capacity to make the specialized order judgments required in the interpretation and coordination of perceptual and motor phenomena. We need other principles to explain the ways in which subjective temporal order is composed, especially in cases in which the brain must cope with rapid sequences occurring at the limits of its powers of temporal resolution. As usual, the performance of the brain when put under strain provides valuable clues about its general modes of operation.

Descartes, early to think seriously about what must happen inside the body of the observer, elaborated an idea that is superficially so natural and appealing that it has permeated our thinking about consciousness ever since and permitted us to defer considering the perplexities—until now. Descartes decided that the brain did have a center: the pineal gland, which served as the gateway to the conscious mind. It is the only organ in the brain that is in the midline, rather than paired, with left and right versions. It looked different, and since its function was then quite inscrutable (and still is), Descartes posited a role for it: in order for a person to be conscious of something, traffic from the senses had to arrive at this station, where it thereupon caused a special—indeed magical—transaction to occur between the person's material brain and immaterial mind. When the conscious mind then decided on a course of bodily action, it sent a message back "down" to the body via the pineal gland. The pineal gland, then, is like a theater, within which is displayed information for perusal by the mind.

Descartes' vision of the pineal's role as the turnstile of consciousness (we might call it the Cartesian bottleneck) is hopelessly wrong. The problems that face Descartes' interactionistic dualism, with its systematically inexplicable traffic between the realm of the material and the postulated realm of the immaterial, were already well appreciated in Descartes' own day, and centuries of reconsideration have only hardened the verdict: the idea of the Ghost in the Machine, as Ryle aptly pilloried it, is a non-solution to the problems of mind. But while materialism of one sort or another is now a received opinion approaching unanimity, even the most sophisticated materialists today often forget that once Descartes' ghostly res cogitans is discarded, there is no longer a role for a centralized gateway, or indeed for any functional center to the brain. The brain itself is Headquarters, the place where the ultimate observer is, but it is a mistake to believe that the brain has any deeper headquarters, any inner sanctum arrival at which is the necessary or sufficient condition for conscious experience.

Let us call the idea of such a centered locus in the brain Cartesian materialism, since it is the view one arrives at when one discards Descartes' dualism but fails to discard the associated imagery of a central (but material) Theater where "it all comes together." Once made explicit, it is obvious that it is a bad idea, not only because, as a matter of empirical fact, nothing in the functional neuroanatomy of the brain suggests such a general meeting place, but also because positing such a center would apparently be the first step in an infinite regress of too-powerful homunculi. If all the tasks Descartes assigned to the immaterial mind have to be taken over by a "conscious" subsystem, its own activity will either be systematically mysterious, or decomposed into the activity of further subsystems that begin to duplicate the tasks of the "non-conscious" parts of the whole brain. Whether or not anyone explicitly endorses Cartesian materialism, some ubiquitous assumptions of current theorizing presuppose this dubious view. We will show that the persuasive imagery of the Cartesian Theater, in its materialistic form, keeps reasserting itself, in diverse guises, and for a variety of ostensibly compelling reasons. Thinking in its terms is not an innocuous shortcut; it is a bad habit. One of its most seductive implications is the assumption that a distinction can always be drawn between "not yet observed" and "already observed." But, as we have just argued, this distinction cannot be drawn once we descend to the scale that places us within the boundaries of the spatio-temporal volume in which the various discriminations are accomplished. Inside this expanded "point of view" spatial and temporal distinctions lose the meanings they have in broader contexts.

The crucial features of the Cartesian Theater model can best be seen by contrasting it with the alternative we propose, the Multiple Drafts model: All perceptual operations, and indeed all operations of thought and action, are accomplished by multi-track processes of interpretation and elaboration that occur over hundreds of milliseconds, during which time various additions, incorporations, emendations, and overwritings of content can occur, in various orders. Feature-detections or discriminations only have to be made once. That is, once a localized, specialized "observation" has been made, the information content thus fixed does not have to be sent somewhere else to be rediscriminated by some "master" discriminator. In other words, it does not lead to a re-presentation of the already discriminated feature for the benefit of the audience in the Cartesian Theater. How a localized discrimination contributes to, and what effect it has on, the prevailing brain state (and thus awareness) can change from moment to moment, depending on what else is going on in the brain. Drafts of experience can be revised at a great rate, and no one is more correct than another. Each reflects the situation at the time it is generated. These spatially and temporally distributed content-fixations are themselves precisely locatable in both space and time, but their onsets do not mark the onset of awareness of their content. It is always an open question whether any particular content thus discriminated will eventually appear as an element in conscious experience.

These distributed content-discriminations yield, over the course of time, something rather like a narrative stream or sequence, subject to continual editing by many processes distributed around in the brain, and continuing indefinitely into the future. This stream of contents is only rather like a narrative because of its multiplicity; at any point in time there are multiple "drafts" of narrative fragments at various stages of "editing" in various places in the brain. Probing this stream at different intervals produces different effects, elicits different narrative accounts from the subject. If one delays the probe too long (overnight, say) the result is apt to be no narrative left at all—or else a narrative that has been digested or "rationally reconstructed" to the point that it has minimal integrity. If one probes "too early," one may gather data on how early a particular discrimination is achieved in the stream, but at the cost of disrupting the normal progression of the stream. Most importantly, the Multiple Drafts model avoids the tempting mistake of supposing that there must be a single narrative (the "final" or "published" draft) that is canonical—that represents the actual stream of consciousness of the subject, whether or not the experimenter (or even the subject) can gain access to it.

The main points at which this model disagrees with the competing tacit model of the Cartesian Theater may be summarized:

1. Localized discriminations are not precursors of re-presentations of the discriminated content for consideration by a more central discriminator.
2. The objective temporal properties of discriminatory states may be determined, but they do not determine temporal properties of subjective experience.
3. The "stream of consciousness" is not a single, definitive narrative. It is a parallel stream of conflicting and continuously revised contents, no one narrative thread of which can be singled out as canonical—as the true version of conscious experience.

The different implications of these two models will be exhibited by considering several puzzling phenomena that seem at first to indicate that the mind "plays tricks with time."

Source: Dennett, Daniel, and Kinsbourne, Marcel. 1991. Time and the observer: The where and when of consciousness in the brain. Behavioral and Brain Sciences 15: 183–247. Reprinted with the permission of Cambridge University Press.
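Pöppel's "horizon of simultaneity" rewards a moment's arithmetic, and readers may like to check it for themselves. The Python fragment below is my own back-of-envelope version, not anything in Dennett and Kinsbourne's paper; the nominal speed of sound and the 30 msec visual-minus-auditory lag are assumed figures drawn from the range the excerpt itself quotes.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed nominal value)
VISUAL_LAG = 0.030      # s: extra central processing time for vision relative
                        # to audition (the excerpt cites 30 to 40 msec)

# Over a few metres, light's flight time is effectively zero, so the
# sound's flight time alone must cancel the brain's visual processing
# lag for the two neural responses to be "centrally available" together.
horizon = SPEED_OF_SOUND * VISUAL_LAG
print(f"Horizon of simultaneity: about {horizon:.1f} metres")  # ~10.3 m
```

With the 40 msec figure the horizon comes out nearer 14 meters, which is why the excerpt treats "about 10 meters" as a round number rather than a constant of nature.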
Document 9

The penultimate document is perhaps the one that should be read even if all the others are ignored. Its subject matter is the central question for the science and philosophy of consciousness: why and how does physical processing give rise to a rich inner life of experience? And its author, David J. Chalmers, has emerged as a leading figure in the maturing international field of consciousness research. "Facing Up to the Problem of Consciousness" first appeared in printed form in the Journal of Consciousness Studies in 1995, but its contents had been laid before the public a year earlier at the very first of the biennial Tucson conferences titled "Toward a Science of Consciousness." The ideas distilled in the article were then set out in a far more developed way in Chalmers's 1996 book, The Conscious Mind. The extract from the article printed here consists of the first three sections, in which the author distinguishes clearly between the "easy" problems of consciousness, those that relate to brain function, and the "hard" problem of conscious experience itself.

Facing Up to the Problem of Consciousness
David J. Chalmers
I. Introduction

Consciousness poses the most baffling problems in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain. All sorts of mental phenomena have yielded to scientific investigation in recent years, but consciousness has stubbornly resisted. Many have tried to explain it, but the explanations always seem to fall short of the target. Some have been led to suppose that the problem is intractable, and that no good explanation can be given.

To make progress on the problem of consciousness, we have to confront it directly. In this paper, I first isolate the truly hard part of the problem, separating it from more tractable parts and giving an account of why it is so difficult to explain. I critique some recent work that uses reductive methods to address consciousness, and argue that these methods inevitably fail to come to grips with the hardest part of the problem. Once this failure is recognized, the door to further progress is opened. In the second half of the paper, I argue that if we move to a new kind of nonreductive explanation, a naturalistic account of consciousness can be given. I put forward my own candidate for such an account: a nonreductive theory based on principles of structural coherence and organizational invariance and a double-aspect view of information.
II. The Easy Problems and the Hard Problem

There is not just one problem of consciousness. "Consciousness" is an ambiguous term, referring to many different phenomena. Each of these phenomena needs to be explained, but some are easier to explain than others. At the start, it is useful to divide the associated problems of consciousness into "hard" and "easy" problems. The easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods.

The easy problems of consciousness include those of explaining the following phenomena:

• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behaviour;
• the difference between wakefulness and sleep.

All of these phenomena are associated with the notion of consciousness. For example, one sometimes says that a mental state is conscious when it is verbally reportable, or when it is internally accessible. Sometimes a system is said to be conscious of some information when it has the ability to react on the basis of that information, or, more strongly, when it attends to that information, or when it can integrate that information and exploit it in the sophisticated control of behaviour. We sometimes say that an action is conscious precisely when it is deliberate. Often, we say that an organism is conscious as another way of saying that it is awake.

There is no real issue about whether these phenomena can be explained scientifically. All of them are straightforwardly vulnerable to explanation in terms of computational or neural mechanisms. To explain access and reportability, for example, we need only specify the mechanism by which information about internal states is retrieved and made available for verbal report. To explain the integration of information, we need only exhibit mechanisms by which information is brought together and exploited by later processes. For an account of sleep and wakefulness, an appropriate neurophysiological account of the processes responsible for organisms' contrasting behaviour in those states will suffice. In each case, an appropriate cognitive or neurophysiological model can clearly do the explanatory work.

If these phenomena were all there was to consciousness, then consciousness would not be much of a problem. Although we do not yet have anything close to a complete explanation of these phenomena, we have a clear idea of how we might go about explaining them. This is why I call these problems the easy problems. Of course, "easy" is a relative term. Getting the details right will probably take a century or two of difficult empirical work. Still, there is every reason to believe that the methods of cognitive science and neuroscience will succeed.

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. If any problem qualifies as the problem of consciousness, it is this one.

In this central sense of "consciousness," an organism is conscious if there is something it is like to be that organism, and a mental state is conscious if there is something it is like to be in that state. Sometimes terms such as "phenomenal consciousness" and "qualia" are also used here, but I find it more natural to speak of "conscious experience" or simply "experience." Another useful way to avoid confusion is to reserve the term "consciousness" for the phenomena of experience, using the less loaded term "awareness" for the more straightforward phenomena described earlier. If such a convention were widely adopted, communication would be much easier. As things stand, those who talk about "consciousness" are frequently talking past each other.

The ambiguity of the term "consciousness" is often exploited by both philosophers and scientists writing on the subject. It is common to see a paper on consciousness begin with an invocation of the mystery of consciousness, noting the strange intangibility and ineffability of subjectivity, and worrying that so far we have no theory of the phenomenon. Here, the topic is clearly the hard problem—the problem of experience. In the second half of the paper, the tone becomes more optimistic, and the author's own theory of consciousness is outlined. Upon examination, this theory turns out to be a theory of one of the more straightforward phenomena—of reportability, of introspective access, or whatever. At the close, the author declares that consciousness has turned out to be tractable after all, but the reader is left feeling like the victim of a bait-and-switch. The hard problem remains untouched.
III. Functional Explanation

Why are the easy problems easy, and why is the hard problem hard? The easy problems are easy precisely because they concern the explanation of cognitive abilities and functions. To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained.

To explain reportability, for instance, is just to explain how a system could perform the function of producing reports on internal states. To explain internal access, we need to explain how a system could be appropriately affected by its internal states and use information about those states in directing later processes. To explain integration and control, we need to explain how a system's central processes can bring information contents together and use them in the facilitation of various behaviours. These are all problems about the explanation of functions.

How do we explain the performance of a function? By specifying a mechanism that performs the function. Here, neurophysiological and cognitive modelling are perfect for the task. If we want a detailed low-level explanation, we can specify the neural mechanism that is responsible for the function. If we want a more abstract explanation, we can specify a mechanism in computational terms. Either way, a full and satisfying explanation will result. Once we have specified the neural or computational mechanism that performs the function of verbal report, for example, the bulk of our work in explaining reportability is over.

In a way, the point is trivial. It is a conceptual fact about these phenomena that their explanation only involves the explanation of various functions, as the phenomena are functionally definable. All it means for reportability to be instantiated in a system is that the system has the capacity for verbal reports of internal information. All it means for a system to be awake is for it to be appropriately receptive to information from the environment and for it to be able to use this information in directing behaviour in an appropriate way. To see that this sort of thing is a conceptual fact, note that someone who says "you have explained the performance of the verbal report function, but you have not explained reportability" is making a trivial conceptual mistake about reportability. All it could possibly take to explain reportability is an explanation of how the relevant function is performed; the same goes for the other phenomena in question.

Throughout the higher-level sciences, reductive explanation works in just this way. To explain the gene, for instance, we needed to specify the mechanism that stores and transmits hereditary information from one generation to the next. It turns out that DNA performs this function; once we explain how the function is performed, we have explained the gene. To explain life, we ultimately need to explain how a system can reproduce, adapt to its environment, metabolize, and so on. All of these are questions about the performance of functions, and so are well-suited to reductive explanation. The same holds for most problems in cognitive science. To explain learning, we need to explain the way in which a system's behavioural capacities are modified in light of environmental information, and the way in which new information can be brought to bear in adapting a system's actions to its environment. If we show how a neural or computational mechanism does the job, we have explained learning. We can say the same for other cognitive phenomena, such as perception, memory, and language. Sometimes the relevant functions need to be characterized quite subtly, but it is clear that insofar as cognitive science explains these phenomena at all, it does so by explaining the performance of functions.

When it comes to conscious experience, this sort of explanation fails. What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioural functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience? A simple explanation of the functions leaves this question open.

There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says "I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene," then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says "I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced," they are not making a conceptual mistake. This is a nontrivial further question.

This further question is the key question in the problem of consciousness. Why doesn't all this information-processing go on "in the dark," free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery. There is an explanatory gap (a term due to Levine) between the functions and experience, and we need an explanatory bridge to cross it. A mere account of the functions stays on one side of the gap, so the materials for the bridge must be found elsewhere.

This is not to say that experience has no function. Perhaps it will turn out to play an important cognitive role. But for any role it might play, there will be more to the explanation of experience than a simple explanation of the function. Perhaps it will even turn out that in the course of explaining a function, we will be led to the key insight that allows an explanation of experience. If this happens, though, the discovery will be an extra explanatory reward. There is no cognitive function such that we can say in advance that explanation of that function will automatically explain experience.

To explain experience, we need a new approach. The usual explanatory methods of cognitive science and neuroscience do not suffice. These methods have been developed precisely to explain the performance of cognitive functions, and they do a good job of it. But as these methods stand, they are only equipped to explain the performance of functions. When it comes to the hard problem, the standard approach has nothing to say.

Source: Chalmers, David J. 1995. Facing up to the problem of consciousness. Journal of Consciousness Studies 2 (3): 200–219. Reprinted with permission.
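Chalmers's distinction can be made vivid in code. The little Python class below is my own toy, not anything in Chalmers's paper: it fully performs the function of reportability as he defines it, retrieving information about an internal state and making it available for verbal report (the stored "stimulus" is an invented example). Every line of it belongs to an easy problem; no line of it touches the hard one.

```python
class ToyReporter:
    """A system in which reportability is instantiated, functionally
    defined: it can access its own internal states and produce verbal
    reports about them."""

    def __init__(self) -> None:
        self.internal_state = {"stimulus": "vivid red patch", "intensity": 0.8}

    def report(self) -> str:
        # Retrieve internal information and format it for output; on the
        # functional definition, this mechanism is all that reportability
        # requires.
        s = self.internal_state
        return f"I register a {s['stimulus']} at intensity {s['intensity']}."

print(ToyReporter().report())
```

A complete mechanistic story about such a system, however far it were scaled up, would still leave Chalmers's further question untouched: why should performing this function be accompanied by experience?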
Document 10

The final document is an account of lucid dreaming by the leading researcher in the area, Stephen LaBerge. In its original setting, the piece is a commentary on a target article in Behavioral and Brain Sciences in the year 2000, but it works well also as a stand-alone presentation. Lucid dreaming is a fascinating and still controversial area of psychology, and the author is here both informative and a persuasive advocate for this field of research.

Lucid Dreaming: Evidence and Methodology
Stephen LaBerge

Just as dreaming provides a test case for theories of consciousness, lucid dreaming provides a test case for theories of dreaming. Although one is not usually explicitly aware that one is dreaming while in a dream, a remarkable exception sometimes occurs in which one possesses clear cognizance that one is dreaming. During such "lucid" dreams, one can reason clearly, remember the conditions of waking life, and act upon reflection or in accordance with plans decided upon before sleep. These cognitive functions, commonly associated only with waking consciousness, occur while one remains soundly asleep and vividly experiencing a dream world that is often nearly indistinguishable from the "real world." Theories of dreaming that do not account for lucidity are incomplete, and theories that do not allow for lucidity are incorrect.

Although lucid dreams have been reported since Aristotle, until recently many researchers doubted that the dreaming brain was capable of such a high degree of mental functioning and consciousness. Based on earlier studies showing that some of the eye movements of REM sleep corresponded to the reported direction of the dreamer's gaze, we asked subjects to carry out distinctive patterns of voluntary eye movements when they realized they were dreaming. The prearranged eye movement signals appeared on the polygraph records during REM, proving that the subjects had indeed been lucid during uninterrupted REM sleep.

Our studies of the physiology of lucid dreaming fit within the psychophysiological paradigm of dream research that Hobson has helped establish. Therefore, I naturally agree with Hobson et al. in believing it worthwhile to attempt to relate phenomenological and physiological data across a range of states including waking, NREM, and REM sleep. I also share Hobson's view that REM sleep is unique in many ways; for example, stable lucid dreams appear to be nearly exclusively found in REM.

As for the AIM model on which the Hobson et al. article focuses, I regard it as an improvement on the earlier Activation-Synthesis model. The AIM model makes many plausible and interesting connections, but still doesn't do justice to the full range and complexity of the varieties of dreaming consciousness accompanying REM sleep. One of the problems with AIM is that its three "dimensions" are actually each multidimensional. For example, from which brain area is "Activation" (A) measured? Obviously, A varies as a function of brain location. Hobson et al. admit as much when they propose to locate lucid dreaming in a dissociated AIM space with PFC more activated than it usually is. If this is true, then non-lucid dreaming would have to be characterized by a low value of A. Incidentally, there is no evidence to support the idea that lucid dreaming is in any sense a dissociated state. Still, the need for multiple A dimensions seems inescapable. Similarly, the "Information flow" (I) dimension is more complex than at first appears. Experimental evidence suggests that it is possible for one sense to remain awake, while others fall asleep. A further problem with the I "dimension" is the confounding of sensory input and motor output, as can be seen in several of Hobson et al.'s examples. Finally, "Mode of information processing" (M) attempts to reduce the vast neurochemical complexity of the brain to the global ratio of discharge rates of aminergic to cholinergic neurons. Is that really all there is to say about the neurochemical basis of consciousness? What about regional differences of function? What about the scores of other putative neurotransmitters and neuromodulators?

Perhaps due in part to the over-simplifications necessary to fit these multiple dimensions into an easy-to-visualize three, certain features of dreaming consciousness are misunderstood or exaggerated. For example, Hobson et al. say "self-reflection in dreams is generally found to be absent or greatly reduced relative to waking." However, the two studies cited suffered from weak design and extremely small sample sizes. Neither in fact actually compared frequencies of dreaming reflection to equivalent measures of waking reflection. A study that did make direct comparisons between dreaming and waking found nearly identical frequencies of reflection in dreaming (81%) as in waking (79%), clearly contradicting the characterization of dreams as non-reflective. Replications found similar results. These studies were cited in Hobson's article but otherwise ignored.

Another unsubstantiated claim of Hobson et al. is that "volitional control is greatly attenuated in dreams . . . ." Of course, during non-lucid dreams people rarely attempt to control the course of the dream by magic. The same is true, one hopes, for waking. But likewise, during dreams and waking, one has similar control over one's body and is able to choose, for example, to walk in one direction or in another. Such trivial choice is probably as ubiquitous in dreams as waking and, as measured by the question "At any time did you choose between alternative actions after consideration of the options?," 49% of dream samples had voluntary choice, compared to 74% of waking samples. The lower amount of choice in dreams may be an artifact of poorer recall or a real difference, but choice is by no means "greatly attenuated."

While making the above claim, Hobson et al. incorrectly attribute to me the false statement that "the dreamer can only gain lucidity with its concomitant control of dream events for a few seconds." In fact, lucid dreams as verified in the laboratory by eye-movement signalling last up to 50 minutes in length, with the average being about 2 minutes. The relatively low average is partially due to the fact that subjects were carrying out short experiments and wanted to awaken with full recall. At the onset of lucid dreams there is an increased tendency to awaken, probably due to the fact that lucid dreamers are thinking at that point, which withdraws attention from the dream, causing awakening.

The eye-movement signalling methodology mentioned above forms the basis for a powerful approach to dream research: Lucid dreamers can remember pre-sleep instructions to carry out experiments marking the exact time of particular dream events with eye movement signals, allowing precise correlations between the dreamer's subjective reports and recorded physiology, and enabling the methodical testing of hypotheses. We have used this strategy in a series of studies demonstrating a higher degree of isomorphism between dreamed actions and physiological responses than had been found previously using less effective methodologies. For example, we found that time intervals estimated in lucid dreams are very close to actual clock time; that dreamed breathing corresponds to actual respiration; that dreamed movements result in corresponding patterns of muscle twitching; and that dreamed sexual activity is associated with physiological responses very similar to those that accompany actual sexual activity.

These results support the isomorphism hypothesis (Hobson et al.) but contradict Solms's notion of the "deflection" of motor output away from the usual pathways, and his speculation that it isn't only the musculo-skeletal system that is deactivated during dreams, but "the entire motor system, including its highest psychological components which control goal-directed thought and voluntary action." I believe Occam's Razor favors the simpler hypothesis that the motor system is working in REM essentially as it is in waking, except for the spinal paralysis; just as the only essential difference between the constructive processes of consciousness in dreaming and waking is the degree of sensory input. Oddly, Hobson et al. ignore these data on eye movements while appealing that we keep open the question of the relationship between eye movement and dream imagery "until methods more adequate to its investigation are developed." There is no need to wait. Adequate methods have already been developed, as in our recent study showing smooth tracking eye movements during dreaming.

Memory is another area of inquiry upon which lucid dreaming can shed light. Hobson et al. argue that memory during dreaming may be as deficient as it is upon awakening. They give the example of comparing one's memory of a night's dreaming to the memory of a corresponding interval of waking; unless it was a night of drinking being remembered, the dream will yield much less memory. But this is an example comparing episodic memory from waking and dreaming after awakening, and thus is not only unconvincing and vague, but irrelevant. Nobody disagrees that waking memory for dreams is sometimes extremely poor.

In the same vein, Hobson et al. write that it is common for dreams to have scene shifts of which the dreamer takes little note. "If such orientational translocations occurred in waking, memory would immediately note the discontinuity and seek an explanation for it." Note the unquestioned assumption regarding waking consciousness. In fact, recent studies suggest that people are less likely to detect environmental changes than commonly assumed. For example, a significant number of normal adults watching a video failed to notice changes when the only actor in a scene transformed into another person across an instantaneous change in camera angle.

Likewise, Hobson et al. assert that "there is also strong evidence of deficient memory for prior waking experience in subsequent sleep." However, the evidence offered is always extremely indirect and unconvincing. A direct test requires lucid dreamers to attempt memory tasks while dreaming, as was done in a pilot study showing that about 95% of the subjects could remember in their lucid dreams a key word learned before bed, as well as the time they went to bed, and where they were sleeping. Subjects forgot to do the memory tasks in about 20% of their lucid dreams. That may or may not represent a relative deficit in memory for intentions.

A major methodological difficulty presented by dreaming is poor recall on awakening. The fact that recall for lucid dreams is more complete than for non-lucid dreams presents another argument in favor of using lucid dreamers as subjects. Not only can they carry out specific experiments in their dreams, but they are also more likely to be able to report them accurately. That our knowledge of the phenomenology of dreaming is severely limited by recall is not always sufficiently appreciated. For example, Hobson et al. repeatedly substitute "dreaming" for "dream recall." Solms makes the same mistake, which, in my view, is fatal to his argument. So when he writes "of the 111 published cases . . . in which focal cerebral lesions caused cessation or near cessation of dreaming. . . .", he is really saying "in which lesions caused cessation of dreaming or dream recall." To think otherwise would be to suppose that the dream is the report.

Source: LaBerge, Stephen. 2000. Lucid dreaming: Evidence and methodology. Behavioral and Brain Sciences 23: 962–963. Commentary on target articles by J. A. Hobson et al. Reprinted with the permission of Cambridge University Press.
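LaBerge's signalling method lends itself to a simple computational illustration. The Python sketch below is my own invention, not LaBerge's laboratory code: it scans a horizontal electro-oculogram trace for a prearranged left-right-left-right signal of the kind the commentary describes, and the threshold, units, sample values, and the specific LRLR pattern are all assumptions made for the example.

```python
def detect_lucidity_signal(eog: list[float], threshold: float = 200.0) -> bool:
    """Look for a prearranged left-right-left-right eye signal in a
    horizontal EOG trace. Units (microvolts) and threshold are invented
    for this sketch; a real system would also filter and debounce."""
    # Keep only large deflections, reduced to their sign (-1 left, +1 right).
    marks = [-1 if v < 0 else 1 for v in eog if abs(v) >= threshold]
    # Collapse consecutive samples of the same sign into single events.
    events = [m for i, m in enumerate(marks) if i == 0 or m != marks[i - 1]]
    # The signal is four alternating deflections: L, R, L, R.
    target = [-1, 1, -1, 1]
    return any(events[i:i + 4] == target for i in range(len(events) - 3))

trace = [5.0, -250.0, -260.0, 240.0, 10.0, -255.0, 230.0, 250.0, 8.0]
print(detect_lucidity_signal(trace))  # True: the dreamer has signalled
```

The appeal of the method is visible even in this cartoon: because the deflections are large, voluntary, and patterned, they stand out unambiguously against ordinary REM activity on the polygraph record.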
Index
Abacus, 103 Acetylcholine, 205 Acoustic coding, 139 Action, perception and, 123–124 Action potentials, 44, 147 Adrian, Edgar Douglas, 30 Advertising industry, 15 Affective dimension of the mind, 122 Affordances, 60 Agnosia, 53–54 AI. See Artificial intelligence Airy, George, 160 Airy pattern, 160–161, 163 Aleksander, Igor, 150 Algorithms, 109, 117 Altered states of consciousness, 208–209 Alzheimer’s disease, 152, 153–154, 206 Amygdala, 122, 134 Animal research combating amnesia, 153–155 emotional systems, 135 evolutionary development of the brain, 38–39 Freeman’s olfaction research, 79–80 memory research, 152 mirror neurons, 127–128 moral issues, 38–39 NCC of visual consciousness, 68–69 peak shift effect, 215 visual system, 47 ANNs. See Artificial neural nets Anomalous monism, 96–97 The Antipodes of the Mind: Charting the Phenomenology of Ayahuasca Experience (Shanon), 259 Aristotle, 2, 4–7, 253 Armstrong, David, 90, 96 Art, visual, 213–218 Articulatory loop, 142
Artificial intelligence (AI), 15–17, 101, 109–110, 117–118 Artificial neural nets (ANNs), 114, 148–151 Aspect, Alain, 175 Association for the Scientific Study of Consciousness, 18 The Astonishing Hypothesis (Crick), 71–72, 258 Atkinson, Richard, 140 Atomic theory, 2–4, 253, 254 Attention, 125–126, 166, 211 Augustine, Saint, 253 Autism, 128 Autobiographical self, 136 Axons, 146–147, 262 Ayahuasca, 209 Baars, Bernard, 38, 62, 65–67, 69, 135, 143–144 Backward masking, 182–184 Baddeley, Alan, 140, 142–144 Basal ganglia, 152–153 “The Battle of Behaviorism: An Exposition and an Exposure” (Watson and McDougall), 288–296 (doc.) Begley, Sharon, 259 Behaviorism, 10–16, 85–87 Behaviorist Manifesto, 13, 254 Bell, John, 176, 256 Bentall, Richard, 213 Berger, Hans, 29–30 Binocular rivalry, 68–69 Black reaction, 21–22 Blackmore, Susan, 259 Blakemore, Colin, 50–51, 191, 192 Blakeslee, Sandra, 259 The BlindWatchmaker (Dawkins), 112 Blindsight, 54–55, 257
Bliss, Timothy, 151, 256
Body-world schema, 166–167
Bogen, Joseph, 65
Bohr, Niels, 159, 161
Boring, E. G., 89, 255
Born, Max, 159, 175
Bradshaw, Harry, 38
Brain
  amygdala and emotional response, 122
  anatomy and physiology of, 24–27
  brain damage and memory loss, 137–139
  emotional response, 131–136
  mapping techniques, 27–36, 72
  memory, 140–141, 143, 146–152
  neural correlates for the contents of consciousness, 67–71
  neuron grouping, 71–80
  reticular and neuron theories, 21–24
  single-cell recording, 36–39
  vision processing, 44–46
  See also Neural correlates of consciousness
Brave New World (Huxley), 14, 255
Bridging principles, 96–97
Bright Air, Brilliant Fire (Edelman), 74
Broca, Paul, 28, 254
Broca’s area, 28, 35, 213
Brute force, 110–111
Buchanan, M., 140
Bucke, Richard, 212
Buddhism, 123, 211, 235–236
Calculators, 102–106, 254
Cannon, Walter, 133
Cartesian dualism. See Dualism
Cartesian theater, 184–186
Category mistake, 84–85
Causality, 61–62, 189–195
Central executive, 142–143
Cerebellum, 152–153
Cerebral cortex. See Cortex
The Cerebral Cortex of Man: A Clinical Study of Localization of Function (Penfield and Rasmussen), 255–256
Chalmers, David, 63, 69, 97–98, 223–230, 234, 258
Change blindness, 56–57, 258
Chaos theory, 190
Chess, 17, 101, 102, 103, 110–111
China, ancient, 103
Chinese Room thought experiment, 17, 112–114, 257
Chola bronze, 215
Christianity, 6–8, 212, 253
Churchland, Patricia, 93–96, 171, 235
Churchland, Paul, 93–96, 222–223
Cognitive closure, 231–232
Cognitive disorders and damage, 53–55, 137–138, 145, 151
Cognitive science
  brain mapping and, 27–36
  Chinese Room model, 111–114
  cognitive function versus emotional response, 129–136
  emergence of, 16–19
  free will experiments, 177–178
  mechanism of consciousness, 40–41
A Cognitive Theory of Consciousness (Baars), 143
Cogwheels, 104–106
Cohen, Jonathan, 57
Color, 123
Color phi effect, 181–182, 185–186, 256
Coma, 65
Compatibilism, 191–192
Computational approach to the mind, 17, 171
Computers
  artificial intelligence, 15–17, 101, 109–110, 117–118
  artificial neural nets, 148–151
  computer science, 15–18
  as functional systems, 106–112
  machine consciousness, 101–106
  semantics and syntax, 112–114
“Computing Machinery and Intelligence” (Turing), 109
Comte, Auguste, 12–13
The Concept of Mind (Ryle), 84, 255
The Conscious Mind (Chalmers), 224
Consciousness: An Introduction (Blackmore), 259
Consciousness and Cognition (journal), 178
Consciousness and Emotion (journal), 130, 259
Consciousness Explained (Dennett), 178, 184, 224, 257
Consciousness in Action (Hurley), 123–124
Context-specific reflexes, 124–126
Continental phenomenology, 123
Contrastive analysis, 65–66
Copenhagen interpretation of quantum mechanics (QM), 159, 164–165, 167–169, 176
Core self, 135–136
Corpus callosum, 24
Correlates, 61–62. See also Neural correlates of consciousness
Cortex, 24–36, 45, 67, 132. See also Brain
Cosmic consciousness, 212
Cosmic Consciousness (Bucke), 212
Cotterill, Rodney, 124–128, 151
Craik, Fergus, 141
Creature consciousness, 64
Crick, Francis, 257, 258
  contents of consciousness, 69–70
  criticism of Gibson, 60
  emergence of cognitive science, 17
  40-hertz oscillation, 41, 186
  free will, 191
  identity theory, 93
  importance of ILN, 65
  molecular model, 3
  NCC research, 71–73
  visual system, 43, 46
CyberChild, 151
Dalton, John, 2–3, 254
Damasio, Antonio, 121, 122, 130–132, 135–136, 143, 258
Daniel (Biblical character), 198
D’Aquili, Eugene, 211
Darwin, Charles, 130–131
Davidson, Donald, 96
Dawkins, Richard, 112
De Homine (Descartes), 44
Decision making. See Free will
Declarative memory, 144, 152
Decoherence model of wave function collapse, 170, 173
Deep Blue, 101–103, 110–111, 258
Deep Thought, 17
Dement, William, 206–207
Dementia, 137, 152–156
Democritus, 2, 253
Demonic possession, 94–95
Dendrites, 26–27, 146–147
Dennett, Daniel, 223–228, 257
  Cartesian theater, 184–187
  compatibilism, 192
  free will, 178
  hard/easy distinction, 229–230
  Searle’s Chinese Room, 113–114
Depraz, Natalie, 235
Descartes, René, 8–10, 44, 82–88, 157, 253
Descartes’ Error (Damasio), 121, 258
Determinism, 189–190, 192
Dewey, John, 40
Direct perception, 58–59
Discourse on Method (Descartes), 9, 253
Dispositions, 85–88
Divided visual pathway, 44–46, 53, 257
DNA, 3
Dopamine, 206
Dreaming, 208–218
  lucid dreaming, 206–208
  observation of sleep state, 197–198
  REM and non-REM sleep, 198–206
  virtual experiences, 200–201
Dualism, 225–228, 255
  computational view of the mind, 17
  Crick’s NCC research, 72
  Descartes and the Enlightenment, 8–10
  Descartes versus Ryle, 81–88
  dualist aspect of functionalism, 121–129
  Eccles’s support of, 187
  hard/easy problem, 230–233
  property dualism, 97–98
  quantum theory and, 157
  reductive physicalism, 88–93
  sleeping consciousness, 199
  See also Mind-body problem
Dualist-interactionism, 193–194
Easy problems, 228–231
Eccles, John, 187, 188, 193–194
Edelman, Gerald, 18, 73–78, 92, 151, 156, 256
EEG. See Electroencephalogram
Einstein, Albert, 3, 158–159, 171, 175–176, 254–255
Ekman, Paul, 130–131, 133
Electroencephalogram (EEG), 197–199, 255
  development and use of, 29–35
  free will experiments, 177–178
  lucid dreaming, 207
  neuronal firing, 26
Electromyogram (EMG), 198, 207
Electrooculogram (EOG), 198, 199, 207
Eliminative materialism, 93–96
The Embodied Mind (Varela, Thompson, and Rosch), 122–123, 257
Emergentism, 98–99, 195
EMG. See Electromyogram
Emotion, 121–136
The Emperor’s New Mind (Penrose), 116–117, 257
Energy, conservation of, 194
Enigma code, 108
Enlightenment, 8–10, 189–190, 253
Entheogens, 210–211, 256
EOG. See Electrooculogram
Epilepsy, 29, 94–95, 138, 212–213
Epiphenomenon, 83
Episodic buffer, 142
Episodic memory, 144–145, 151–152
Epoche, 235
EPR thought experiment, 175
ERTAS. See Extended reticular-thalamic activation system
Ethics
  animal research, 38–39
  fetal stem cell use, 153
  PET scanning, 32–33
Experience, subjective, 219–223
Explanatory gap, 228
Explicit/implicit memory, 144–146
Extended reticular-thalamic activation system (ERTAS), 65–67, 135
“Facing Up to the Problem of Consciousness” (Chalmers), 318–324 (doc.)
FAPs. See Fixed action patterns
Feinstein, Bertram, 179
Feyerabend, Paul, 93
First-person ontology of consciousness, 225–226
Fixed action patterns (FAPs), 129
Flanagan, Owen, 231
fMRI. See Functional magnetic resonance imaging
Fodor, Jerry, 108, 111, 259
Folk psychology, 94–96
40-hertz oscillation, 41, 73, 186, 201, 257
The Foundations (von Neumann), 255
“Fountains in the brain,” 78–79
Free will, 177–195
  causality and, 189–195
  experiments in timing, 177–179
  importance of observer, 185–187
  timing experiments, 179–185
Freeman, Walter, 40, 60, 79–80, 125
Frege, Gottlob, 91
Freud, Sigmund, 201–204
Frith, Chris, 40–41
Frontal lobotomy, 254
Functional magnetic resonance imaging (fMRI), 33–35, 127–128, 138, 197–198
Functionalism, 225, 259
  computers as functional systems, 106–112
  dualistic aspect of, 121–129
  simulations and replications, 114–116
Gage, Phineas, 119–121, 131, 136, 143, 254
Galileo, 2, 253
Gall, Franz, 27, 254
Gamma oscillations, 41
Ganglion cells, 44–45, 47
Gassendi, Pierre, 83
Gibson, J. J., 59–60, 256
Global mapping, 78
Global workspace theory, 143
Gödel, Escher, Bach (Hofstadter), 111
Gödel, Kurt, 117, 255
Golgi, Camillo, 21–24, 26, 254
Goodale, Melvyn, 53, 258
Grand illusion hypothesis, 57–58
Grandmother cell, 50–52, 72, 77, 256
Gray matter, 21–24
Greece, ancient, 5, 103
Greenfield, Susan, 78–79
Grouping, 215
Grush, Rick, 171
Güzeldere, Güven, 63
Haggard, Patrick, 177
Hameroff, Stuart, 40, 171, 174–175
Hard problem, 228–236, 258
Hardcastle, Valerie, 229–230
Harlow, John, 120
Hebb, Donald, 146–147, 255
Heisenberg, Werner, 169, 190
Heisenberg uncertainty principle, 190, 194
Hemispheric independence, 256
Hilbert, David, 117
Hildegard of Bingen, 210–211
Hippocampus, 133, 140–141, 151–155
Hirstein, William, 215
Hitch, Graham, 142
Hobson, J. Allan, 66, 200–204, 207
Hodgson, David, 178–179, 192, 194
Hofstadter, Douglas, 111, 114
Holmgren, Emil, 23
Honderich, Ted, 191, 192
How the Mind Works (Pinker), 111, 259
Hubel, David, 47–50, 256, 257
Hume, David, 190
Humphrey, Nicholas, 93
Huntington’s chorea, 153
Hurley, Susan, 123–124, 188
Husserl, Edmund, 123, 234
Huxley, Aldous, 14, 205, 210, 255
Hypothalamus, 135–136
I of the Vortex (Llinás), 128
Idealism, 227
Identity, 155–156
Ignorance, cognitive, 231–232
ILN. See Intralaminar nucleus
Image and Mind (Kosslyn), 257
Imageless thought, 12
Imagination, 81
Immunology, 74–76
Inattentional blindness, 56–57
Inferior temporal area (IT), 53
Inferior temporal lobe, 212
Information and information processing
  Chinese Room model, 112–113
  information-processing account of vision, 44–47
  memory, 139–142
  mind and machine, 16
  neutral monism theory, 97–98
Intentions, 166–167
Interaction problem, 83–88, 169–170
The International Dictionary of Psychology, 236–237
Intralaminar nucleus (ILN), 65
An Introduction to Social Psychology (McDougall), 254
Introspectionism, 10–13
“Is Consciousness a Brain Process?” (Place), 89
Jackson, Frank, 221–223
James, Henry, 12
James, William, 12, 14, 133, 254
Journal of Consciousness Studies, 18, 258
Kane, Robert, 192
Kant, Immanuel, 189–190
Kasparov, Gary, 101, 102, 110–111, 258
Kim, Jaegwon, 97
Kinsbourne, Marcel, 184, 257
Knowledge argument, 221
Koch, Christof, 41, 65, 71–73, 257
Kochen-Specker theorem, 175–176
Kolers, Paul, 182, 256
Konno, Michiko, 212
Kosslyn, Stephen, 36, 257, 258
Krippner, Stanley, 208
Külpe, Oswald, 11–12
LaBerge, Stephen, 206–207
Lakoff, George, 17
Lange, Carl, 133
Language ability, 28
Lateral geniculate nucleus (LGN), 45, 47–50, 134
Learning, 146–153
LeDoux, Joseph, 130, 132–134
Left brain–right brain dichotomy, 28, 132
Leibniz, Gottfried Wilhelm, 10
Leibniz’s Law, 91–92
Leonardo da Vinci, 104
Lesions, brain, 65
Lettvin, Jerry, 50, 256
Leucippus of Miletus, 2, 253
Levin, Daniel, 56, 258
Levine, Joseph, 228
Lewis, David, 222
LGN. See Lateral geniculate nucleus
Libertarianism, 192–194
Libet, Benjamin, 177–180, 182–184, 187, 195, 256, 257
Lie detectors, 216–217
Limbic system, 122, 132–134, 211
Llinás, Rodolfo, 128–129, 188–189, 201, 227–228, 231
Locke, John, 220
Lockhart, Robert, 141
Lockwood, Michael, 227
“A Logical Calculus of the Ideas Immanent in Nervous Activity” (McCulloch and Pitts), 255
Logothetis, Nikos, 68–69
Lømo, Terje, 151, 256
Long-term memory, 139–142
Long-term potentiation (LTP), 151, 256
Lowe, Jonathan, 230
Lucid dreaming, 206–208
“Lucid Dreaming” (LaBerge), 324–328 (doc.)
Luna, Luis Eduardo, 209
Lying Awake (Salzman), 213
Machine consciousness, 101–106
Machine intelligence, 255
MacLean, Paul, 132
“The Magical Number Seven, Plus or Minus Two” (Miller), 139, 297–300 (doc.)
Magnetic resonance imaging (MRI), 33, 258
Magnetoencephalogram (MEG) scanning, 31–34, 201, 258
MAGNUS program, 150
Marshall, John C., 74
Matisse, Henri, 218
Maxwell, James Clerk, 171
McCarthy, John, 224–225
McCrone, John, 73
McCulloch, Warren, 16, 255
McDougall, William, 14–15, 254, 255
McGinn, Colin, 231–234, 257
Mechanism of consciousness, 40–41
Meditation, 211–212, 235–236
Meditations (Descartes), 9–10, 254
“Meditations on First Philosophy” (Descartes), 277–283 (doc.)
MEG. See Magnetoencephalogram (MEG) scanning
Memory, 78, 137
  biological processes of, 146–153
  brain damage and memory loss, 137–139, 153–156
  combating and reversing, 153–156
  declarative memory, 144, 152
  different types of, 137–146
  episodic memory, 144–145, 151–152
  explicit/implicit memory, 144–146
  long-term memory, 139–142
  procedural memory, 145, 152–153
  semantic memory, 144–145, 152
  short-term memory, 139–142
  working memory, 142–144
Merleau-Ponty, Maurice, 235, 255
Mescaline, 205
Microtubules, 40, 171, 173–174
Middle Ages, 4–10
Migraines, 210–211
Miller, George, 139, 144, 256
Milner, David, 53, 258
Mind, Matter, and Quantum Mechanics (Stapp), 166
Mind-body problem, 81–99
  ancient and medieval times, 4–10
  brain mapping and, 40–41
  Descartes versus Ryle, 81–88
  eliminative materialism, 93–96
  qualia, 227
  reductive physicalism, 88–93
  See also Dualism
Mind-brain identity theory. See Dualism
The Mind Doesn’t Work That Way (Fodor), 111, 259
“Minds and Machines” (Putnam), 108
“Minds, Brains, and Programs” (Searle), 304–307 (doc.)
Minsky, Marvin, 101
Mirror neurons, 127–128, 258
Mishkin, Mortimer, 53, 257
Models, 114–116
Modular Array of General Neural Units (MAGNUS), 150
The Modularity of Mind (Fodor), 111
Monroe, Marilyn, 214, 215
Morphology, 38
Motor control, 153
MRI. See Magnetic resonance imaging
Multiple drafts model of conscious experience, 186–187
Multiple realizability, 64
Myers, Ronald, 256
The Mysterious Flame (McGinn), 231
Nagel, Thomas, 219, 256
Natural selection, 112
Naudin, Jean, 235
NCCs. See Neural correlates of consciousness
Nebuchadnezzar, 198, 199
Nemirow, Laurence, 222
Neural correlates of consciousness (NCCs), 61–62, 212
  controversies in research, 40–41
  Crick’s research, 71–73
  determining and defining, 62–67
  Edelman’s group theory, 74–78
  Freeman’s olfaction research, 79–80
  Greenfield’s neural gestalts, 78–79
  use of EEG, 30
  visual awareness, 67–71, 257
  von Neumann’s chain, 168
Neural Darwinism, 74–78, 256
Neural gestalts, 78–79
Neuromodulators, 27, 66–67, 204–206
Neuron theory, 22–24, 26, 69
Neuronal group selection theory, 74–78
Neuronal hierarchies, 47–52
Neurons, 26–27
  grandmother cell, 50–52
  process of memory, 146–150
  role in vision, 44
  single-cell recording, 36–39
  vision perception, 47–50
  See also Neural correlates of consciousness
Neurophenomenology, 234–235
Neurophilosophy, 235
Neuroscience, 10, 15–16
Neurotheology, 259
Neurotransmitters, 26, 204–206
Neutral monism, 97, 227
New mysterians, 231–234
New Scientist (magazine), 50
Newberg, Andrew, 211, 213
Newman, James, 65, 67, 135
Newton, Isaac, 157, 158, 160
1984 (Orwell), 14, 185, 255
NMR. See Nuclear magnetic resonance (NMR) scanning
“Nobel Presentation Speech to Camillo Golgi and Santiago Ramón y Cajal, for work on the Anatomy of the Nervous System,” 283–287 (doc.)
“Nobel Presentation Speech to Roger Sperry, David Hubel, and Torsten Wiesel, for Neuroscientific Research,” 308–311 (doc.)
Nobel Prize, 21–24, 47, 193, 254, 257
Noë, Alva, 57–58, 69–71
Nonconscious cortical origin of volitional actions, 257
Nondeclarative memory, 144, 152–153
Nonlocality, 175–176
Nonreductive physicalism, 96–99
Norman, D. A., 139–140
Nuclear magnetic resonance (NMR) scanning, 32
Nuns, 211
Objective reduction, 172–175
Observation, conscious, 161–169, 184–185
Occam’s Razor, 91, 92
Olfaction research, 79–80, 125
“On the Structure of the Brain Gray Matter” (Golgi), 22, 254
Optic array, 59, 60
Optic flow pattern, 59
O’Regan, Kevin, 56–58
The Organization of Behaviour (Hebb), 255
Orwell, George, 14, 185, 255
PAG. See Periaqueductal gray
Paley, William, 112
Panksepp, Jaak, 130, 134–135
Panpsychism, 98, 224
Parallel processing, 134, 217–218
Parkinson’s disease, 153, 206
Pascal, Blaise, 104–106, 254
Pascaline, 104–106
Patterns, 107–108
Pavlov, Ivan, 14
Peak shift effect, 215, 217
Penfield, Wilder, 29, 132, 179, 255–256
Penrose, Roger, 17, 116–118, 171–175, 194, 257
Perception
  and action, 45, 123–124
  without consciousness, 54
  grand illusion hypothesis, 57–59
  motor-sensory approach, 58–60
  perceptual binding, 257
  perceptual input/behavioral output concept, 123–124, 188
  See also Vision
The Perception of the Visual World (Gibson), 256
Periaqueductal gray (PAG), 135
Personality, 119–121, 143
PET. See Positron emission tomography (PET) scanning
Phantom limb effect, 179, 233
Phantoms of the Brain (Ramachandran and Blakeslee), 212, 259
Phenomenology, 123, 234–235
The Phenomenology of Perception (Merleau-Ponty), 255
Phi. See Color phi effect
Philosophical behaviorism, 85–88
Phrenology, 27–28, 254
The Physical Dimensions of Consciousness (Boring), 255
Physical state, 61
Physicalism, 93–99, 108, 221–228
Physics, 2, 117–118. See also Science
Pick’s disease, 152
Pinker, Steven, 111, 259
Pitts, Walter, 16, 255
Place, Ullin, 88–93, 108, 228, 256
Plato, 6–7, 253
Podolsky, Boris, 175, 255
Popper, Karl, 193
Positivism, 12–13
Positron emission tomography (PET) scanning, 32–36, 66, 127, 152–153, 197–198, 211, 257, 258
Posner, Michael, 32
Posterior parietal cortex (PP), 53, 211–212
Practice effect, 145–146
Precautionary principle, 38–39
Prediction, 128–129
Prefrontal cortex, 122, 131, 136, 141
Premotor cortex, 124–127
Primary consciousness, 78
Primary visual cortex, 45, 48–50
The Principles of Psychology (James), 12, 254
Private thought, 87–88
The Problem of Consciousness (McGinn), 257
Procedural memory, 145, 152–153
Property dualism, 97–98
Prosopagnosia, 51
Proto-self, 135–136
Proxied movement concept, 124–127
Prozac, 205–206
Psyche, 5
Psychiatry, 10
Psychoactive substances, 204, 205, 209–210, 213, 256
Psychology, 10–16, 108
Psychons, 194
Psychophysical laws, 96
Putnam, Hilary, 108, 256
Qualia, 129, 219, 224, 226–228
Quantum dynamical account of consciousness, 257
Quantum mechanics (QM), 157–176, 255
  consciousness and, 164–170
  emergence of quantum theory, 3–4
  free will and, 194
  Heisenberg uncertainty principle, 190
  origin of consciousness, 170–176
Quine, W. V., 95, 96
Raichle, Marcus, 35, 257
Ramachandran, Vilayanur, 212–218, 259
Ramón y Cajal, Santiago, 22–24, 26, 146, 153, 193, 254
Rapid-eye movement (REM) sleep, 66, 198–205, 256
Rasmussen, Theodore, 255–256
The Rediscovery of the Mind (Searle), 257
Reductionism, 218
Reductive materialism, 93
Reductive physicalism, 88–93
Reentrant cortical integration (RCI) model of the cortex, 77, 126
Reentrant signaling, 76–77
Reflexes, 124–129
Religious experiences, 208–213
Replications, 114–116
Representational theory of mind and perception, 123
Res cogitans (stuff that thinks), 82–83
Res extensa (stuff that takes up space), 82
Resonance, 59
Reticular theory, 22, 26
Reverberating loop, 211–212
Reverse-engineered artificial intelligence, 109–110, 112
Revonsuo, Antti, 40–41
Ribary, Urs, 201
Rizzolatti, Giacomo, 127, 258
Robinson, William, 223
Rodin, Auguste, 87, 88
Roman Empire, 103
Rosch, Eleanor, 60, 122–123, 257
Rosen, Nathan, 175, 255
Rosenthal, David, 64
Russell, Bertrand, 97, 227
Ryle, Gilbert, 84–88, 92, 255
Sacks, Oliver, 210–211
Salamis Tablet, 103
Salzman, Mark, 213
SAS. See Supervisory attentional system
Schacter, Daniel, 133
Schrödinger, Erwin, 3, 161, 171, 255
Schrödinger’s cat, 161–163
Science
  atomic theory, 2–4
  computer science, 15–18
  of consciousness, 1–2
  psychology, 10–13
  quantum theory, 157–163
  See also Cognitive science; Quantum mechanics
Scientific American (magazine), 69, 108, 257
SCR. See Skin conductance response
Searle, John, 16–17, 81, 98–99, 101, 112–117, 223–228, 257
Selectivity, 110–111
The Self and Its Brain (Popper and Eccles), 193
Sellars, Wilfrid, 95, 96
Semantic coding, 139
Semantic memory, 144–145, 152
Semantics, 113–116
“Sensations and Brain Processes” (Smart), 91
Sensorimotor approach to perception, 58–60
Serotonin, 205–206
Shadows of the Mind (Penrose), 116–117
Shallice, Timothy, 143
Shamanism, 199–200, 208–209
Shannon, Claude, 98
Shanon, Benny, 209, 210, 259
Sherrington, Charles, 193
Shiffrin, Richard, 140
Short-term memory, 139–142
Silberstein, Michael, 99
Simon, Herbert, 110
Simons, Daniel, 56, 258
Simulations, 114–116
Singer, Wolf, 51, 133
Single photon emission computed tomography (SPECT), 211
Skarda, Christine, 60
Skin conductance response (SCR), 216–217
Skinner, B. F., 15, 88
Sleep, 66–67, 197–218, 256
Smart, Jack, 89, 90–92, 108
Smith, Huston, 210
Sociology, 13
Soul, 5–10, 81–88, 253
SPECT. See Single photon emission computed tomography
Speech perception, 28
Speech production, 28, 254
Sperling, George, 139
Sperry, Roger, 28, 194–195, 256, 257
Spinoza, Baruch, 97, 227
Split-brain studies, 194, 256
Spooky action at a distance, 175–176
SQUIDs. See Superconducting quantum interfering devices
Squire, Larry, 144
Stalin, Joseph, 185
Stapp, Henry, 164–170, 194, 257
Strawson, Peter, 234
Subcortical system, 45–46
Subjective experience, 219–223
Substance dualism. See Dualism
Superconducting quantum interfering devices (SQUIDs), 31–32
Supervenience, 97
Supervisory attentional system (SAS), 143
Survival, 129
Sutherland, Stuart, 236–237
Symbol manipulation, 113–114
Synapses, 26–27, 146–147
Synchronic unity, 156, 257
Syntax, 113–116
Template for action, 166–169
Temporal cortex, 152
Temporal lobes, 140–141
Thalamus, 65, 134
The Thinker (Rodin), 87, 88
Thomas Aquinas, Saint, 6–8, 253
Thompson, Evan, 69–71, 122–123, 257
Thompson, N., 140
Thomson, J. J., 157, 254
“Time and the Observer” (Dennett and Kinsbourne), 184, 312–317 (doc.)
Titchener, Edward, 11–12, 235
Token identity theory, 93, 96
Tower of Hanoi, 145–146, 152
Transduction, 44
Trimble, Michael, 212
Tulving, Endel, 144–145
Turing, Alan, 108–109, 255
Turing machine, 108–109
Turing test, 109, 113
Two-level interdependence view of perception and action, 124, 188
Type identity, 93
Ungerleider, Leslie, 53, 257
Varela, Francisco, 122–123, 234–235, 257
Vermersch, Pierre, 235
Vision, 43–60
  action-centered view, 52–55
  change blindness and visual richness, 55–60
  mechanism of, 43–47
  NCCs for visual awareness, 67–71
  neuronal hierarchies, 47–52
  parallel processing, 134
  role of premotor cortex, 127–128
  visual art and consciousness, 213–218
Vision of the Brain (Zeki), 258
Visions, 210
Visual awareness, 30, 67–71
The Visual Brain in Action (Milner and Goodale), 53, 258
Visual coding, 139
Visuospatial sketchpad, 142
Volition. See Free will
Von der Malsburg, Christoph, 73, 257
Von Grünau, Michael, 182, 256
Von Neumann, John, 164, 169, 175–176, 255
Von Neumann’s chain, 164, 167–168
Watson, James, 3
Watson, John B., 13–15, 254, 255
Watt, Doug, 134
Waugh, N. C., 139–140
Wave function, 158–161, 164, 167, 170–171, 175–176
Wave theory of subatomic particles, 3, 255
Weiskrantz, Lawrence, 55, 257
Wellcome scanner laboratory, 258
Wernicke, Carl, 28, 254
Wernicke’s area, 28, 35
Wertheimer, Max, 181
“What Is It Like to Be a Bat?” (Nagel), 219, 300–304 (doc.)
White matter, 24
Wiesel, Torsten, 47–50, 256, 257
Wigner, Eugene, 164, 169
Wilkie, Stonham, and Aleksander’s Recognition Device (WISARD), 150
William of Occam, 91
Wilson, Richard, 150
Winkelman, Michael, 208–209
WISARD program, 150
Working memory, 142–144
World War II, 108
Wundt, Wilhelm, 11, 12, 235, 254
Würzburg School, 11–12
Young, Graham, 55
Zeki, Semir, 50–51, 217–218, 258
Zombies, 226
Zurek, Wojciech, 170–171, 173
About the Author
Anthony Freeman has degrees in chemistry and theology from Oxford University. He was ordained in the Church of England (Episcopalian) in 1972 and for twenty years held a variety of pastoral and teaching posts. In 1993 he published a controversial book, God in Us: A Case for Christian Humanism, and this resulted in his being dismissed from his parish and his position as ministerial training officer in the diocese of Chichester in England. He has been the managing editor of the Journal of Consciousness Studies since its launch in 1994; he writes and lectures on matters related to religion and consciousness research.