Rationality and the Literate Mind
Routledge Advances in Communication and Linguistic Theory; 7

Author: Harris, Roy
Publisher: Routledge (Taylor & Francis)
Publication date: 2009
ISBN-10: 0415999014
Print ISBN-13: 9780415999014
eBook ISBN-13: 9780203879481
Language: English
Subjects: Literacy—Philosophy; Language and logic; Language and languages—Philosophy
LCC: P118.7.H37 2009eb
DDC: 401
Routledge Advances in Communication and Linguistic Theory
ROY HARRIS, Series Editor

1. Words—An Integrational Approach
Hayley G. Davis

2. The Language Myth in Western Culture
Edited by Roy Harris

3. Rethinking Linguistics
Edited by Hayley G. Davis & Talbot J. Taylor

4. Language and History: Integrationist Perspectives
Edited by Nigel Love

5. The Written Language Bias in Linguistics: Its Nature, Origins and Transformations
Per Linell

6. Language Teaching: Integrational Linguistic Approaches
Edited by Michael Toolan

7. Rationality and the Literate Mind
Roy Harris
Rationality and the Literate Mind

Roy Harris

New York   London
First published 2009 by Routledge
270 Madison Ave, New York, NY 10016

Simultaneously published in the UK by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Routledge is an imprint of the Taylor & Francis Group, an informa business

This edition published in the Taylor & Francis e-Library, 2008. To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.

© 2009 Taylor & Francis

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging in Publication Data
A catalog record has been requested for this book.

ISBN 0-203-87948-1 Master e-book ISBN
ISBN10: 0-415-99901-4 (hbk)
ISBN10: 0-203-87948-1 (ebk)
ISBN13: 978-0-415-99901-4 (hbk)
ISBN13: 978-0-203-87948-1 (ebk)
‘Whatever Logic is good enough to tell me is worth writing down,’ said the Tortoise.
Lewis Carroll

La logique, après tout, n’est qu’une spéculation sur la permanence des notations.
[Logic, after all, is only a speculation on the permanence of notations.]
Paul Valéry
Contents

Series Editor’s Foreword
Preface

1 On Rationality, the Mind and Scriptism
2 The Primitive Mind Revisited
3 Logicality and Prelogicality
4 Reason and Primitive Languages
5 The Great Divide
6 Aristotle’s Language Myth
7 Logic and the Tyranny of the Alphabet
8 Literacy and Numeracy
9 Interlude: Constructing a Language-Game
10 The Literate Revolution and its Consequences
11 The Fallout from Literacy
12 Epilogue: Rethinking Rationality

References
Index
Series Editor’s Foreword
Routledge Advances in Communication and Linguistic Theory
Roy Harris

This Series presents an integrationist approach to problems of language and communication. Integrationism has emerged in recent years as a radically innovative theoretical position. It challenges the most basic assumptions underlying orthodox modern linguistics, including those taken for granted by leading structuralists, post-structuralists and generativists.

According to integrationists, human communication is an essentially creative enterprise: it relies very little on the ‘codes’, ‘systems’, ‘habits’ and ‘rules’ postulated by orthodox theorists. Instead, integrationists see the communicative life of each individual as part of a continuous attempt to integrate the present with the past and the future. The success of this attempt depends crucially on the ability to contextualize ongoing events rather than on any mastery of established conventions.

The books in this series are aimed at a multidisciplinary readership comprising those engaged in study, teaching and research in the humanities and social sciences, including anthropology, the arts, education, linguistics, literary studies, philosophy and psychology.
Preface

Recent developments in neuroscience have given a new lease of life to some old controversies about language and the mind. It is nowadays taken for granted that one of the most important features of the human brain is its ‘plasticity’ or ‘malleability’. We find this property variously defined as, for instance, the brain’s ‘modification of neuronal circuitry’ (Greenfield 2008:29), or capacity ‘to make new connections among its existing structures’ (Wolf 2008:3), or its ‘ability to reorder neural pathways’ (Carr-West 2008:16). It is this that enables us to learn from new experiences. Such a property is obviously implicated in the learning of languages and language-related skills.

At the same time, recognition of this plasticity gives rise to concern that changes in language-related technologies may, over the course of time, change the way human beings think. Neuroscientists are already discussing ‘how to improve cognitive abilities of individuals and our collective ability to conceptualise the world’ (Carr-West 2008:19). The eventual implementation of such programmes clearly raises ethical questions. But a focus on the brain’s plasticity has also raised concerns about whether current information technologies are not already bringing such changes about, albeit unintentionally. The technologies in question are mainly visual and screen-based. Worries are voiced about the dangers here of ‘blurring the cyber-world and “reality”’ (Greenfield 2008:6). The question has explicitly been raised: ‘what impact might such a biased input of fast-moving icons to the brain have on the way we think?’ (Greenfield 2008:9).

This in turn stirs echoes of similar questions from the past. In Proust and the Squid Maryanne Wolf, whose academic career has been mainly devoted to the study of dyslexia, draws an explicit parallel between the anxieties expressed by Socrates about the new ‘information technology’ of writing in Classical Greece and her own worries about what may be happening to the minds of her children, brought up in the digital age and conditioned to rely on computer-based information for understanding every aspect of their lives.
Socrates’ perspective on the pursuit of information in our culture haunts me every day as I watch my two sons use the Internet to finish a homework assignment, and then tell me they “know all about it.” (Wolf 2008:77)

Susan Greenfield in The Quest for Identity in the 21st Century sketches a no less disquieting scenario in which unhealthy reliance on modern information technologies produces a culture in which attention spans are short and the demand for instant gratification—and instant answers—high. The end result would be the reduction of a ‘Somebody’ to a ‘Nobody’:

if the old world of the book aided and abetted the development of a ‘mind’, the world of the screen, taken to extremes, might threaten the mind altogether, and with it the essence of you the individual. (Greenfield 2008:203)

Thus it emerges that the plasticity of the brain is as much a liability as an asset. It can just as easily adapt to and foster a screen-based culture as a book-based culture. Its structure and modus operandi are just as well suited to developing the mind of a couch potato as to developing the mind of a Shakespeare or an Einstein.

***

It is interesting in the light of this conclusion to re-examine the old debate about literacy and human reason. But as preliminaries to any such re-examination, there are several notes of caution to be sounded. Valuable as the results of contemporary neuroscience are, it is impossible to ignore the fact that the way these results are presented by neuroscientists often bears the mark of cultural idées fixes about literacy itself. From the examples mentioned above, for instance, it is clear that literacy is unquestionably accepted as a Good Thing, while anything that threatens to undermine the achievements of the literate mind is a Bad Thing. But this is a value judgment: it is not a factual conclusion delivered by the results of neuroscientific research.

Another caveat concerns the language of neuroscience itself. As Geoffrey Warnock once remarked:

It seems to be almost an occupational disease of those who reflect on the human nervous system that they should picture us as somehow located inside our own heads. (Warnock 1969:80)

The technical term for this is the ‘mereological fallacy’ (Bennett and Hacker 2003:29 et passim). This is the conceptual error which consists in assigning attributes to the part which in fact belong to the whole.
The characteristic form this error takes in neuroscience is the tendency to ascribe to the brain achievements which are properly attributable to the individual whose brain it is. Thus, for example, Wolf does not hesitate to assert that the brain ‘learns to read’. But it makes no more sense to say that the brain learns to read than to say that the foot learns to play football. What doubtless inclines neuroscientists to talk about the brain in this exaggerated and misleading way is that what the physical brain does is not directly open to inspection. The brain no doubt plays an essential role in coordinating various separate processes that are involved in the act of reading. But it is nevertheless the child—not the child’s brain—who learns to read, just as it is the child—not the child’s brain—who learns to eat with a knife and fork, play hide-and-seek, and do many other things that children learn to do.

A third potential source of confusion, closely related to the mereological fallacy, is a failure to distinguish between the brain and the mind. For some neuroscientists, evidently, the mind just is the brain under another description, or when considered as operating in a certain way (especially when ‘thinking’). According to Greenfield, ‘challenging the old dichotomy of mind versus brain, or mental versus physical, is one of the most important achievements of current neuroscience’ (Greenfield 2008:50). But, again, this is to confuse the factual deliverances of neurological research with their culturally slanted interpretation. Whatever Greenfield or like-minded neuroscientists may claim for their own discipline, there is no translation available which will convert the neurophysiological predicates of English (or any other language) into corresponding mental predicates. So it would save a lot of needless embranglement to keep the two separate from the start.

These cautions are all the more necessary in discussing literacy, since neuroscientists commonly fail to take due account of the fact that they are themselves products of a literate culture, and thus consider human activities—including their own research activities—in ways that are already coloured by having had a literate education.

But neuroscientists are by no means the only academics guilty of this form of myopia. Some of the major thinkers of the 20th century who concerned themselves with problems of language and logic—including Bertrand Russell and Ludwig Wittgenstein—seem to have attached little importance to the fact that they themselves were approaching these problems as literate members of a highly literate society with a long literate tradition. In their generalizations about language, they showed no sign of recognizing that the way language is conceptualized in a literate community may be quite different in important respects from the way language is conceptualized in a preliterate community. They never discuss their own literacy as a factor in their own thought processes. They proceed on tacit assumptions adopted by the (Western) literate mind, but never refer to the fact that they are doing so. In their thinking on these matters there is a literate ‘blind spot’.

It is, curiously, a blind spot also found in the thinking of many linguists.
Throughout modern linguistics there runs what can only be described as an ‘anti-scriptorial’ bias, leading in some cases to the denial that writing is a form of language at all (e.g. Saussure 1922:45; Bloomfield 1935:21). Clearly, if writing is not language there is no case for taking literacy into account when considering the relations between language and human reason. In this respect, modern linguistics might be seen as supplying the professional justification—if one were needed—for the neglect of writing by philosophers. If linguists can afford to marginalize writing, why should philosophers bother with it? But linguists rarely seem to recognize that their anti-scriptorial attitudes are literate prejudices. Their typical complaint about writing is that it often ‘misrepresents’ the spoken word, i.e. fails to do a job that the literate community expects it to do, rather than the (quite different) job that it is actually equipped to do.

The irony is all the more profound in that in both cases the linguist and the philosopher are nowadays academics whose very discipline depends on the availability of texts, editions, translations and all the other written resources of the modern university. If there had been a 21st-century Socrates whose teaching was exclusively oral, and who declined to put any ideas about language into written form at all, his work would not only never have survived: he would never have secured a university appointment in the first place, and least of all in a department of philosophy or of linguistics.

It seems worthwhile, therefore, to examine in some detail what drops out of sight when a literate blind spot is allowed to occlude the vision of those who engage with the relationship between language and reason. That is the main purpose of this book. My point of departure I have already stated: the ‘plasticity’ or ‘malleability’ of the human brain as established in contemporary neuroscience. This is important for my purposes, because it means there is no reason to believe that human rationality (whatever that may amount to) is somehow already built into the structures of the brain ab initio.

It remains to identify the goal I am aiming at. I shall argue for two interrelated theses: one, that conceptions of human rationality vary according to the view of language adopted; and the other, that the view of language adopted by the literate mind is not the same as the view of language adopted by the preliterate mind. This aim has to be situated in the context of a traditional view of humanity in which reason allegedly distinguishes Homo sapiens from other species; or, as Locke put it, ‘that faculty whereby man is supposed to be distinguished from beasts, and wherein it is evident he much surpasses them’ (Locke 1706: IV.xvi.1). The tradition goes back to the Greek definition of man as the rational animal; or, as the Earl of Rochester described him less flatteringly in the 17th century, ‘that vain animal, Who is so proud of being rational’. It was the same Rochester who called Reason ‘an ignis fatuus of the mind’. Arguably, Rochester was right; at least, if we understand him to be referring more specifically to one particular ignis fatuus of the literate mind.
The first thinker in the Western tradition to recognize the problem of rationality is generally reckoned to be Aristotle. But it is commonly overlooked that Aristotle himself was a literate thinker. Modern accounts of logic as the study of the principles of correct reasoning derive that definition ultimately from Aristotle’s formalization of the syllogism in the fourth century BC. But what lay behind Aristotle’s literate ‘solution’ to the problem is a question rarely raised.

There is an obvious risk that the discussion of questions like ‘What is rationality?’ and ‘What is language?’ will be seen as degenerating into no more than sorting out linguistic quibbles about the words rational and language. It is in the hope of guarding against any such degeneration, or even perceived degeneration, that the order of presentation in the present book has been adopted.

One particular range of contexts in which questions of language and rationality demand attention concerns the comparison of beliefs and practices that differentiate one society from another. Here again Greek thinkers were alert to the kinds of issue that such comparisons may raise, as we already see from some of Herodotus’s remarks about foreign peoples. If indeed rational principles were psychological universals, and all languages reflected roughly the same view of the world and of words, it would be difficult to explain the fact that some societies appear readily to entertain beliefs that other societies find quite ‘irrational’. This disparity became a problem that much exercised Western anthropologists many centuries after Herodotus. I shall take this as the point of departure for my discussion.

There are two advantages to proceeding in this way. First, the anthropologists’ problem in dealing with ‘primitive’ ways of thinking provides an interesting link between ancient and modern attitudes to the subject of language and rationality, and supplies essential historical background to more recent developments. Second, at the same time it illustrates the fact that we are not dealing here just with quibbles about words, but with issues that in practice can have important social, moral and even political implications.

Is there any alternative approach to these questions than the one enshrined in the Western literate tradition? The alternative proposed here involves taking a quite different view of language and meaning from that which has been dominant in the West for centuries. Such a view, based on an integrational theory of semiology, will be argued for in my final chapter.

***

The author would like to thank Mary Bartlett, Peter Crook, Peter Jones, Rita Harris and David Olson for valuable comments and suggestions, which he has tried to put to good use.

Roy Harris
Oxford, July 2008
1 On Rationality, the Mind and Scriptism

INTRODUCTION

If you can manage to read what is written on this page, or any comparable page of printed text, whether in English or any other language, then according to many theorists you have ipso facto demonstrated—if demonstration were needed—that you possess a literate mind. For many more, it would simultaneously be a demonstration that you have a rational mind, since no one but a rational creature would be able to master the mental processes that are involved in reading a written text. That assumed nexus between literacy and rationality is the focus of inquiry in the present book.

For some, however, the very expressions literate mind and rational mind beg questions that lie at the heart of any such inquiry. What is the mind? Do human beings have minds? These are problems of great generality and complexity; they cannot be pursued in detail here. Nevertheless, the reader of any book about writing and rationality may reasonably expect some preliminary account of what is going to be taken for granted about ‘the mind’. This introductory chapter aims to fulfil that authorial obligation.

SCEPTICISM ABOUT THE MIND

The existence of the mind has been doubted, both for theoretical reasons by philosophers and for physiological reasons by neurologists. According to some sceptics, talk about the mind is both misinformed and misleading, because there is no such thing. Its alleged existence has been called a ‘myth’ (Kenyon 1941). According to others, the mind does indeed exist, but turns out to be identical with the brain.

Such scepticism has two main sources. They are separate, but interconnected. One is the stalemate reached in the debate about the mind that was initiated in the 17th century by Descartes. The other is the widespread adoption of a paradigm of scientific method into which the study of the mind (as distinct from the brain) just does not fit.
It is possible to trace theories about the mind back much further than Descartes, but there would be little point in doing so for present purposes. We are sometimes told that Plato was the first thinker to draw a sharp distinction between the mind and the body. But it is far from clear that Plato or Greeks of his generation had a contrasting pair of terms corresponding unequivocally to the modern pair mind and body. What Greek writers often refer to is logos, a term which, in one of its many uses (Peters 1967:110–112), is frequently translated into English either as ‘speech’ or as ‘reason’. Logos is conceived of as what animals (as opposed to human beings) do not have; but it is often taken as lacking in barbarians (as opposed to Greeks) as well. It is also manifested on an altogether higher level in the cosmic order, where its presence can presumably be attributed only to the creator of the universe. Another Greek candidate for the role of mind is nous: but this does not quite fit the bill either. Anaxagoras claimed that nous originally set the whole cosmos in motion, and this seems a far cry from modern conceptions of the function of mind (Kirk, Raven and Schofield 1983:362–5).

According to Descartes (Principia Philosophiae, 1644), the mind that human beings have is a special kind of substance, radically different from the substance found in material objects. The latter has extension in space as its principal property or essence. Mind, on the other hand, has thought as its principal property: it is ‘thinking substance’. By advancing this thesis Descartes created for himself at least two problems. One was explaining how to make sense of the notion of an immaterial substance. The other was the problem of explaining how, in the human being, these two disparate kinds of substance were connected.

The Cartesian dichotomy set the framework for subsequent debate about the status of the mind. In this debate, all possible positions were eventually taken and even given names (‘identity theory’, ‘interactionism’, ‘neutral monism’, etc.). But all encountered difficulties and no clear consensus emerged. One way of closing such a debate is to deny the validity of the original dichotomy. Since the body and its brain are visible, whereas the mind is invisible, and since it is on the whole much easier to question the existence of the invisible rather than the visible, scepticism about the mind is the predictable result.

The other principal source of modern scepticism about the mind can be laid at the door of behaviourism in psychology. J.B. Watson, one of the founding fathers of the movement, memorably proclaimed that ‘what the psychologists have hitherto called thought’ is in fact ‘nothing but talking to ourselves’ (Watson 1924:238). According to Watson, ‘consciousness is neither a definite nor a usable concept’ and ‘belief in the existence of consciousness goes back to the ancient days of superstition and magic’ (Watson 1924:2). In itself, this explanation seemed to be no more than a deferral of the problem, since it was unclear how talking to oneself could be achieved without having a language in which to talk.
Watson was therefore obliged to elaborate a theory of language which described language acquisition as a process of developing verbal ‘habits’ through conditioned responses. This approach was taken up by linguists during the interwar period, its foremost exponent being Leonard Bloomfield. Bloomfield rejected what he called ‘mentalism’ in linguistics and proposed instead an account of linguistic meaning based solely on the mechanisms of stimulus and response (Bloomfield 1935).

THE VULGAR CONCEPT OF MIND

As far as the present inquiry is concerned, scepticism about the mind emanating from either of the two sources identified above may be considered an irrelevance. The methodological scruples of behaviourists are irrelevant, because the objective of the present inquiry is not to set up a ‘science’ of human thinking which copies the methods of the natural sciences. The philosophical ‘mind-body’ debate is irrelevant too, because it does not matter which of the various positions one favours, or whether one rejects them all. All that matters is that there should be some justification for a discourse that deals with the mind and the properties traditionally attributed to it.

That justification is not hard to find. It is based on the commonly felt need to discuss everyday aspects of human experience that cannot be discussed in any other terms so far devised by human ingenuity. Bill Jones needs no more justification for believing that sometimes he has thoughts than he needs for believing that sometimes he is hungry. Furthermore, when Bill Jones describes those thoughts, he has to describe them using mental terms (such terms as idea, belief, reason, conclusion, opinion, etc.), because there are no physical or physiological terms in which to describe them. When he asks other people what they are thinking he is not asking for a description of any current state of their brain or body. And when he speaks, as he may sometimes do, of ‘putting his ideas down on paper’, he is not likely to take much notice of a pedant who complains that what appears on the paper are not his ideas at all, but just written words.

Some philosophers would endorse Bill Jones’s blunt approach to matters of the mind. Thomas Reid declares that it is impossible to give a ‘logical definition’ of the word mind, but that nevertheless we are well aware what the mind is. ‘We are conscious that we think, and that we have a variety of thoughts of different kinds.’ It is on this basis that we take the mind of a man to be ‘that in him which thinks, remembers, reasons, wills’ (Reid 1764:132–3).

But in order to clear up a point which might possibly be the source of misunderstanding, it would be as well to state straight away that although the Bill-Jones approach may be described as ‘common sense’, it is not being advanced here as a philosophical position, of the kind championed by Reid, or G.E. Moore, or any of the other ‘common sense philosophers’.
If it were, then it would indeed call for supporting epistemological arguments. I do not propose to supply any, because that is not the status of Bill-Jonesian ‘mentalism’ in this book. What I endorse in this way is simply a linguistic warrant for continuing with certain everyday uses of certain familiar terms, including the term mind. To put the point somewhat differently, Bill Jones’s experiences and observations are the basis of what an Oxford philosopher once called ‘our vulgar concept of mind’ (Hampshire 1971:20), and the terminology used for discussing the mind throughout this book might be called ‘vulgar mindspeak’.

This vulgar concept of mind seems to be indispensable for dealing with large areas of our everyday lives. Any attempts to dismiss it out of hand as mistaken, or to pass beyond mindspeak to some higher form of scientific discourse which could or should replace it, are at best futile and at worst arrogant. Our vulgar concept of mind does not commit us to accepting all the far-from-vulgar claims about the mind that have been made by psychological theorists. It does not automatically legitimize the id, the superego, the archetype or other constructs of that order. Nor need it be the beginning of a slippery slope leading in that direction. On the contrary, it provides a quite robust but flexible safeguard against accepting any propositions about mental events that clash with common sense.

Thus, for example, relying on this vulgar concept of mind, I am no more persuaded to suppose that my thinking is being done for me by some homunculus inside my head than I take seriously the proposition that there is a homunculus in my stomach demanding food when I feel hungry. Similarly, I am disinclined to pay much attention to the theorists of so-called ‘distributed cognition’ when they claim that my pocket calculator is actually a portable extension of my mind that can do some of my thinking for me. The current intellectual fashion for describing the mind as ‘distributed’ seems to me at best a misleading synecdoche (as e.g. in Renfrew 2007), and at worst a category mistake (as e.g. in Sutton 2004). I am even less happy to believe that my brain is or contains a computer as hardware, and my mental operations are its software. On this I find myself in agreement with the conclusions reached by those who argue that the computational metaphor in ‘cognitive science’ is deeply misleading (e.g. Searle 1992) and that thinking of oneself as a computer program simply ‘is not coherent’ (Bennett and Hacker 2003:432).

The vulgar concept of mind, as I propose to deploy it here, can thus function as an effective prophylactic against various kinds of pretentious nonsense. It allows sufficient scope to deal with all the questions about writing that are examined in this book. At the same time, it has the advantage of making room to discuss many other things we find we need to take into account in order to make sense of the world we live in. The question whether we are right to believe that we have minds is a nonstarter. We have no option but to recognize our own mental activities. The notion that this recognition is somehow questionable, or illusory, or requires the support of proof or scientific evidence, is one kind of philosophical confusion.
It is caused by trying to decontextualize certain lay ways of talking about the mind and deal with them as if they belonged to a quite different realm of discourse.

Now the adoption of vulgar mindspeak does not commit anyone to any position about writing per se, and certainly to no assumptions about relations between the spoken word and the written word. I shall, however, find the expression literate mind useful, largely because it avoids having to keep repeating more tedious accounts of what can be taken for granted about the modes of communication characteristic of literate societies.

THE COLLECTIVE MIND

But that answer, someone will doubtless object, deals only with the mind of the individual. A more contentious issue concerns the extension from talking about the minds of individual human beings to talking about the mind of a collectivity. And isn’t that precisely what often happens in the course of discussions of the ‘literate mind’?

Bill Jones finds himself in some difficulty here. Collective minds are problematic because there seems to be nowhere for any collective mind to reside (other than in the separate minds of the individual members of the collectivity). Here we encounter a more general difficulty about attributing views or characteristics to a community, and in particular a large community, where in practice it is impossible to check exactly where every single individual stands in relation to the proposed description. Thus when Allan Bloom speaks of ‘the closing of the American mind’ he lays himself open to the charge that this is a preposterous generalization, because there is no such thing as the American mind. Bloom does seem prepared to believe that various other nations have distinctive minds of their own. He says of the French, for example:

Every Frenchman is born, or at least early on becomes, Cartesian or Pascalian. [ … ] Descartes and Pascal are national authors, and they tell the French people what their alternatives are, and afford a peculiar and powerful perspective on life’s perennial problems. They weave the fabric of souls. [ … ] It is not so much that the French get principles from these sources; rather they produce a cast of mind. (Bloom 1987:52)

Bloom would doubtless not have denied that it is possible to find French people who have never heard either of Descartes or of Pascal. Nevertheless, he might have said in self-defence, Descartes and Pascal do represent two different types of French thinking that are recognizable in many individual cases. The difference between these types consists in the readiness to accept certain assumptions and reject others, to worry about certain problems and not about others, to accord different priorities, and so on.
And if this is all that Bloom means, it is well within the bounds allowed by our vulgar concept of mind (even though rhetorical flourishes about weaving the fabric of souls might seem extravagant). To go this far with Bloom, however, is not to pass judgment on whether what he says about the French mind is accurate, misleading, oversimplified, etc. We must make up our own minds about that.

So what exactly does vulgar mindspeak license in the way of recognizing and characterizing collective minds? And does this differ from what should be licensed? It was precisely in order to deal with these questions that Durkheim elaborated his theory of ‘collective representations’. Seeking to lay the foundations of sociological method, he denied that a society could be reduced to the set of individual members it contained.

A whole is not identical with the sum of its parts. It is something different, and its properties differ from those of its component parts. (Durkheim 1895:102)

The strategy behind this move was to rule out of court from the very beginning any possible counterevidence from individual cases. Durkheim insists that ‘there cannot be a society in which the individuals do not differ more or less from the collective type’ (Durkheim 1895:70). The reason for this is that ‘the immediate physical milieu in which each one of us is placed, the hereditary antecedents, and the social influences vary from one individual to the next, and consequently diversify consciousnesses’ (Durkheim 1895:69).

He further argued that the individual, in any case, is moulded by society. Society’s laws must be obeyed. Society’s standards of conduct are taught, whether by example or by explicit instruction. From the moment of birth, the individual is subject to these constraints.

When I fulfil my obligations as brother, husband, or citizen, when I execute my contracts, I perform duties which are defined, externally to myself and my acts, in law and in custom. [ … ] The system of signs I use to express my thought, the system of currency I employ to pay my debts, the instruments of credit I utilize in my commercial relations, the practices followed in my profession, etc., function independently of my own use of them. And these statements can be repeated for each member of society. Here, then, are ways of acting, thinking, and feeling that present the noteworthy property of existing outside the individual consciousness. (Durkheim 1895:1–2)

The constraints thus imposed upon the individual, says Durkheim, are ‘nonetheless efficacious for being indirect’:

I am not obliged to speak French with my fellow-countrymen nor to use the legal currency, but I cannot possibly do otherwise. If I tried to escape this necessity, my attempt would fail miserably. (Durkheim 1895:3)
Bill Jones would probably agree with Durkheim on most of this. Although Durkheim does not specifically discuss writing, it is clear that a given writing system, used throughout a given society and handed down from one generation to the next, would provide a paradigm case of the kind of institutionalized social practice attributable to the collectivity. Accordingly, one might identify a literate society à la Durkheim as a society in which the use of a writing system has become one of the recognized institutions. The trouble is that what we know of the history of writing indicates that in many societies writing remained for long periods a practice with which only a small proportion of the population would have been familiar, and for a restricted range of purposes. This makes it difficult to say, without further qualification, that the (collective) literate mind is the kind of mind characteristic of a society that has institutionalized writing; quite apart from any wider misgivings that one might have about the legitimacy of generalizing about ‘the Greek mind’, ‘the Chinese mind’, and so on.

Here vulgar mindspeak flows into muddy waters. Pace Bloom, it is difficult to accept that ‘the American mind’ and ‘the French mind’ are any more than convenient journalistic pegs on which to hang a few suspect overgeneralizations. There are even more serious objections to using the term mind on a more ambitious scale to support generalizations across societies, where, strictly speaking, no theoretical foundation (either of a Durkheimian or any other kind) has been laid for it. Many people are, understandably, suspicious of those who talk of ‘the female mind’, ‘the adolescent mind’, ‘the criminal mind’ and similar grandiose abstractions. And this is the kind of company that the expression literate mind sometimes seems to be keeping. In particular, it would be unfortunate if that expression were taken to assume or imply that there is only one form of literacy and hence only one kind of literate mind. On the contrary, research in recent years has shown that in literate societies there will usually be a gamut of different ‘literacies’ in play.

Can the literate mind be rescued from associations with this dubious collection of abstractions? Part of the problem with proceeding straight away to do that is that it will involve raising exactly the same issues as those that will need to be raised in the following chapters anyway. So it seems more sensible to start simply by taking a particular example, and treat that as an illustration of the kinds of assumptions about writing and rationality which one kind of literate mind tends to take for granted.

LITERACY AND RATIONALITY

Let us plunge in at the deep end. Consider the following passage:
Two different reasoners might infer the same conclusion from the same premisses; and yet their proceeding might be governed by habits which would be formulated in different, or even conflicting, leading principles. Only that man’s reasoning would be good whose leading principle was true for all possible cases. It is not essential that the reasoner should have a distinct apprehension of the leading principle of the habit which governs his reasoning; it is sufficient that he should be conscious of proceeding according to a general method, and that he should hold that that method is generally apt to lead to the truth. He may even conceive himself to be following one leading principle when, in reality, he is following another, and may consequently blunder in his conclusion. From the effective leading principle, together with the premisses, the propriety of accepting the conclusion in such sense as it is accepted follows necessarily in every case.

Already we are dealing with a discussion that no one is likely to be able to follow easily unless familiar with such terms as premiss, principle and conclusion, and with a syntax that is far removed from that of everyday conversation; in short, the reader addressed is presumed to be a well educated person, and a well educated person is one familiar with reading and studying written texts of this level of ‘difficulty’. But matters get worse as the passage continues:

Suppose that the leading principle involves two propositions, L and L', and suppose that there are three premisses, P, P', P''; and let C signify the acceptance of the conclusion, as it is accepted, either as true, or as a legitimate approximation to the truth, or as an assumption conducive to the ascertainment of the truth. Then, from the five premisses L, L', P, P', P'', the inference to C would be necessary; but it would not be so from L, L', P', P'' alone, for if it were, P would not really act as a premiss at all. From P' and P'' as the sole premisses, C would follow, if the leading principle consisted of L, L', and P. Or from the four premisses L', P, P', P'', the same conclusion would follow if L alone were the leading principle. What, then, could be the leading principle of the inference of C from all five propositions L, L', P, P', P'', taken as premisses? It would be something already implied in those premisses; and it might be almost any general proposition so implied.

Here the exposition proceeds by means of notational devices (the use of capital letters and diacritics) that have no semiological counterpart at all in a preliterate society. It is not just that the vocabulary comes from a literate register (the jargon of the logician), but that the form of argument is quite incomprehensible to anyone unfamiliar with certain techniques of writing. In short, it cannot be translated into the speech of any primary oral culture, any more than the mathematical reasoning necessary to prove Pythagoras’s theorem can somehow be explained to people who have never seen a diagram of a triangle.
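The skeleton of the second passage can be set out in modern turnstile notation (a schematic paraphrase supplied here for orientation, not part of the original article; ‘⊢’ may be read ‘the conclusion follows from’):

\[
\begin{aligned}
L,\ L',\ P,\ P',\ P'' \;&\vdash\; C && \text{(all five propositions taken as premisses)}\\
P',\ P'' \;&\vdash\; C && \text{(if the leading principle consists of } L, L' \text{ and } P\text{)}\\
L',\ P,\ P',\ P'' \;&\vdash\; C && \text{(if } L \text{ alone is the leading principle)}\\
L,\ L',\ P',\ P'' \;&\nvdash\; C && \text{(otherwise } P \text{ would not act as a premiss at all)}
\end{aligned}
\]

If anything, the restatement reinforces the point just made: the whole argument lives in written notation.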
The passage quoted is from an article by Charles Sanders Peirce, whose writings on logic fall into a tradition that stretches back as far as Aristotle. The tradition in which Peirce’s contribution takes its place is a literate tradition all the way through. It never had a purely oral form. Peirce’s article was written for a dictionary published in 1902 (Hartshorne and Weiss 1931–5: 2.589). Preliterate societies do not have dictionaries. These points may be so well known that it might seem unnecessary to repeat them here. But it is the simplest way of drawing attention to a fact that is easily overlooked; namely, that a passage such as that cited above comes to us pre-wrapped, as it were, in various layers of literate assumptions. The very practice of verbatim quotation from one publication to another is itself a literate device with its own conventions. Can it be assumed that these layers of literacy have had no effect on what Peirce and his readers expect a rational argument to be? Or would this be one more layer to unwrap?

PEIRCE’S VIEW OF RATIONALITY

Peirce’s work, as it happens, provides an interesting exemplification of the extent to which concepts of writing can come to dominate interpretations of thinking in general. Peirce startled those logicians who took a more traditional approach to their subject, for he roundly equated logic with semiotic. Logic, according to Peirce, is ‘only another name for semiotic’, the latter being ‘the quasi-necessary, or formal, doctrine of signs’ (Peirce 1897; Hartshorne and Weiss 1931–5: 2.227). In order to understand how Peirce arrives at this unorthodox equation it is important to grasp that a language is, for Peirce, a system of public signs, one of many systems developed in human societies. Words are signs, written words no less than spoken words. When citing examples, Peirce often assumes the word to be a written form. Thus he regards the ‘material qualities’ of ‘the word “man”’ as ‘its consisting of three letters’ (Peirce 1868; Hartshorne and Weiss 1931–5: 5.287), a description that could hardly apply prima facie to any corresponding monosyllabic utterance.

A sign is ‘an object which stands for another to some mind’ (Peirce 1873; Hoopes 1991:141). It cannot exist in isolation from this triadic relationship. Furthermore, this triadic relationship is ‘real’: it is not just the product of one possible way of looking at certain kinds of experience. (For this dimension of reality Peirce has a technical term of his own: thirdness.) A sign is ‘only a sign to that mind which so considers and if it is not a sign to any mind it is not a sign at all’ (Peirce 1873; Hoopes 1991:142). Words, for Peirce, are man-made signs.

Man makes the word, and the word means nothing which the man has not made it mean, and that only to some man. (Peirce 1868; Hartshorne and Weiss 1931–5: 5.313)
This is far from familiar Aristotelian conventionalism. In Peirce’s perspective, there can be no establishment of verbal conventions, or social institutions based on verbal conventions, without man first making the word. The sign comes first. But this does not mean that a thinking mind is somehow prior to the sign. Without signs of some kind, there is no thinking at all. And this leads to the most controversial claim of all in Peirce’s philosophy of mind: that ‘man can think only by means of words or other external symbols’ (Peirce 1868; Hartshorne and Weiss 1931–5: 5.313).

That puts rationality in quite a different light from any with which we might be familiar from the study of Aristotle. It means that speaking and writing are not just external manifestations of thought: they are themselves forms of thought. So an Aristotelian syllogism of the kind laid out in the traditional textbooks of logic is not just the verbal expression of some (hypothetical) inner wordless process of reasoning: it actually is reasoning in vivo.

How can this be? How, the critic will ask, can a philosopher just abolish, elide or ignore the commonsense difference between reasoning and its material form of expression? What underlies Peirce’s argument is his celebrated distinction between type and token. But it is a distinction that only a literate mind could draw. This is already evident from the way Peirce introduces it:

A common mode of estimating the amount of matter in a MS. or printed book is to count the number of words. There will ordinarily be about twenty the’s on the page, and of course they count as twenty words. In another sense of the word “word,” however, there is but one word ‘the’ in the English language; and it is impossible that this word should lie visibly on a page or be heard in any voice, for the reason that it is not a Single thing or Single event. It does not exist; it only determines things that do exist. Such a definitely significant Form, I propose to term a Type. A Single event which happens just once and whose identity is limited to that one happening or a Single object or thing which is in some single place at any one instant of time, such event or thing being significant only as occurring just when and where it does, such as this or that word on a single line of a single page of a single copy of a book, I will venture to call a Token. (Peirce 1906; Hartshorne and Weiss 1931–5: 4.537)

Such a distinction, it seems self-evident, would make no sense at all to members of a preliterate community. In preliterate communities no one is ever called upon to ‘estimate the amount of matter’ in what is said by one of its members. Or if they did, it would probably be in terms of how long it took to say it.
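For the modern literate mind, by contrast, the distinction is second nature: it is exactly what corpus linguists count when they distinguish word ‘tokens’ from word ‘types’. A minimal sketch in Python (the sentence and the counts are invented for illustration; nothing here comes from Peirce’s or Harris’s text):

from collections import Counter

# An invented example sentence, standing in for Peirce's printed page.
sentence = "the cat sat on the mat by the door"

tokens = sentence.split()    # each occurrence counted separately: Peirce's Tokens
types = Counter(tokens)      # each distinct form counted once: Peirce's Types

print(len(tokens))   # 9 tokens on this 'page'
print(len(types))    # 7 types: 'the' supplies three tokens but only one type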
For Peirce, however, as one commentator observes, ‘the type-token relation is intended to clarify the ontological status of signs of all kinds’. In particular, Peirce draws his examples from written and printed language, where the intuitive identities on which the notion of type-token relations are based are at the most ‘visible’. Peirce therefore fails to perceive the definitional problems involved with the notion of sameness of form. There is a precise but misleading analogy that holds between the mechanics of printing (where each letter(-type) is a single item in a type-face) and the relations of identity and similarity that are conceived of as underlying language use. (Hutton 1990:19)

In short, Peirce’s view of the sign is the view of Homo typographicus. There could hardly be a clearer example of one literate mind extrapolating a general theory of reasoning from its experience of the written text. If this were an isolated case, it could be dismissed as of no great significance. But to what extent does it show in starkly perspicuous form a certain modus operandi that had long been taken for granted less obviously in a whole tradition of Western thinking?

THE PRESTIGE OF WRITING

The impact of writing on the organization of human societies is widely acknowledged and well documented: the impact of writing on the way people think is enigmatic and controversial. Is it true, as has been claimed, that ‘writing restructures consciousness’? Did the invention of writing either require a new mentality or have a profound effect on mental processes? Does the literate mind differ in important respects from its predecessor, the preliterate mind? Or is this a flattering deception fostered by literacy itself?

These questions are central to the discussion set out in the present book. That discussion will in a number of places assume the following: that habitual familiarity with the practices of reading and writing (of the kind that we take for granted in Western societies today) does induce a particular mind-set that may be called ‘scriptism’; that is, a belief in the superiority in various respects of written languages over spoken languages, and of the mastery of writing, as an intellectual achievement, over the mere command of fluent speech. According to Talbot Taylor, scriptism is ‘the influence of writing on the conceptualization of speech’:

Scriptism tends to push the difference between spoken language and written language farther and farther out to the periphery of the communicative act. The essential features of speech and writing are assumed to be the same. (Taylor 1997:52)

Taylor points out that some psychologists are so prone to scriptism as to hold that ‘ideal speech delivery is that which approximates the practised reading aloud of a written text’.
It is but a short step to conceptualizing speech itself as ‘the reading off of a mental text’ (Taylor 1997:52–3). Per Linell prefers the term written language bias to scriptism. He identifies and discusses in detail 101 features of this bias in modern linguistics (Linell 2005). Scriptism is sometimes defined more narrowly as:

the tendency of linguists to base their analyses on writing-induced concepts such as Phoneme, Word, Literal Meaning and ‘sentence’, while at the same time subscribing to the principle of the primacy of speech for linguistic inquiry. (Coulmas 1996:455)

While this is certainly one conspicuous aspect of scriptism in modern linguistics, the term has a broader application. Scriptism in various forms is now endemic in Western societies because it is built into all public programmes of elementary education. It is manifested ubiquitously in the psychological attitudes that Ferdinand de Saussure once summed up in the phrase ‘le prestige de l’écriture’ (‘the prestige of writing’). This prestige has a great deal to do with the historical fact that for many centuries writing and reading were abilities confined to relatively small and privileged sections of society. It also has to do with the fact that writing and reading have been indispensable for the exercise of power and administration in political units ranging from city-states to large empires. Throughout human history, whoever controlled writing and reading has always been able to control much more.

The difference between scriptism and the mere acceptance of written practices comes out in all kinds of ways. For centuries it was possible to sign a document either by inscribing one’s own name or by marking a cross against one’s name already inscribed on the document. It is possible to imagine a society in which these two forms of signature were held to be not merely of equal legal validity but also of equal status. However, as soon as signing with a cross is seen as betraying the inability to write one’s own name, and this is regarded as a social or intellectual deficit, we are already in a scriptist society.

We are in a scriptist society as soon as writing is accepted by philosophers and others as a general model for all processes of communication and understanding. The birth of modern science in England is presided over by the ubiquitous scriptist metaphor of ‘reading the book of Nature’. It was already familiar to Shakespeare. Francis Bacon appeals to it on various occasions. A man cannot be ‘too well studied in the book of God’s word, or in the book of God’s works; divinity or philosophy’ (Bacon 1605: I.i.3). According to George Berkeley’s New Theory of Vision, the writing in the book of Nature is plainly alphabetic. Berkeley appeals to the correspondence between sounds and letters to explain how it is that in Nature the correspondence between the visible and the tangible is everywhere the same:
visible figures represent tangible figures much after the same manner that written words do sounds. Now, in this respect words are not arbitrary, it not being indifferent what written word stands for any sound: but it is requisite that each word contain in it so many distinct characters as there are variations in the sound it stands for. [ … ] It is indeed arbitrary that, in general, letters of any language represent sounds at all: but when that is once agreed, it is not arbitrary what combination of letters shall represent this or that particular sound. (Berkeley 1732:143)

It seems very likely that Berkeley has in mind here another scriptist enterprise: John Wilkins’ famous Essay towards a Real Character and a Philosophical Language of 1668, where the author attempts to devise a form of writing that will transcribe ideas ‘directly’, and thus free mankind from the ‘curse’ imposed upon it at Babel (Harris 2000).

Scriptist attitudes tend to be encouraged in societies with a dominating book-based religion, even when—and perhaps especially when—the population in general is illiterate. In this situation, the writing and copying of sacred texts invests literacy itself with a kind of sacrosanctity. In various civilizations we find deities of writing, which is often regarded as a gift from the gods, or from one particular god (Coulmas 1996:119–24).

Scriptism in the West never attained the profundity that is apparent in China, where the written language itself came to be regarded as basic and spoken versions (often mutually incomprehensible) as imperfect derivatives. Doubtless this has a great deal to do with two facts. One is that traditional Chinese script remained fairly uniform and readable long after wide divergences developed between spoken dialects. The other is that Chinese writing is not alphabetic, and learning even a small inventory of written characters requires a far greater investment of time and energy than European children are accustomed to expend on learning the alphabet. In these circumstances, the identification of ‘the Chinese language’ with its script is not surprising.

It is far more surprising—at least, at first sight—to find that as late as the 19th century Western scholars had difficulty in distinguishing clearly between the study of speech and the study of writing. This, at least, is the accusation that has been made against one of the founders of European Comparative Philology, Franz Bopp:

Even Bopp does not distinguish clearly between letters and sounds. Reading Bopp, we might think that a language is inseparable from its alphabet. (Saussure 1922:46)

The charge is all the more telling, coming as it does from a text that has been and still is regarded by many as the Magna Carta of modern linguistics, Saussure’s Cours de linguistique générale.
worthy of note. One is that Saussure here treats a failure to understand the relationship between speech and writing as responsible for a more profound failure to understand what language is (and not the other way round). The other is that Bopp’s confusion (if Saussure’s diagnosis is right) can be traced all the way back to Plato: it is rife throughout the Western tradition.

DISAMBIGUATING ‘RATIONALITY’

A number of other confusions and potential confusions merit a preliminary mention here. One is the tendency to confuse rationality with rationalism. Grouped under the rubric rationalism we find a variety of philosophical positions, associated with thinkers as far apart in their philosophical stance as Descartes, Spinoza and even Aquinas. Rationalism is commonly contrasted with empiricism, but there is no such contrast between rationality and empiricism: on the contrary, empiricists commonly claim their views to be eminently rational.

Feeding into this confusion is the practice of some writers of conflating the adjectives rational and rationalist. For example, Rosalind Thomas speaks of a ‘rationalist view of writing’ in connexion with ancient Greece (Thomas R. 1992:74). But what this has to do with rationalism is not clear, since what she counts as ‘rationalist’ uses of writing seem to cover every kind of case except those which are ‘symbolic or non-documentary’. Even more confusingly, she describes these latter uses (e.g. writing on curse tablets) as ‘non-literate’. These are not happy choices of terminology. They are nevertheless worth mentioning here because they highlight the still-prevalent tendency among Western scholars to assume without question that the mastery of writing is eo ipso a manifestation of reasoning, and literacy itself a proof of possessing a rational mind. We are dealing here with survivals of the 19th-century belief in the mental inferiority of preliterate people.

In parallel fashion, the term irrationalism is made to subsume the positions advanced by such disparate thinkers as Pascal, Schopenhauer and Nietzsche. But those most committed to preaching the limits and deceptions of human reason usually present their conclusions as being reached by some rational process. This applies even to advocates of philosophies which celebrate or recommend the abandonment of reason. As one commentator remarks, we seem to be dealing here with ‘a curious form of inverted rationalism; the rational response to an irrational world is to act irrationally’ (Gardiner 1967).

Thus debates between rationalists and their opponents tend to shed little light on the matters to be explored in this book. Rather, such debates typically presuppose that all parties either already agree, or have already
agreed to disagree, on what rationality is: so the need to pursue it further is not even recognized.

An assimilation that has greatly confused the issue in recent times has been the self-serving equation of modern ‘science’ with reasoned thinking about the natural world. The assimilation has been adopted enthusiastically by some philosophers. ‘Science is essentially logical,’ proclaimed Alfred North Whitehead in his presidential address to the British Association in 1916 (Whitehead 1917:114), adding: ‘The nexus between its concepts is a logical nexus, and the grounds for its detailed assertions are logical grounds.’ It is difficult to know whether, if true, this would be more crippling to science or to logic. In either case, it would certainly have been news to Aristotle.

Discussions of rationality often get led astray by other diversions. One of these arises from a widespread tendency to treat rational and reasonable as synonyms. (The present writer assumes that a moment’s reflection should suffice to convince anyone that they are not.) Another is the practice of describing a course of action as ‘rational’ when what is actually being judged is not the action performed but the thinking presumed to lie behind it. A third is that both philosophers and psychologists often confuse reasons with motives. (The word because … may serve to introduce either.) When all three sources of uncertainty combine (a good example is the chapter on ‘Rational Action’ in Graham 1998) the result is a quagmire of confusions. We may be invited to believe that the same course of conduct may be simultaneously ‘rational’ and ‘irrational’, that there is a difference between ‘subjective’ and ‘objective’ rationality even though both may coincide, that rationality has something to do with personal intelligence or strength of conviction, that invoking ‘bad reasons’ constitutes reasoning all the same, and so on.

To avoid this quagmire, it may be as well to state here that the rationality under discussion in the following chapters has to do strictly with establishing the validity of conclusions. Anyone inclined to believe that the connexion between the conclusion of a valid syllogism and its premises varies according to one’s ‘point of view’, or depends on what is taken to be ‘subjective’ and what ‘objective’, will be worrying about concerns that are not dealt with in this book. Within the programmatic perspective just stated, there are already enough problems to be addressed, without dragging in extraneous considerations of a different order altogether. Unless attention remains focussed upon valid inference (of which countless examples are given in elementary textbooks of logic) the discussion of rationality easily slides into a free-for-all about the psychology of belief and its relation to human behaviour.

The above provisos are not intended to rule out in advance the possibility that different societies or different individuals have developed somewhat different conceptions of rationality. Nevertheless, if they are to qualify as
conceptions of rationality in the first place, they must involve a general notion of what ‘follows from’ what (in that special sense of ‘following’ in which conclusions are said to do so) and, furthermore, how that kind of consecution is to be distinguished from what does not follow. Where no such difference was ever entertained there could hardly be said to be any grasp of rationality at all.

The discussion set out here will try to steer clear of the many traps mentioned above. It will be useful to begin by considering some problems about rationality where, at least on the surface, the question of literacy does not even arise.
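Since everything in the chapters that follow turns on this notion of validity, it may help to display it schematically before proceeding. What follows is a minimal sketch in LaTeX notation, with premises above the line and conclusion below it; the examples are the logician’s stock ones, supplied here purely for illustration and not drawn from any of the authors discussed in this book.

% A valid inference: the conclusion follows from the premises,
% whatever the subject matter. (LaTeX; amsmath assumed for \text,
% amssymb for \therefore.)
\[
\frac{\text{All men are mortal} \qquad \text{Socrates is a man}}
     {\therefore\ \text{Socrates is mortal}}
\]
% An invalid inference (undistributed middle): the conclusion does
% not follow, even though each statement taken singly may be true.
\[
\frac{\text{All cats are mortal} \qquad \text{Socrates is mortal}}
     {\text{Socrates is a cat}}
\]

That the first schema is valid and the second is not owes nothing to anyone’s ‘point of view’: rationality in that austere sense is what the following chapters pursue.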
2 The Primitive Mind Revisited

THE DOCTRINE OF THE PRIMITIVE MIND

On first inspection there appears to be no connexion at all between the way Peirce qua logician conducts his discussion of the question ‘What is logic?’ and a view of the mind that was becoming influential at about the same time in the nascent field of anthropology. But that appearance is misleading. In its simplest form, the anthropological doctrine of the primitive mind maintained that the members of so-called ‘primitive’ societies, existing from the earliest times, and in some cases surviving down to the present day, have always had their own ways of thinking about the world. These ways are assumed to be, collectively, quite different from those familiar and valued in civilized countries. The doctrine was prominent in discussions of the 19th and early 20th centuries. As one anthropologist puts it in retrospect:

The nineteenth-century imagination had been deeply impressed by the contrast between two of its favourite stereotypes, that of ancestral primitive man and that of modern, scientific man. Even now the fascination of the divide remains. (Douglas 1980:15–16)

Another anthropologist states more bluntly: ‘European social standards of the period were being treated as definitive categories’ (Lienhardt 1966:9). According to Lienhardt:

This is explicit in the common use of the terms ‘higher’ and ‘lower’ races. Since it was taken for granted that the highest standards in knowledge, morals, and religion were at that time to be found among the educated classes of Europe and America, it was inferred that the converse of those standards must have been those of our earliest ancestors, of whom some living primitive tribes were thought to be the lingering survivals. (Lienhardt 1966:9–10)
These assumptions fitted conveniently into the brief that European colonial powers took as their mandate for dealing with subject peoples throughout the period in question. The term primitive was well suited to play on, or disguise, a convenient ambiguity between chronological and evaluative implications (i.e. ‘early’ versus ‘crude, rudimentary’).

Since the end of the 19th century, the doctrine of the primitive mind, in various forms, has been a more or less perennial focus of debate in the history of anthropology. It is reflected in such well-known titles as The Mind of Primitive Man (Boas 1911), La structure de la mentalité primitive (Van der Leeuw 1928), La pensée sauvage (Lévi-Strauss 1962), The Domestication of the Savage Mind (Goody 1977) and The Foundations of Primitive Thought (Hallpike 1979). In more recent times, far from dying down, the debate has acquired political implications and become enmeshed in arguments about racism, ethnocentricity and ‘political correctness’.

The nub of the matter is that the contrast drawn is always with the thinking held to be characteristic of people living in ‘modern’ societies; the very same societies from which students of anthropology were and are themselves drawn. In other words, these modern societies and their thinking are assumed to be represented typically by Western countries of the present day, where programmes of universal or near-universal education have been in place for generations. Thus ‘modern’ tends to imply ‘civilized’ and ‘educated’, while ‘primitive’ tends to imply ‘uncivilized’ and ‘uneducated’ (i.e. by Western standards). On this basis, the distinction is often dismissed out of hand as just another form of prejudice which treats ‘lesser breeds without the law’ as inferior. Their inferiority is seen as rooted in their lacking certain mental capacities and manifested in their ineradicable tendency to think about the world in naive and superstitious ways. Thus anyone subscribing to the doctrine of the primitive mind might be taken as believing

that we were at one end of the scale of human progress and the so-called savages were at the other end, and that, because primitive men were on a rather low technological level, their thought and custom must in all respects be the antithesis of ours. (Evans-Pritchard 1965:105)

Evans-Pritchard goes on to point out that Herbert Spencer held the mind of primitive man to be ‘unspeculative, uncritical, incapable of generalizing, and with scarcely any notions save those yielded by the perceptions’, that Max Müller apparently accepted that the Veddahs of Ceylon had no language, and that Sir Francis Galton claimed that his dog had more intelligence than the Damara tribesmen he encountered on his travels.

With these associations, the doctrine of the primitive mind has been for a long time in bad odour in intellectual circles. Its emergence is seen as revealing an unacceptable face of anthropology, a blot on the record of an otherwise praiseworthy discipline.
Some anthropologists have felt it necessary to apologize for the word primitive, even while continuing to use it. Lucy Mair, for example, in her book Primitive Government, says:

if I write of primitive societies I am not implying anything about the characteristics of the persons who compose them—least of all that such persons have remained in the childhood stage of a human race whose maturity is represented by the ‘western’ nations. It is ways of doing things which can be described as primitive or otherwise. (Mair 1964:8. Italics in the original.)

She concedes, however, that modern European conquests of overseas territories were in large part due to the superiority of European technology in various fields, just as in antiquity the technical superiority of the Romans had enabled them to extend their domination over the whole Mediterranean basin. In the field of technology, she argues, it is legitimate to speak of rudimentary or primitive peoples, adding that this is ‘the only sense in which a modern anthropologist would use the word’. Thus it seems that for Mair primitive societies are to be judged primitive only on the technological level; for on that level they ‘really are’ primitive, as history has proved by their succumbing to European conquest and living under European rule.

Sir Raymond Firth, professor of anthropology in the University of London, condemned the word primitive in the following terms:

The terms ‘primitive’, ‘savage’ and ‘native’, which were widely current as recently as thirty years or so ago as contrast to ‘civilised’, i.e. primarily western and Asian developed cultures, have become no longer appropriate. They are not only old-fashioned, they are also felt to be derogatory. Anthropologists have often explained that by ‘primitive’ they meant simply a reference to lack of technical development, without implication for social, moral or religious development. ‘Primitive’ kinship systems, for instance, are often very complex, far more so than western systems; and ‘primitive’ religious ideas are often of considerable sophistication and depth of meaning. But the meaning of the term often remained confused for non-anthropologists. And technical development has been so rapid and extensive, whether on remote Pacific islands or in the heart of Africa, that ‘primitive’, even in this sense, is not correct. So the term is best abandoned. (Firth 1975:17)

Evans-Pritchard, while deploring the word primitive, continues to use it:

some people today find it embarrassing to hear peoples described as primitives or natives, and even more so to hear them spoken of as savages. But I am sometimes obliged to use the designations of my authors, who wrote in the robust language of a time when offence to the peoples
they wrote about could scarcely be given, the good time of Victorian prosperity and progress, and, one may add, smugness, our pomp of yesterday. But the words are used by me in what Weber calls a value-free sense, and they are etymologically unobjectionable. In any case, the use of the word ‘primitive’ to describe peoples living in small-scale societies with a simple material culture and lacking literature is too firmly established to be eliminated. (Evans-Pritchard 1965:18)

In all the cases quoted above, the embarrassment is obvious, but the excuses offered do not stand up to serious scrutiny. In the case of Mair, one question that is begged is why, even in the field of technology, superiority should be judged by political and military domination. Another is whether inferiority of technology might be due to primitive ways of thinking about technical problems. To say that it is ways of doing things which can be described as primitive or otherwise immediately invites the riposte that thinking is also a way of doing things. Moreover, insofar as technology itself may be regarded as the application of rational procedures, a judgment of the primitiveness of technology

may be taken to imply a judgement about rationality, and concedes something to the idea that the criteria of rationality in western society may be properly applied more widely, even if it does not establish them as the universally valid criteria of rationality. (Wilson 1970:x)

In the case of Firth, the term primitive is rejected altogether; but when we inspect Firth’s writings we find that he speaks instead of ‘simple’ or ‘simpler’ societies, ‘exotic’ societies, ‘lowly developed’ societies. It is difficult not to conclude that these terms are euphemisms for primitive. Evans-Pritchard claims that his own use of the term primitive is ‘value-free’. But whether it can be ‘value-free’ is precisely the issue at stake; or, to put it more bluntly, whether the ‘value-freedom’ on which some anthropologists congratulate themselves is not another case of intellectual self-deception.

GREEKS AND BARBARIANS

The doctrine of the primitive mind had its precursors in antiquity. In the first book of his Histories, Herodotus tells his readers:

The following are certain Persian customs which I can describe from personal knowledge. The erection of statues, temples, and altars is not an accepted practice amongst them, and anyone who does such a thing is considered a fool, because, presumably, the Persian religion
is not anthropomorphic, like the Greek. Zeus, in their system, is the whole circle of the heavens, and they sacrifice to him from the tops of mountains. They also worship the sun, moon, and earth, fire, water, and winds, which are their only original deities: it was later that they learned from the Assyrians and Arabians the cult of Uranian Aphrodite. The Assyrian name for Aphrodite is Mylitta, the Arabian Alilat, the Persian Mitra. As for ceremonial, when they offer sacrifice to the deities I mentioned, they erect no altar and kindle no fire; the libation, the flute-music, the garlands, the sprinkled meal—all these things, familiar to us, they have no use for. (Herodotus 1.130)

If Herodotus is the father of history, it is evident that human curiosity about the strange beliefs and customs of foreign peoples is coeval with history itself. Because he paid considerable attention to such matters, Herodotus has also been called the ‘father of ethnography’ (Malefijt 1974:5). But, as with the Greeks’ interest in their own ancient myths and traditions, there is a wide gap between mere curiosity and genuine anthropological inquiry. It is not enough to seek out further examples from a wide range of sources. It is not enough to compare and contrast them, or even to offer occasional ‘explanations’ (as Hecataeus of Miletus appears to have been interested in doing as early as the 6th century BC). It is not enough to accumulate and classify them. What is needed is a systematic effort to incorporate this mass of material into some more comprehensive account of human societies and human attempts to understand how they work.

In antiquity the prerequisites for constructing such an account were lacking. Aristotle’s theory of politics was the best that Greece could achieve; but Aristotle’s view of society was nothing if not Hellenocentric. Although he collected information about the institutions of non-Greek communities (Barker, E. 1946:387), his observations regarding them are few and far between. Nor did he seem to regard it worth while inquiring very closely into their outlandish customs or ways of life. However, the notion that the Greeks had a different mentality from other peoples (and a superior one) predates Aristotle. It is implicit in the Classical concept of ‘the barbarian’, which may be seen as underlying Herodotus’ descriptions of the exotic practices of peoples such as the Persians, Egyptians and Scythians.

Apart from a lack of competence in Greek [ … ], the barbarian’s defining feature is an absence of the moral responsibility required to exercise political freedom: the two are connected, since both imply a lack of logos, the ability to reason and speak (sc. Greek) characteristic of the adult male citizen. (Wiedemann 1996:233)
The barbarians’ lack of logos can thus be seen as explaining their un-Greek practices and beliefs, as well as providing a reason why these bizarre practices and beliefs are hardly worthy of serious study. In addition, it offers a justification for the theory of natural slavery that we find in Aristotle (the demographic fact being that the great majority of slaves in the Greek world were barbarians).

By late antiquity a different distinction between ‘us’ and ‘them’ was in place. This was the religious distinction between Christian and pagan—a distinction no less prejudicial to genuine anthropological inquiry and one which remained so for centuries. A view of the world in which the two most important events in human history were the Fall of Adam and the birth of Jesus is hardly conducive to investigating the ignorant minds of those who have never heard of either. Nor does the assumption that a Christian’s duty is to convert the heathen from their pagan ways, and as soon as possible, provide an encouraging introduction to patient and tolerant inquiry into the beliefs and practices of other peoples. On the contrary, the elimination of those misguided beliefs and practices becomes a priority. Romantic idealizations of the ‘noble savage’, unsullied by civilization, played no part in the traditional programme of missionary Christianity.

DARWIN AND COMTE

The idealistic picture of a primeval Golden Age, followed by deterioration and diversification, was eventually challenged by a new evolutionary concept of gradual progress, converging everywhere in the same direction. Only then did it become possible to lay the foundations for a different approach to the study of ‘strange’ societies.

To believe that man was aboriginally civilised and then suffered utter degradation in so many regions, is to take a pitiably low view of human nature. It is apparently a truer and more cheerful view that progress has been much more general than retrogression; that man has risen, though by slow and interrupted steps, from a lowly condition to the highest standard as yet attained by him in knowledge, morals and religion. (Darwin 1874:224)

With these words Darwin concludes the chapter in The Descent of Man that is devoted to the development of the human intellectual and moral faculties. Darwin here seems to be replying directly to those of his contemporaries who were still clinging, for religious or other reasons, to some version of the Golden Age story. But both Darwin and his opponents were equally committed to the doctrine of the primitive mind. The main difference between them was that Darwin favoured a gradualist account, whereas his conservative critics saw it as a matter not of gradation but of difference in kind.
Darwin’s conception of human progress assumes it to be incontrovertible both (1) that ‘all civilised nations are the descendants of barbarians’, as shown by ‘clear traces of their former low condition in still-existing customs, beliefs, language, etc.’, and (2) that ‘savages are independently able to raise themselves a few steps in the scale of civilisation, and have actually thus risen’ (Darwin 1874:221). However, he has little to say about the original or primordial mentality characteristic of savage society.

Darwin’s French contemporary Auguste Comte, on the other hand, had no hesitation in postulating and describing such an ‘état primordial’. He distinguished between three successive phases in the evolution of human thinking. The earliest phase, which he labelled ‘theological’, and described as ‘la vraie situation initiale de notre intelligence’, was assigned to a time when the human mind was not yet up to tackling the simplest scientific problems (‘l’esprit humain est encore au-dessous des plus simples problèmes scientifiques’; Comte 1844:56).

By contrast, Darwin’s continuum of mental advancement stretches back in unbroken succession across races and species. He goes as far as to assert that ‘the difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind’ (Darwin 1874:193). And, more specifically:

the mental powers of the higher animals, which are the same in kind with those of man, though so different in degree, are capable of advancement. Thus the interval between the mental powers of one of the higher apes and of a fish, or between those of an ant and scale-insect, is immense; yet their development does not offer any special difficulty. (Darwin 1874:931)

Seen from this perspective, the primitive mind stands on the intellectual threshold where some primates develop a specifically human view of their own existence and capacities.

MAX MÜLLER AND COMPARATIVE MYTHOLOGY

In the mid-19th century, controversy about the primitive mind had been stimulated by Max Müller’s inauguration of the study of comparative mythology. Müller’s notorious thesis that the origin of myth lies in a ‘disease of language’ was much criticized by anthropologists of his own and later generations. The locus classicus is Müller’s long essay Comparative Mythology (1856), later reprinted in volume 2 of his Chips from a German Workshop.

Müller begins by quoting a well-known passage from the beginning of Plato’s Phaedrus, where Socrates is asked if he believes the story that Boreas carried off Orithuia from the banks of the Ilissus (where Socrates and Phaedrus are
conversing). In response, Socrates observes that it would be easy enough to give a rational explanation of the story, as some do, by supposing that the girl died as the result of being blown down over the rocks by the North Wind (Boreas). But he, Socrates, has no interest in inventing such explanations. To give a rational account of every such story in Greek tradition would require not only much ingenuity but too much time.

Müller’s commentary on this passage begins by taking the eminent Classicist Grote to task for using it as evidence to represent Socrates as a modern thinker, conscious of ‘the uselessness of digging for a supposed basis of truth’ in the Greek myths. On the contrary, claims Müller, Socrates’ view of the matter was totally different from ours. Socrates had no conception of ‘Universal History’.

Where the Greek saw barbarians, we see brethren; where the Greek saw heroes and demi-gods, we see our parents and ancestors; where the Greek saw nations (ethne), we see mankind, toiling and suffering, separated by oceans, divided by language, and severed by national enmity,—yet evermore tending, under a divine control, towards the fulfilment of that inscrutable purpose for which the world was created, and man placed in it, bearing the image of God. (Müller 1856:5–6)

Müller appears to be quite oblivious to the historical irony in all this. Solemnly he deploys the pious Victorian godspeak of his own generation in order to comment on Socrates’ firm refusal to engage with the popular Greek godspeak of the fifth century BC. However, he believes that the study of mythology is worth while because of the insight it gives us into what he calls a particular ‘phase of the human mind’. It is this primitive way of thinking that ‘gave birth to the extraordinary stories of gods and heroes,—of gorgons and chimaeras, of things that no human eye had ever seen, and that no human mind in a healthy state could ever have conceived’ (Müller 1856:55). The phrase ‘no human mind in a healthy state’ is particularly worthy of note.

According to Herodotus, the Athenians believed that Boreas was their ‘son-in-law’ since he had an Attic woman (i.e. Orithuia) as his consort. They offered sacrifices to Boreas and Orithuia in order to enlist the assistance of the north wind to destroy the Persian fleet near Cape Sepias. Herodotus himself is no less circumspect than Socrates regarding the authenticity of the explanation.

I cannot say if this was really the reason why the fleet was caught at anchor by the north-easter, but the Athenians are quite positive about it: Boreas, they maintain, had helped them before, and it was Boreas who was responsible for what occurred on this occasion too. On their return home they built him a shrine by the river Ilissus. (Herodotus 7.189)
The problem, both for Herodotus and Socrates, is that this belief is not that of barbarians, who lack logos, but of Greeks, who have it.

FRAZER ON MAGIC, RELIGION AND SCIENCE

An original version of the doctrine of the primitive mind was put forward in the works of Sir James Frazer. One of Frazer’s notable contributions to anthropology was his tripartite distinction between ‘magic’, ‘religion’ and ‘science’. Frazer tells a story of human thinking in which, both chronologically and ontologically, the initial stage is magic, replaced eventually by religion, which is in turn replaced by science. Science may itself, according to Frazer, be superseded in the future by something more advanced, about which we can only speculate. But a conspicuous feature of this evolution, as Frazer tells it, is the way in which magic (as opposed to religion) resembles science.

Magic, says Frazer, with the confident twenty-twenty hindsight that pervades his writings, is ‘false science’ (Frazer 1922:11). It divides naturally into two branches, according to the ‘principles of thought’ on which it is based. These branches are ‘homoeopathic’ and ‘contagious’ magic. The former is based on the ‘Law of Similarity’ and the latter on the ‘Law of Contact or Contagion’. According to the Law of Similarity ‘like produces like’. According to the Law of Contact or Contagion ‘things which have once been in contact with each other continue to act on each other at a distance after the physical contact has been severed’ (Frazer 1922:11). It is the Law of Similarity that underlies the magician’s belief ‘that he can produce any effect he desires merely by imitating it’. It is the Law of Contact or Contagion that underlies his belief that what is done to an object will affect the person to whom it belonged, or the body of which it was once part. But the primitive magician, according to Frazer, does not reflect on his magical practice or the principles underlying it: ‘he reasons just as he digests his food in complete ignorance of the intellectual and physiological processes which are essential to the one operation and to the other’ (Frazer 1922:11). The two ‘laws’ on which magic is founded are, Frazer tells us, ‘two different misapplications of the association of ideas’. The mistake in one case consists in ‘assuming that things which resemble each other are the same’, while in the other case the mistake consists in ‘assuming that things which have once been in contact with each other are always in contact’.

The essential difference between magic and religion, according to Frazer, hinges on the fact that religion involves ‘propitiation or conciliation of powers superior to man which are believed to direct and control the course of nature and of human life’ (Frazer 1922:50). The magician, on the other hand, engages in actions believed in themselves to be capable of bringing about the desired result: there is no appeal to a divinity for assistance. Thus, for example, the typical religious believer will pray or sacrifice to a god or gods for rain; whereas the magician will attempt to influence the
weather ‘homoeopathically’ by some such ritual as pouring a bowl of water on the ground. Frazer concedes that magic and religion have often become intertwined in particular cultures; but their basic principles, he contends, are opposed. Magic is rooted in belief in an unchanging order of nature which, if understood, can be directed towards the achievement of human ends. Religion, on the other hand, believes in supernatural beings controlling the world and thus able to intervene in the course of events. Religion, argues Frazer, is the product of superior intellect:

Obviously the conception of personal agents is more complex than a simple recognition of the similarity or contiguity of ideas; and a theory which assumes that the course of nature is determined by conscious agents is more abstruse and recondite, and requires for its apprehension a far higher degree of intelligence and reflection, than the view that things succeed each other simply by reason of their contiguity or resemblance. (Frazer 1922:54)

Nevertheless, magic bears a greater affinity to science than does religion. For science depends on ‘postulating explicitly, what in magic had only been implicitly assumed, to wit, an inflexible regularity in the order of natural events [ … ]’ (Frazer 1922:712).

Is scientific thinking, then, no more than a modern extension of primitive magic? The idea would have been quite unacceptable to most of Frazer’s readers, and in the final pages of The Golden Bough he takes great care to distance himself from any such conclusion.

But while science has this much in common with magic that both rest on a faith in order as the underlying principle of all things, readers of this work will hardly need to be reminded that the order presupposed by magic differs widely from that which forms the basis of science. The difference flows naturally from the different modes in which the two orders have been reached. For whereas the order on which magic reckons is merely an extension, by false analogy, of the order in which ideas present themselves to our minds, the order laid down by science is derived from patient and exact observation of the phenomena themselves. The abundance, the solidity, and the splendour of the results already achieved by science are well fitted to inspire us with a cheerful confidence in the soundness of its method. Here at last, after groping about in the dark for countless ages, man has hit upon a clue to the labyrinth, a golden key that opens many locks in the treasury of nature. It is probably not too much to say that the hope of progress—moral and intellectual as well as material—in the future is bound up with the fortunes of science, and that every obstacle placed in the way of scientific discovery is a wrong to humanity. (Frazer 1922:712)
Frazer was sometimes harshly judged by a later generation, who nevertheless had to acknowledge him as a ‘pioneer’. Thus we find historians of the subject deploying a curious combination of ritual respect and ritual condemnation. Lienhardt praises The Golden Bough as being ‘of value as an encyclopedia and bibliography alone’, but its author is referred to dismissively as an ‘armchair’ anthropologist (Lienhardt 1966:26), and his method likened to that of Sherlock Holmes. He ‘thought he could understand foreign beliefs quite out of their real contexts simply by an effort of introspection’ (Lienhardt 1966:27). Evans-Pritchard likewise describes The Golden Bough as ‘a work of immense industry and erudition’, ‘an essential source-book for all students of human thought’, but pronounces Frazer’s view of the relationship between science and magic ‘unintelligible’ and declares ex cathedra that savages ‘have no conception of nature as a system organized by laws’ (Evans-Pritchard 1981:144).

Some of the criticism to which Frazer was subjected was unfair and even perverse. Wittgenstein, for example, writing in the early 1930s, accused him of ethnocentricity and thought that Frazer was ‘much more savage than most of his savages’.

What narrowness of spiritual life we find in Frazer! And as a result: how impossible for him to understand a different way of life from the English one of his time! Frazer cannot imagine a priest who is not basically an English parson of our own times with all his stupidity and feebleness. (Wittgenstein 1979:5e)

More specifically, Wittgenstein claimed that Frazer misconstrued magical and religious practices as prescientific errors.

Frazer’s account of the magical and religious notions of men is unsatisfactory: it makes these notions appear as mistakes [Irrtümer]. [ … ] Even the idea of trying to explain the practice—say the killing of the priest-king—seems to me wrongheaded. All that Frazer does is to make this practice plausible to people who think as he does. (Wittgenstein 1979:1e)

Frazer might well have replied that making something seem plausible to those who think as one does oneself is the bedrock of explanation in all scientific inquiries. Wittgenstein evidently thought he knew a great deal about the workings of the primitive mind, for he asserts confidently that the ‘characteristic feature of primitive man’ is ‘that he does not act from opinions he holds about things (as Frazer thinks)’ (Wittgenstein 1979:12e). According to Wittgenstein, in the case of ‘symbolic’ practices like killing a priest-king ‘we can only describe and say, human life is like that’ (Wittgenstein 1979:3e). Neither Frazer nor any other anthropologist could be expected to rest content with that. If anthropology had followed
Wittgenstein’s armchair advice, it could hardly have passed beyond the anecdotal stage we find in Herodotus.

It has been claimed that The Golden Bough is ‘obviously a conscious attempt to discredit religion—especially Christianity—by tracing its line of descent to primitive superstition’ (Jarvie and Agassi 1970:177fn). A less biased view of Frazer’s work might be this. He thought of intellectual progress in terms of a long-drawn-out struggle between two conflicting types of rationality: a faith in the rationality of cause and effect versus a faith in the rationality of design. One of these types seeks explanations in natural connexions between different events, while the other looks beyond natural connexions to the mind of a Designer. For Frazer the superiority of science over magic resides in a more sophisticated and extensive rationalization of causal relations.

On the surface, the issue of literacy does not arise in the clash between Frazer and his critics over rationality. But beneath the surface it is not difficult to detect a whole raft of scriptist assumptions on both sides. In Frazer’s case, there is the tacit assumption that ‘science’ is the accepted world-view of literate communities such as his own. Then there is the assumption that the debate can only be settled by appeal to a more careful definition or redefinition of ‘rationality’. Thus, for instance, Jarvie and Agassi claim that ‘by definition’ a rational action is one based on, among other things, ‘the actor’s goals or aims, his present knowledge and beliefs’ (Jarvie and Agassi 1970:179). But where this alleged definition comes from they do not explain, except to quote the authority of one American philosopher of science. The philosopher in question, it goes without saying, is making exactly the same linguistic assumptions as they do themselves. They also claim to be able to distinguish a ‘strong’ and a ‘weak’ sense of ‘rationality’ (Jarvie and Agassi 1970:173). But here they are confusing ‘real definition’ (i.e. definition of things) with ‘lexical definition’ (i.e. definition of words). This is a familiar scriptist confusion in the Western tradition, and dates back to Aristotle (Robinson 1954:149–92). In its modern versions, it is compounded by the existence of dictionaries which purport to be able to ‘define’ all the words they list.

What emerges from all this is that the debate about the primitive mind became less concerned with elucidating the characteristics of the primitive mind than with defending the use in anthropological publications of such terms as rational and logical. That kind of self-serving literal-mindedness was thought of as adopting a ‘scientific’ approach. In short, what ostensibly began as an inquiry into the primitive mind is not settled by bringing forward new evidence but turns into a typically scriptist debate about what words mean.

It would be both tedious and painful to document all the ways in which Western anthropologists are still making the same mistakes about words as Aristotle made. One example here will have to serve for many. According to Steven Lukes, there are universal criteria of rationality, and these are
not in any way context-dependent or confined to particular cultures; or, as he puts it uncompromisingly, there are ‘criteria of rationality that simply are criteria of rationality’ (Lukes 1970:208). Aristotle would have put it better, but the assumption manifestly is that the word rationality already identifies an unquestionable feature of extra-linguistic reality; namely, the feature which we call ‘rationality’. Men supply the words: reality supplies the meanings.

The whole debate about rationality in Frazerian terms was eventually undermined by the work of Freud. Totem and Taboo (1913) was largely ignored by anthropologists, who on the whole did not welcome the idea that a psychologist sitting in an armchair in Vienna could solve the problems that their own careful fieldwork in remote lands had brought to light. Totem and Taboo can be read as propounding the thesis that apparently irrational customs among primitive peoples have a hidden rationality which is revealed when the unconscious psychological motivations underlying them are properly understood. Freud also put paid to the idea that the modern mind was governed by logical principles anyway. All this suggested that arguing about the primitive mind was somewhat pointless, since all minds are ‘primitive’, i.e. in thrall to fears and associations that cannot be dispelled or overridden by reason. In particular, Freud inserted a psychoanalytic wedge between rationality, the giving of reasons, and truth. Reasons are often given in order to hide the truth and conceal the underlying motivation of the agents. So what appears to be rational cannot always be taken at face value.

Unfortunately, psychoanalysis did not arrive on the Western intellectual stage in time to pre-empt an even more bitter row over a contention that, in one sense, out-Frazered Frazer. This was the contention that not only was there such a thing as the primitive mind, but that ‘civilized anthropologists could never fully grasp its workings’ (Malefijt 1974:191), precisely because they had civilized minds. One can see why that aroused passions more deeply than Frazer’s thesis. If accepted, it would have amounted to writing a death certificate for the whole discipline of social anthropology.
3 Logicality and Prelogicality

LÉVY-BRUHL AND THE ‘PRELOGICAL’ MIND

The version of the doctrine of the primitive mind most severely and repeatedly condemned by the anthropological Establishment is associated with the French scholar Lucien Lévy-Bruhl. In his book Les fonctions mentales dans les sociétés inférieures (1910), Lévy-Bruhl ascribed to the members of these ‘lower’ societies what he called, to the outrage of many readers, a ‘prelogical’ mentality. This idea was more fully worked out in La mentalité primitive (1922). Eventually, at the end of his life, Lévy-Bruhl seems to have recanted; but by the time he did so his thesis of prelogicality had achieved lasting notoriety. According to Lévy-Bruhl in his prime, primitive mentality is

essentially mystical (mystique). This fundamental characteristic pervades its whole mode of thinking, feeling and acting. Hence the extreme difficulty in understanding it and following its operations. (Lévy-Bruhl 1922:503)

The difficulty Lévy-Bruhl is referring to is experienced, it hardly needs saying, by the European observer, not by les primitifs. For them, the mystical view of the world is natural.

C. Scott Littleton, in ‘Lucien Lévy-Bruhl and the concept of cognitive relativity’ (1985), observes that the most vocal opposition to Lévy-Bruhl’s contentions came from ‘cognitive relativists’ (where cognitive relativity is defined as ‘the notion that the logic we bring to bear in our descriptions of the world is not universal, but rather a function of our immediate techno-environmental circumstances and our particular linguistic and ideological heritage, and that no one logic is superior to any other logic’). Among the most vehement of Lévy-Bruhl’s critics were followers of Franz Boas, the most eminent American anthropologist of his generation. Boas himself was committed to the view that there is ‘no fundamental difference in the ways of thinking of primitive and civilized man’ (Boas 1938:17). This
might more accurately be called ‘the doctrine of mental uniformity’, rather than ‘cognitive relativity’. Boas seems to have hesitated over the status of this uniformitarian doctrine. Sometimes he appears to treat it as stating an observable fact and sometimes as a theoretical premise. In his book Primitive Art, he declares that ‘the fundamental sameness of mental processes in all races and in all cultural forms of the present day’ is one of the two basic ‘principles’ on which his investigations are founded (Boas 1927:1). This suggests that there is no question of regarding it as open to empirical confirmation or disconfirmation. Confusingly, Boas continued to refer to ‘the mind of primitive man’ while refusing to recognize it as different from what he called ‘our’ mind, i.e. the mind of civilized man. He asserts:

To the mind of primitive man, only his own associations can be rational. Ours must appear to him just as heterogeneous as his own to us, because the bond between the phenomena of the world, as it appears after the emotional associations have been eliminated by increasing knowledge, does not exist for him, while we can no longer feel the subjective associations that govern his mind. (Boas 1938:223)

Nor does he hesitate to make quite sweeping generalizations about primitive man’s typical beliefs and behaviour; for instance, that for primitive man ‘human life has little value, and is sacrificed on the slightest provocation’ (Boas 1938:172), and that primitive man ‘is not in the habit of discussing abstract ideas’ (Boas 1938:196). Thus it seems, paradoxically, that Boas had no difficulty in identifying features which followers of Lévy-Bruhl might have regarded as supporting the doctrine of the primitive mind.

As Littleton shrewdly remarks, the relativists failed to see that Lévy-Bruhl could be read as being no less committed to cognitive relativity than they were themselves. For Lévy-Bruhl was in no doubt that the ‘prelogical’ mentality of primitive societies served their needs just as well as a ‘logical’ mentality served modern needs. It ‘made sense’ of the world from which it had emerged.

Lévy-Bruhl was sometimes accused of self-contradiction. According to Alasdair MacIntyre, the problem for writers like Lévy-Bruhl is that ‘they have to treat their own conclusions as palpably false in order to arrive at them’ (MacIntyre 1970:65). The example MacIntyre takes is the Australian aboriginal belief that the sun is a white cockatoo. Lévy-Bruhl ‘concluded that he was faced with a total indifference to inconsistency and contradiction’ (MacIntyre 1970:64). On this basis, argues MacIntyre, what the aboriginals say can be described, but not understood as language: ‘we cannot grasp its concepts for they cannot, on this view, be conceptual’. And yet ‘unless Lévy-Bruhl had grasped that “white cockatoo” and “sun” were being used with apparently normal referential intentions, he could not have
diagnosed the oddity of asserting that the sun is a white cockatoo’ (MacIntyre 1970:65).

But this attempt to catch Lévy-Bruhl out is clumsy and unconvincing. In the first place, Lévy-Bruhl never denied that the primitive mind had or could express concepts, as the following quotations show. ‘Are we to take it for granted, then, that this mentality, even in the very lowest social aggregates, makes no use of concepts whatsoever? Certainly not’ (Lévy-Bruhl 1910:116). ‘The primitive mind is well acquainted with concepts, but they are not at all like ours’ (Lévy-Bruhl 1910:168). How do they differ? Precisely in that the primitive mind sees no contradiction, for example, in something being both the sun and a white cockatoo. That is to say, for the aboriginal the concepts ‘sun’ and ‘white cockatoo’ are not mutually exclusive. Whereas, for the modern or civilized mind, saying that the sun is a white cockatoo is automatically a nonsense.

It is clear in retrospect that Lévy-Bruhl’s critics did not always take the trouble to examine carefully what Lévy-Bruhl’s position actually was. Even distinguished scholars fall down here. Ernst Cassirer, in his Essay on Man, goes to some trouble to distance himself from Lévy-Bruhl, while nevertheless expounding a suspiciously similar case about ‘primitive’ thought. According to Cassirer, the ‘French sociological school’ had advanced two basic theses, but proved only one of them. The proven thesis was ‘the fundamental social character of myth’. The thesis of ‘prelogical thought’, on the other hand, had no such status.

But that all primitive mentality necessarily is prelogical or mystical seems to be in contradiction with our anthropological and ethnological evidence. We find many spheres of primitive life and culture that show the well-known features of our own cultural life. As long as we assume an absolute heterogeneity between our own logic and that of the primitive mind, as long as we think them specifically different from and radically opposed to each other, we can scarcely account for this fact. (Cassirer 1944:80)

Cassirer attributes this mistaken view to the ‘French sociological school’. But Lévy-Bruhl, whom he had just quoted in the same paragraph, and clearly regarded as a member of that school, held no such view. In fact, Lévy-Bruhl explicitly disavowed it, declaring it to be ‘very improbable’ that there have ever been ‘groups of human or pre-human beings whose collective representations have not yet been subject to the laws of logic’ (Lévy-Bruhl 1910:78). What he means by ‘prelogical’ (or ‘mystical’) is not ‘lacking in logic’: rather, he uses that term to indicate recognition of a different order of interconnexions that can—and will often—take precedence over the logical. This different order is seen as springing from what Lévy-Bruhl calls a ‘law of participation’. By virtue of this law,
objects, beings, phenomena can be, though in a way incomprehensible to us, both themselves and something other than themselves. In a fashion which is no less incomprehensible, they give forth and they receive mystic powers, virtues, qualities, influences, which make themselves felt outside, without ceasing to remain where they are. (Lévy-Bruhl 1910:76–7)

Why a ‘law of participation’ rather than just belief in magic and the supernatural? Lévy-Bruhl explains as follows.

Why, for example, should a picture or portrait be to the primitive mind something quite different from what it is to ours? [ … ] Evidently from the fact that every picture, every reproduction “participates” in the nature, properties, life of that of which it is the image. This participation is not to be understood as a share—as if the portrait, for example, involved a fraction of the whole of the properties or the life which the model possesses. Primitive mentality sees no difficulty in the belief that such life and properties exist in the original and in its reproduction at one and the same time. (Lévy-Bruhl 1910:79–80. Italics in the original)

This is why, according to Lévy-Bruhl, North American Indians are reluctant to have their portraits taken by a white man. The reason is that

by virtue of an inevitable participation, anything that happens to their pictures, delivered over to strange hands, will be felt by them after their death. And why is the tribe so uneasy at the idea that the repose of their chiefs should be thus disturbed? Evidently [ … ] it is because the welfare of the tribe, its prosperity, its very existence depend, by virtue of this same participation, upon the condition of the chiefs, whether living or dead. (Lévy-Bruhl 1910:80)

We should note that in this example nothing depends on whether the Indians give that explanation for their reluctance. The point is of some importance. For Lévy-Bruhl was often criticized by those who held that primitive peoples operated with the same logic as civilized peoples, but simply started from different assumptions. Cassirer is again among the guilty. He writes:

What we, from our own point of view, may call irrational, prelogical, mystical, are the premises from which mythical or religious interpretation starts, but not the mode of interpretation. If we accept these premises and if we understand them aright—if we see them in the same
light that primitive man does—the inferences drawn from them cease to appear illogical or antilogical. (Cassirer 1944:80–81)

What this fails to reckon with is that it would be perfectly possible to hold that Lévy-Bruhl’s ‘law of participation’ operates in particular cases above (or below) the level of overt logical justification. What Lévy-Bruhl has done in the instance cited above is rationalize its operations for the benefit of his modern readers. Moreover, this is done in such a way that the conclusion ‘Having your portrait painted by a white man is to be avoided’ could easily be exhibited as following from a series of Aristotelian syllogisms, given the appropriate premises.
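One such reconstruction, offered here purely for illustration, might run as follows. The premises are the present writer’s paraphrases of the participatory beliefs reported above, cast loosely in premise-conclusion form (LaTeX notation) rather than as strict categorical syllogisms; nothing in Lévy-Bruhl’s text vouches for this exact chain.

% A hypothetical chain of inferences behind the reluctance reported
% by Lévy-Bruhl. (LaTeX; amsmath assumed for \text, amssymb for
% \therefore.) The premises are reconstructions, not anything
% Lévy-Bruhl's informants are recorded as asserting.
\[
\frac{\text{A portrait participates in the life of its subject}
      \qquad
      \text{A portrait taken by a white man passes into strange hands}}
     {\therefore\ \text{Whatever then happens to the portrait will be felt by its subject}}
\]
\[
\frac{\text{Whatever then happens to the portrait will be felt by its subject}
      \qquad
      \text{What exposes one to uncontrollable harm is to be avoided}}
     {\therefore\ \text{Having your portrait painted by a white man is to be avoided}}
\]

Granted the participatory premises, each step is as ‘logical’ as anything in a textbook.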
Nevertheless, if Indians are reluctant to have their portraits painted by a white man, it may not be for any reason they could articulate in this explicit manner. Perhaps they just feel it intuitively to be ‘wrong’ or ‘unsafe’ or ‘dangerous’. In that case, what Lévy-Bruhl has done is to theorize on their behalf. And that is quite different from analyzing the reasons the Indians themselves give, if any. It is unfortunate that Lévy-Bruhl does not always distinguish between these two cases; and his failure to do so is one obvious area of potential confusion in his position.

What is clear, nevertheless, is that Lévy-Bruhl never held that primitive peoples were incapable of logical thought, as judged by ‘modern’ standards. He goes out of his way to make this evident. The member of a primitive society, in most circumstances, ‘will usually feel, argue and act as we should expect him to do.’

The inferences he draws will be just those which would seem reasonable to us in like circumstances. If he has brought down two birds, for instance, and only picks up one, he will ask himself what has become of the other, and will look for it. If rain overtakes and inconveniences him, he will seek shelter. If he encounters a wild beast, he will strive his utmost to escape, and so forth. But though on occasions of this sort primitives may reason as we do, though they follow a course similar to the one we should take (which in the more simple cases, the most intelligent among the animals would also do), it does not follow that their mental activity is always subject to the same laws as ours. (Lévy-Bruhl 1910:79)

Cassirer, notwithstanding his objections to Lévy-Bruhl’s ‘unproven’ thesis, advances a very similar view:

The real substratum of myth is not a substratum of thought but of feeling. Myth and primitive religion are by no means entirely incoherent, they are not bereft of sense or reason. But their coherence depends much more upon unity of feeling than upon logical rules. This unity is one of the strongest and most profound impulses of primitive thought. (Cassirer 1944:81)

So there is, after all, such a thing as ‘primitive thought’! And the more fully Cassirer describes it, the more it begins to sound as if it conformed to Lévy-Bruhl’s ‘law of participation’.

If scientific thought wishes to describe and explain reality it is bound to use its general method, which is that of classification and systematization. Life is divided into separate provinces that are sharply distinguished from each other. The boundaries between the kingdoms of plants, of animals, of man—the differences between species, families, genera—are fundamental and ineffaceable. But the primitive mind ignores and rejects them all. Its view of life is a synthetic, not an analytical one. Life is not divided into classes and subclasses. It is felt as an unbroken continuous whole which does not admit of any clean-cut and trenchant distinctions. (Cassirer 1944:81)

There is nothing in the above passage that Lévy-Bruhl would have objected to. It could almost stand as his own account of ‘participation’ and the participatory perspective on life. And when Cassirer says that the primitive view of life is a ‘synthetic’ one, he echoes exactly the term that Lévy-Bruhl uses (Lévy-Bruhl 1910:108).

The Russian psychologist Lev Vygotsky put an interesting gloss on Lévy-Bruhl’s findings when he identified what Lévy-Bruhl called ‘participation’ with a typical feature of thought that Storch had noted in the insane and Piaget in children. Vygotsky himself saw it as a feature of ‘complex thinking’, in which the mind forms ‘pseudo-concepts’.

Since children of a certain age think in pseudo-concepts, and words designate to them complexes of concrete objects, their thinking must result in participation, i.e., in bonds unacceptable to adult logic. A particular thing may be included in different complexes on the strength of its different concrete attributes and consequently may have several names; which one is used depends on the complex activated at the time. In our experiments, we frequently observed instances of this kind of participation where an object was included simultaneously in two or more complexes. Far from being an exception, participation is characteristic of complex thinking. (Vygotsky 1962:71–2)

According to Vygotsky, ‘primitive people also think in complexes’. The result is that

the word in their languages does not function as the carrier of a concept but as a “family name” for groups of concrete objects belonging together, not logically, but factually. (Vygotsky 1962:72)
From this he concluded that Lévy-Bruhl had misinterpreted such claims as are made by primitive peoples when they identify themselves with their totem animal, as e.g. in the case of the Bororo of Brazil, who—as reported by von den Steinen, who found this incredible—insist that they are red parrots.

We therefore believe that Lévy-Bruhl’s way of interpreting participation is incorrect. He approaches the Bororo statements about being red parrots from the point of view of our own logic when he assumes that to the primitive mind, too, such an assertion means identity of beings. But since words to the Bororo designate groups of objects, not concepts, their assertion has a different meaning: The word for parrot is the word for a complex that includes parrots and themselves. It does not imply identity any more than a family name shared by two related individuals implies that they are one and the same person. (Vygotsky 1962:72)

But this is a feeble attempt to beat Lévy-Bruhl with the wrong linguistic stick. Vygotsky’s ‘explanation’ that the Bororo happen to have a word for a class that includes both themselves and red parrots does not hold water for a moment. Had that been so, there would have been plenty of statements that the Bororo accepted as being true of red parrots but not of the Bororo, and vice versa (e.g. ‘Red parrots have wings, but we don’t’); it would have been easy enough for von den Steinen to ascertain this and thus allay his incredulity. Vygotsky simply ignores other evidence about the Bororo that Lévy-Bruhl cites in favour of the participation hypothesis: their belief that pictures have the same harmful properties as the objects depicted, and that when a child is ill it is the father who should swallow the medicine. None of this can be dismissed by imputing to Lévy-Bruhl a naive confusion of class-membership statements with class-identity statements.
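The logical point at issue is worth setting out explicitly (the notation below is a modern gloss added purely for illustration; it is not Vygotsky’s or Lévy-Bruhl’s). On Vygotsky’s reading, the Bororo word names a complex C of which both the Bororo (B) and the red parrots (R) are subgroups. But shared membership of a complex entails nothing about identity:

\[ B \subseteq C \;\wedge\; R \subseteq C \quad\not\Rightarrow\quad B = R. \]

So on that reading there should be some distinguishing predicate W (‘has wings’, say) for which the Bororo would assent to \( \forall x\,(x \in R \rightarrow W(x)) \) while denying \( \forall x\,(x \in B \rightarrow W(x)) \): precisely the kind of discrimination that, on the argument above, von den Steinen could easily have elicited.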
Firth was another eminent academic who misrepresented Lévy-Bruhl. According to Firth,

the kind of non-rational attitude of man to the external world which Lévy-Bruhl characterised as ‘mystic participation’ with nature, is not as he thought a hallmark of the mental functioning of ‘primitive’ peoples but is a feature of some kinds of thought among people in any society anywhere. (Firth 1975:46)

But, pace Firth, the coexistence in society of prelogical and logical thought is exactly what Lévy-Bruhl himself maintained (Lévy-Bruhl 1910:386). Firth further claimed that Lévy-Bruhl had ‘overlooked or whittled down’ the ‘mass of evidence’ showing that in practical affairs non-Western people use exactly the same logic as Europeans. Thus in making a canoe a Polynesian will choose a light wood for the outrigger float and streamline the hull ‘just as a European boat-builder would if he built such canoes’. This, according to Firth, is ‘a striking illustration of their rationality in technical affairs’ (Firth 1975:131). Again, there is no case for Lévy-Bruhl to answer; for he goes out of his way to make the point that in practical matters primitive man ‘may reason as we do’ (Lévy-Bruhl 1910:79).

The conclusion to which these examples point is at first sight surprising. It suggests that the doctrine of the primitive mind that was so often attacked in the early and mid-twentieth century was no more than an Aunt Sally set up by those who had misread Lévy-Bruhl. Once these misguided attacks are discounted, we see that the distinction he had tried to draw remains standing, and the difficulties he faced in drawing it carefully and convincingly are still with us today. Furthermore, it is necessary to recognize not only that Lévy-Bruhl did not imagine ‘prelogical’ minds to be incapable of logic, but—no less important—that he saw ‘modern’ minds as being just as capable of ‘prelogical’ thought. This is stated unequivocally on the final page of his book:

Even among peoples like ourselves, ideas and relations between ideas governed by the law of participation are far from having disappeared. They exist, more or less independently, more or less impaired, but yet ineradicable, side by side with those subject to the laws of reasoning. Understanding, properly so called, tends towards logical unity and proclaims its necessity; but as a matter of fact our mental activity is both rational and irrational. The prelogical and the mystic are co-existent with the logical. (Lévy-Bruhl 1910:386)

According to Evans-Pritchard, on the basis of conversations with Lévy-Bruhl, Christianity and Islam were for Lévy-Bruhl examples of prelogical mentality in the modern world; but he did not spell this out in his published work in order to avoid giving offence (Evans-Pritchard 1981:130). It might have been better if he had. That at least would have given his readers a better idea of what he was driving at, and exemplified a distinction that many of them could relate to their own experience.

LURIA AND THE SYLLOGISM

Paradoxically, Lévy-Bruhl and his relativist critics did not seriously disagree about what ‘logic’ was. Both sides meant by ‘logic’ the kind of thinking associated with the traditional teaching of that subject in Western education. Logic, one of the seven liberal arts, had been a basic component of the European university curriculum for centuries. No relativist ever claimed that there were viable alternatives to the syllogism favoured
in other parts of the world (that is, ways of drawing true conclusions from true premises, but via some different route). The argument was about the extent to which, in non-Western societies studied by anthropologists, ‘logical’ thinking was in practice overridden by, subordinated to, or at the mercy of, other belief-systems.

The specific empirical question of whether ‘backward’ or ‘uneducated’ people could actually grasp traditional syllogistic reasoning as such was not addressed until the early 1930s, when a colleague of Vygotsky’s, Alexander Luria, conducted research in Uzbekistan and Kirghizia in Central Asia (Luria 1979:58–80). His subjects were drawn from the inhabitants of rural ‘hamlets and nomad camps’. He presented them with incomplete syllogisms of the following kind:

In the far north, where there is snow, all bears are white.
Novaya Zemlya is in the far north.
What colour are the bears there?

Not unexpectedly, those among Luria’s subjects who were illiterate did not fare well on these tests, in spite of the painstaking efforts made by the team of psychologists to ensure that those questioned understood what was being asked. Some, however, responded by saying, for instance, that they had never been in the north and never seen bears. Luria describes this reaction as ‘refusing to accept the major premiss’. But it is far more than that. The response given is perfectly justifiable. Seen from the subject’s point of view, the situation is the following. Someone you do not know comes along and makes some unverified statements about bears in the far north. You are then asked ‘What colour are the bears there?’ Whether or not you can see what the interrogator ‘wants’ you to say, or is trying to trick you into saying, the only sensible answer is that you don’t know if you haven’t actually been there. The sole ‘evidence’ you have been presented with consists of the interrogator’s unconfirmed say-so. Why trust that? Arguably, the illiterate peasant is operating with a logic as robust as, if not superior to, that of the university-educated researcher, and, furthermore, it could easily be presented if need be in perfect syllogistic form (even though the peasant will obviously lack the academic training necessary to do that himself).
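For what it is worth, both inferences can be spelt out in textbook fashion (the formalization below is added purely for illustration; neither Luria nor the peasant, of course, put matters this way). Luria’s intended inference is a straightforward instance of the syllogism:

\[
\begin{aligned}
&\forall x\,\bigl(\mathrm{Bear}(x) \wedge \mathrm{FarNorth}(x) \rightarrow \mathrm{White}(x)\bigr)\\
&\forall x\,\bigl(\mathrm{In}(x, \mathrm{NovayaZemlya}) \rightarrow \mathrm{FarNorth}(x)\bigr)\\
&\therefore\quad \forall x\,\bigl(\mathrm{Bear}(x) \wedge \mathrm{In}(x, \mathrm{NovayaZemlya}) \rightarrow \mathrm{White}(x)\bigr)
\end{aligned}
\]

The peasant’s rejoinder can be cast in equally orthodox form: only what rests on an eyewitness report warrants a claim about matters one has not seen; the interrogator’s premises rest on no eyewitness report; therefore they warrant no claim. What the peasant rejects is not the validity of the first inference but the warrant for its premises.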
Luria pressed the unfortunate informant concerning his unsatisfactory response about the bears: ‘But what do my words imply?’. The informant’s reply is crushing, and shows an adroitness in debate that one imagines even Socrates would have admired: ‘Your words can be answered only by someone who was there, and if a person wasn’t there, he can’t say anything on the basis of your words.’

There are a number of points about this exchange worth noting. The psychologist led with his chin when presenting a question of inference as a question to be answered on the basis of ‘my words’. He should have asked: ‘But what do those words imply?’ A person’s words are only as reliable as the person is (i) trustworthy, and (ii) well informed. In the absence of any evidence on either count, an appeal to ‘my words’ is an appeal to my authority.

The next point is that the way Luria presses the question reveals something about his own conception of rationality which has not so far been disclosed in his account of the research programme. This is the assumption that logical relations are relations between forms of words; or, at the very least, between items of an undisclosed nature, for which forms of words can be treated as valid substitutes. This is not a logical but a metaphysical thesis, and it does not seem to occur to Luria that the problem with his psychological tests may be that his metaphysics of language does not match that of his subjects in the relevant respects.

Luria’s own interpretation of the results of classification tests included in the same programme is not in doubt. He thinks that:

Words for these people had an entirely different function from the function they have for educated people. They were used not to codify objects into conceptual schemes but to establish the practical interrelations among things. (Luria 1979:73)

Nor does this surprise him. For

the progression from one function of words to another represents nothing less than the transition from sensory to rational consciousness, a phenomenon that the classics of Marxism regard as one of the most important in human history. (Luria 1979:74)

Or, to put it another way, what underlies the whole of Luria’s research programme is the Marxist version of the doctrine of the primitive mind. The programme comes with a ready-made ideological explanation of the results incorporated in advance.

There could be no clearer illustration of the way the literate mind reasons about reasoning than the research reported in Luria’s Cognitive Development, a work still regarded by many as a landmark in modern psychology. What is immediately striking about Luria’s description of, and comments on, the responses of his ‘uneducated’ subjects is the investigator’s own obliviousness to the built-in metaphysics of his inquiry. Luria cannot see this. Solemnly he construes their (by his standards) inadequate—and sometimes recalcitrant—responses as demonstrating incompetence in ‘abstract’ thinking and categorization. If we try to look at the interviews from the point of view of the interviewee, it becomes evident that the person who is slow to recognize different orders of priority among relations between one thing and another, and among different ways of describing them, is Luria himself. His thinking is
so straitjacketed by the Aristotelian hierarchy of genus and species that he has to ‘explain away’ the reactions of his subjects by attributing to them a so-called mode of ‘situational thinking’. But it is Luria who cannot escape the confines of ‘situational thinking’, i.e. the artificial situation he himself has engineered to ‘test’ the mental agility of his victims. They in fact display much more imagination and flexibility of thought than the designer of their tests, who takes as given from the outset a psychological dichotomy between ‘theoretical operations’ and ‘practical operations’.

Thus when a sixty-year-old illiterate peasant explains very clearly to Luria why he does not regard a picture of a log as being ‘the odd one out’ in a set of four pictures (of which the other three are pictures of a hammer, a saw and a hatchet), Luria’s only comment is: ‘Replaces abstract classification with situational thinking’. Later in the same dialogue, when Luria points out that one word—‘tool’—covers the other three items but not the log, the peasant again explains why he regards that as irrelevant. Luria comments: ‘Rejects use of generalizing term’. Luria just cannot see that the peasant’s line of reasoning is perfectly sensible, that he is perfectly capable of ‘abstract classification’, but that his abstract classification in this instance is based on relevance to context (i.e. to making sense of the seemingly arbitrary group of items that the interviewer has suddenly presented him with). The peasant is also right to insist that the hammer, the saw and the hatchet are only classified as ‘tools’ insofar as they are regarded as instruments for performing operations on pieces of wood and similar materials and that is why the log of wood is an essential member of the group. (Otherwise, disregarding their function, they might be classified even more ‘abstractly’ just as ‘metal objects’.)
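The point can be made schematic (the sketch below, in Python, is purely illustrative; the category labels are invented for the purpose and are not drawn from Luria’s protocols). Both responses are rule-governed classifications of the same four items, differing only in the criterion applied:

    # Illustrative sketch: two equally 'abstract' classifications of the
    # same four items. The labels are invented examples, not Luria's.

    # Criterion 1: group by superordinate label, as Luria expected.
    taxonomic = {"hammer": "tool", "saw": "tool",
                 "hatchet": "tool", "log": "material"}

    # Criterion 2: group by role in a joint activity, as the peasant did.
    situational = {"hammer": "woodworking", "saw": "woodworking",
                   "hatchet": "woodworking", "log": "woodworking"}

    def odd_one_out(classification):
        """Return the items whose class no other item shares."""
        tally = {}
        for cls in classification.values():
            tally[cls] = tally.get(cls, 0) + 1
        return [item for item, cls in classification.items() if tally[cls] == 1]

    print(odd_one_out(taxonomic))    # ['log']  -- the answer Luria wanted
    print(odd_one_out(situational))  # []       -- the peasant's answer: none is odd

On either criterion the operation performed is the same abstract one; what varies is which relation between the items is allowed to count.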
Luria sums up his observations of this and similar tests with the bland conclusion that ‘we had no luck getting these subjects to perform the abstract act of classification’ (Luria 1976:59). An impartial inspection of the evidence reported shows that, on the contrary, the subjects were performing ‘abstract acts of classification’ all the time, but not giving Luria the classifications he wanted. Furthermore, their thinking involved ‘theoretical operations’ considerably more sophisticated than Luria’s, because they saw their task as having to work out the rationale behind the various groups of assorted pictures placed in front of them. Luria, on the other hand, was wanting them to ignore that and classify the pictures as if there were some pre-existing taxonomy in place, into which the objects depicted automatically fell. Quite understandably, Luria’s subjects rejected this approach.

The research Luria reports does reveal something about the difference between literacy and illiteracy, but almost the opposite of what the investigator thought it showed: not the inability of illiterates to think abstractly but the reluctance of literates to accept classification not based on a taxonomy already supplied in advance by words. Literacy produces a mind-set in which words are treated as context-free items with context-free meanings, thus providing a ready-made universal labelling-system for the world around us. Luria’s subjects, by contrast, saw objects as objects and not in the first instance as bearers of verbal labels (although they clearly knew perfectly well what these objects were called). Nowhere are Luria’s scriptist prejudices more clearly revealed than in his discussion of reasoning:

For the nonliterate subjects, the processes of reasoning and deduction associated with immediate practical experience follow well-known rules. These subjects can make excellent judgments about facts of direct concern to them and can draw all the implied conclusions, displaying no deviation from the “rules” and revealing much worldly intelligence. The picture changes, however, just as soon as they have to change to a system of theoretical thinking—in this instance, making syllogistic inferences. (Luria 1976:114)

Luria identifies three factors which ‘substantially limit their capabilities for theoretical, verbal-logical thinking’. The first is their ‘mistrust of an initial premise that does not reproduce personal experience’. (Quite a sensible mistrust, one might have thought, if the objective of reasoning is to arrive reliably at the truth.) The second is their reluctance to accept premises as universals. Instead, they treat premises as ‘particular messages reproducing some particular phenomenon’. According to Luria, this means that the premises, as interpreted by the illiterates, create ‘no firm logical system or basis for logical inference’. The third factor, which is little more than an extension of the second, is that the syllogism disintegrates into ‘three independent and isolated particular propositions with no unified logic and thus no access for thought to be channeled within this system’ (Luria 1976:114–5).

What these three ‘factors’ actually identify are not features in the mental processes of the illiterates (on which they cast no new light at all) but features of Luria’s own scriptist assumptions about the syllogism, which have hitherto not been clear. For Luria, evidently, it now emerges that the syllogism itself only yields ‘rational’ or ‘logical’ thought insofar as it is treated as instantiating a structure regarded as valid quite independently of how things stand ‘in reality’. In short, the illiterates are being tested on whether they can treat words as the same decontextualized entities as their literate comrades have learnt to do at school.

It is interesting to note that more recent investigations of syllogistic reasoning in non-Western communities have reached quite different conclusions. According to Sylvia Scribner and Michael Cole, ‘most psychologists’ think that

failure to give the correct answers in tests of syllogistic reasoning is not so much a sign of inability to reason logically as it is an indication of how people understand this particular verbal form. [ … ] learning how to respond to these problems appropriately (confining one’s answer to the information given in the problem) may be a matter of particular language experiences. (Scribner and Cole 1981:155)
Nowhere does Luria entertain the possibility that the illiterates might be using ‘syllogisms’ of their own, yielding conclusions by a different method from Aristotle’s, but no less rigorous. It is an odd lacuna, given Luria’s admission that somehow, when dealing with practical matters, the illiterates manage to arrive at the right conclusions. So how did they manage those pieces of mental processing? Luria’s readers are never told. With Luria, but for the psycholinguistic metalanguage, we are back in the 19th-century world of anthropologists discussing the mental limitations of ‘primitive’ tribes. The approach is basically the same. The ‘civilized’ investigator already knows, a priori, that ‘uncivilized’ people cannot reason ‘properly’ (i.e. as we do): the only ‘scientific’ mission is to examine just where and how they fail.

RETROSPECT

The mirror image of this assumption is to be found in more recent discussions which assume from the outset that the opposite is true, i.e. that the primitive mind has no such deficit. One writer begins his discussion of logic and literacy with the matter-of-fact observation: ‘Of course all people are rational; it is part of the definition of a human being’ (Olson 1994:22). Why ‘of course’? Why begin inquiry by appealing to a definition that already settles the controversial issue in advance? The same writer goes on to assure us that ‘humans could always reason, but they did not always reason about reason’ (Olson 1994:35). This advance, it is implied, is what we owe to the Greeks. They formulated the ‘rules’ for reasoning that ‘make up logic’ (Olson 1994:35). But from the perspective of the present inquiry, this would be tantamount to allowing scriptist prejudices to dictate in advance the interpretation of intellectual history.

Suzanne Romaine notes that ‘what goes by the name of ‘logic’ in language is mainly an acquired way of talking and thinking about language which is made possible largely by literacy’ (Romaine 1994:200). An example is the way teachers ‘correct’ children’s use of double negation in sentences like I don’t have no money. This is condemned on the ground that two negatives make a positive. Therefore, it is argued, it must mean I have some money. It would be ‘illogical’ to suppose that it meant anything else (Romaine 1994:88). So those speakers who habitually use double negatives in English where ‘educated’ speakers use single negatives are thus shown to be deficient in reasoning.
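The fallacy in the teachers’ argument is easily exhibited (the notation is added here for clarity). In the propositional calculus it is indeed the case that

\[ \neg\neg p \equiv p \]

but the equivalence settles nothing about English sentences unless each negative word is assumed to contribute its own independent logical negation. In a negative-concord variety the two markers jointly express a single negation: I don’t have no money expresses \( \neg p \) (where p is ‘I have money’), not \( \neg\neg p \). The equivalence is true but irrelevant; how sentences map onto propositions is a question of grammar, not of logic.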
The literate mind automatically assumes that its own linguistic practices are ‘logical’ and that any which deviate from them are not. It is possible to extend this assumption across languages and societies: this is exactly what the 19th-century doctrine of the primitive mind did.

In its heyday it was a doctrine that fulfilled a number of functions simultaneously. Some of these were political, some religious and some academic. It served to put a stop to what otherwise risked elevating half-naked savages to the mental level of Europeans, or else embarking on an unending chain of unanswerable questions about human rationality. It purported to show that no purpose is served in pursuing such conundrums as why some people genuinely believe that the sun is a white cockatoo or that they themselves are red parrots. The all-purpose answer supplied by the doctrine of the primitive mind is that such beliefs are beliefs of a type that only primitive peoples, or those with as-yet-undeveloped minds, can entertain without any feeling of incongruity or irrationality because their thinking differs qualitatively from that of which civilized peoples are capable. That, quite simply, is the way the primitive mind is.

Nowadays the notion that the primitive mind was any different from the modern mind is officially rejected by self-styled ‘cognitive archeologists’, who prefer the designation ancient mind. Oddly, nevertheless, they regard discovering how the ‘ancient mind’ worked as a problem yet to be solved. On behalf of cognitive archeology, it is claimed:

No distinction is implied between the ancient mind and the modern mind. It is important at once to assert that no a priori argument is entertained about some notional series of evolutionary stages in human cognition, and no such assumption is made here. In the same way we are uneasy about the title of Lévi-Strauss’s work The savage mind [ … ]—La pensée sauvage in the original French. We make no assumptions about different kinds or categories of thought. (Renfrew 1994:5)

Perhaps that is why the publications of cognitive archeologists on the whole tell us more about the cognitive processes of archeologists than about those of the ancient mind. They certainly tell us little or nothing about the principles of ancient reasoning.

The doctrine of the primitive mind was itself the product of a certain conception of rationality. It decreed an intellectual limit to certain forms of inquiry, in the same way as, for physicists of the 19th century, the doctrine of the indivisible atom decreed a limit to scientific attempts to resolve the material world into its most basic components. But the way that limit was drawn in the case of rationality already begged a number of crucial questions. In particular, it often assumed that the irrationality of many a primitive belief was self-evident, and could be exhibited simply by the anthropologist providing a translation of the relevant proposition into a modern European language. That assumption in turn raised a linguistic problem to which the doctrine itself supplied no solution; namely, how it was possible to be sure that the rational anthropologist’s translation of the irrational proposition was correct. What guarantee was there that language could provide a reliable bridge across the mental gulf separating reason from unreason?
4 Reason and Primitive Languages

WINDOWS ON THE MIND

Both advocates and opponents of the doctrine of the primitive mind cited linguistic evidence in support of their positions. The assumption that languages provide ‘windows on the mind’ was common ground between them. But the conclusions they drew from this evidence differed widely.

The whole linguistic discussion is coloured and complicated by issues inherited from 19th-century comparative philology, which in certain respects perpetuated unquestioningly distinctions derived from traditional European grammar. The so-called ‘parts of speech’ were treated as general names of classes. Nouns (man, house, ox, etc.) were supposed to be ‘names of things’, verbs (walk, hit, fall, etc.) were ‘names of actions’, adjectives (tall, heavy, red, etc.) were ‘names of properties’. Here we see the practical application in the pedagogic sphere of the same kind of analysis that the Western tradition had taken over from the Greeks. Underlying it were universalist assumptions about language and the mind that can be traced back to Aristotle: within an Aristotelian framework it seemed natural to treat every language as having to provide some way of expressing a range of basic perceptions and circumstances common to all humanity. Thus when the comparative philologists compared one language with another (Sanskrit with Greek, or French with German) there was no preliminary theoretical inquiry into whether a viable basis for comparison could be established. The availability of such a basis, together with its exemplification in the traditional descriptions of Indo-European languages, was simply presupposed.

Nor was any attempt made to clarify the traditional notion of ‘naming’. Names of things, names of actions and names of properties were all taken for granted without further inquiry, in spite of the obvious dissimilarities in the grammatical behaviour of the corresponding words. Whether the sense in which the noun ox ‘names’ a type of animal is parallel to that in which the verb run ‘names’ a type of action, or the adverb outside a type of relation, was left both unasked and unanswered. The result was to build up a vague picture of languages as consisting basically of lists of ‘names’
for everything, presumably originating with some primitive baptismal act of the kind attributed to Adam in the Book of Genesis, or, as by Socrates, to some mythical ‘name-giver’ (nomothetes).

REOCENTRIC AND PSYCHOCENTRIC VIEWS OF LANGUAGE

Given this nomenclatorial approach, two views about the basis of language can be distinguished in the Western tradition. Aristotle is the source of both. One is the assumption that what words name or ‘stand for’ are things that exist independently of human consciousness: the main function of words is simply to designate objective features of the external world. The other is the assumption that what words name or ‘stand for’ are ideas present in the human mind: words reflect how human beings perceive the world, rather than the world as it is. These two assumptions may be called ‘reocentric’ and ‘psychocentric’ respectively. They may be combined (as they were by Aristotle himself: De Interpretatione 16a), and varying combinations may assign priority to the reocentric component over the psychocentric component, or vice versa. More often, however, the question of priority is simply held in abeyance.

One commonplace example of this would be the practice followed today in bilingual dictionaries. The Anglo-French lexicographer who unhesitatingly lists chien as ‘the French word for’ dog vouchsafes no information as to the basis for this equivalence. There seem to be two possibilities. One is the reocentric assumption that dogs are the same animals on either side of the English Channel (or the world over): so that what underwrites the lexical equivalence chien = dog is a physical, biological fact about the canine population. The other possibility is psychocentric: that the concept ‘dog’ is the same in French minds as in English minds. So what guarantees the equivalence chien = dog is not biology but a psychological fact, viz. that people’s ideas about dogs do not differ. (That equation would not hold, patently, if in France—but not in England—the dog was a sacred animal, or if eating dogs were a common practice in one country but an abomination in the other. On the other hand, it would still hold even if public beliefs about dogs turned out to be quite mistaken.)

The relevance of all this to the doctrine of the primitive mind cannot be overestimated. Central to many arguments about language and the mind is the (tacit) psychocentric assumption that a society’s language will reflect more or less directly that society’s perceptions of and assumptions about reality, rather than reality itself. The very fact that different languages do not offer parallel lists of names for exactly the same aspects of the external world, and that some languages draw distinctions that other languages ignore, is often taken to show that names, whatever else they may be, are not just verbal labels for the same universal inventory of items supplied in advance by God or Nature. For in that case one would expect to find a
far closer correspondence between historically unrelated languages than is actually evident. The fact that even in ‘civilized’ languages one can occasionally find words that supposedly ‘stand for’ non-existent animals (e.g. unicorn) or non-existent substances (e.g. phlogiston) is taken as further proof that a rigidly reocentric account of names, pairing names with ‘real’ classes, will not do. The existence of names for abstractions (such as honesty, tolerance, etc.) offers another difficulty for the reocentric account, since the relation between e.g. the word honesty and honesty itself is manifestly not on a par with the relation between the word dog and the class of dogs. These considerations lay the ground for a plausible presumption that languages provide a public mirror not of the world as it actually exists but of the ways different communities habitually view the world, and the beliefs they habitually entertain about it.

PSYCHOCENTRISM AND PORT-ROYAL

An explicit rationale for the thesis that universal mental operations underlie and explain linguistic structure was provided in 17th-century France by the Port-Royal school. There, in the hands of Arnauld, Lancelot and Nicole, psychocentrism was made the basis of teaching both grammar and logic. According to the Port-Royal Grammaire générale et raisonnée (1660), there are three essential operations of the mind (opérations de nostre esprit): conceiving, judging and reasoning. Conceiving is the operation by which the mind singles something out for attention; judging is the operation of grasping that the thing in question has—or does not have—certain properties; reasoning is the operation of using two judgments to arrive at a third. Thus, for example, the conclusion that patience is praiseworthy may be reached from two prior judgments—that all virtues are praiseworthy and that patience is a virtue—which in turn presuppose that the mind has already grasped what patience is, what virtue is and what praise is (Lancelot and Arnauld 1660:28). In the Port-Royal logic (La logique ou l’art de penser), a fourth operation is added to these three: ordering (ordonner). This is described as arranging ideas, judgments and reasons in whatever way is appropriate for knowledge of the particular thing under consideration (Arnauld and Nicole 1683). But it is not required in grammar for the purpose of explaining what a noun is, what a verb is, or any of the other parts of speech.

[ … ] men having had need of signs to mark everything that is going on in their minds, it is necessary also to have the most general distinction between words, having some to signify the objects of thought, and others the form and manner of our thoughts, although often that is not signified alone, but with the object, as we shall show.
Words of the first sort are what are called nouns, articles, pronouns, participles, prepositions and adverbs. Those of the second are verbs, conjunctions and interjections. All of which are derived by a necessary consecution from the natural manner of expressing our thoughts, as we shall demonstrate. (Lancelot and Arnauld 1660:29–30)

Whether the demonstrations supplied in the Grammaire générale et raisonnée are convincing is another matter. The relevant point here concerns the conviction that semantic explanations of this kind can be found to explain grammatical structure, not just for French, but in principle for any language.

The comparative philologists of the 19th century worked on just such a basis of semantic assumptions, which they never made explicit or felt it necessary to justify. One result of this was that, in analysing any newly discovered language, the first question the Western scholar would ask was: ‘What thoughts do these unfamiliar verbal forms and constructions express?’ For only then could analysis proceed. The thoughts in question were assumed to be those not of any individual, but of the collectivity whose language was being studied. William Dwight Whitney, the most distinguished American philologist of his generation, was voicing a common academic opinion when he wrote:

Language is but the instrument for the expression of thought. If a people has looked at the world without and within us with a penetrating and discerning eye, has observed successfully the resemblances and differences of things, has distinguished well and combined well and reasoned well, its language, of however apparently imperfect structure, in the technical sense of that term, enjoys all the advantage which comes from such use; it is the fitting instrument of an enlightened mind. [ … ] In another sense also a language is what its speakers make it: its structure, of whatever character, represents their collective capacity in that particular direction of effort. It is, not less than every other part of their civilization, the work of the race; every generation, every individual, has borne a part in shaping it. (Whitney 1880:223–4)

Here we have in a nutshell the classic 19th-century philological version of the collectivist theory of mind. (It is relevant to the present discussion to note that, according to Whitney, if the community has ‘reasoned well’ that becomes embodied in their language. And presumably if they have reasoned badly, that too will be reflected. But in either case the reasoning comes first.)
LINGUISTIC TYPOLOGY

Feeding into this theory were current views of linguistic typology. It was held that all the world’s languages fell into a very small number of basic ‘types’. ‘Analytic’ (or ‘isolating’) languages were languages whose words were invariable, using word order alone to indicate syntactic relationships. ‘Synthetic’ languages, by contrast, had variable words, and could be subdivided according to the type of variation in question. In ‘agglutinating’ languages, the typical word consisted of a linear concatenation of meaningful elements, each expressing a single idea; whereas in ‘inflectional’ (or ‘fusional’) languages the typical affix would express a combination of ideas. ‘Polysynthetic’ (or ‘incorporating’) languages tended to favour long, complex word forms, particularly combinations expressing ideas by means of affixes attached to the verb.
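A schematic contrast may help to fix these terms (the sketch below is an added illustration; the examples, Turkish evlerde ‘in the houses’ and Latin domibus ‘to/for/from the houses’, are standard textbook ones, not drawn from the authors discussed in this chapter):

    # Illustrative sketch of two of the 'types' described above.

    # Agglutinating: one affix per idea, concatenated in a fixed order.
    # Turkish ev-ler-de = house + PLURAL + LOCATIVE, 'in the houses'.
    agglutinating = [("ev", "house"), ("ler", "PLURAL"), ("de", "LOCATIVE")]
    word = "".join(m for m, g in agglutinating)    # 'evlerde'
    gloss = "-".join(g for m, g in agglutinating)  # 'house-PLURAL-LOCATIVE'

    # Inflectional ('fusional'): a single affix expresses a bundle of ideas.
    # Latin dom-ibus: -ibus fuses number (plural) with case (dative/ablative)
    # and cannot be segmented into a 'plural' piece and a 'case' piece.
    fusional_affix = {"ibus": {"number": "plural", "case": "dative/ablative"}}

An analytic language, on this scheme, would carry the same information by separate invariable words and their order, as English does in in the houses.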
This kind of ‘typology’ had been initiated early in the 19th century by the brothers August Wilhelm and Friedrich von Schlegel. It was then taken up by the German polymath Wilhelm von Humboldt. Shortly after his death in 1835, his brother brought out a three-volume work, which Humboldt himself had not managed to complete, on the ancient Kawi language of Java. It contained an introduction significantly entitled Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluss auf die geistige Entwicklung des Menschengeschlechts (Humboldt 1836). Humboldt’s theory was based on the quasimystical notion that in speech the universal human spirit was striving for expression in sound. According to Humboldt, this attempt was more successful in some languages than in others, depending on how far sounds succeeded in capturing the various ideas they were called upon to express; and this in turn depended on word-structure. In Humboldt’s view the language most closely approaching the ideal structure was Sanskrit, closely followed by Greek. At the bottom of the ladder came Chinese. Humboldt claimed that ‘the perfecting of language demands that every word be stamped as a specific part of speech, and carry with it those properties that a philosophical analysis of language perceives therein’ (Humboldt 1836:140). But, as his critics pointed out, this is an arbitrary criterion with a built-in bias in favour of the Indo-European languages.

After Humboldt, different scholars gave their own versions of the various language-types. According to some, the main types corresponded historically to different phases of linguistic evolution (a claim Humboldt had not made). Max Müller, for instance, held that there were or had been three such phases (Müller 1861:298ff.). In the most primitive, which he called ‘radical’, ‘roots may be used as words, each root preserving its full independence’. In the highest, which he called ‘inflexional’, ‘two roots may be joined together to form words, and in these compounds both roots may lose their independence.’ In between these two stages, he recognized an intermediate ‘terminational’ stage (commonly called ‘agglutinative’) in which ‘two roots may be joined together to form words, and in these compounds one root may lose its independence.’ According to Müller, the great majority of languages fall into the intermediate category.

In Müller’s typology there is no necessary correspondence between language-type and civilization. For although Müller holds that the Aryan and Semitic languages are those that ‘best represent the highest or inflectional stage’, nevertheless Chinese, spoken in a land with a long and impressive record of civilized institutions, belongs to the lowest or ‘radical’ type. Müller explains this by supposing that every language, ‘as soon as it once becomes settled, retains that morphological character which it had when it first assumed its individual or national existence’ (Müller 1861:342). But why this should be, and why further typological development is inhibited from that point on, remain unexplained.

Müller’s account of the differences between languages representing the higher two stages is couched in overtly psychocentric terms:

The chief distinction between an inflectional and an agglutinative language consists in the fact that agglutinative languages preserve the consciousness of their roots, and therefore do not allow them to be affected by phonetic corruption; and, though they have lost the consciousness of the original meaning of their terminations, they feel distinctly the difference between the significative root and the modifying elements. Not so in the inflectional languages. There the various elements which enter into the composition of words, may become so welded together, and suffer so much from phonetic corruption, that none but the educated would be aware of an original distinction between root and termination, and none but the comparative grammarian able to discover the seams that separate the component parts. (Müller 1861:337–8)

Here ‘the language’ is reified as a living being with its own self-consciousness. It even has its own moral standards, as we see in the insistence on calling sound-change ‘phonetic corruption’. Furthermore, according to Müller, the uneducated masses may not actually understand the structure of the language they speak. They unwittingly preserve and pass on linguistic forms about which they are profoundly ignorant. It is against this background that one must assess the use of ‘linguistic evidence’ in the debate about the primitive mind.

PRIMITIVE CLASSIFICATION

Three years after Müller’s death, and seven years before the appearance of Lévy-Bruhl’s Les fonctions mentales dans les sociétés inférieures, Émile Durkheim and Marcel Mauss published their influential essay ‘De quelques formes primitives de classification’. It has been described as ‘a
sociological classic’, and one which ‘draws attention, for the first time in sociological enquiry, to a topic of fundamental importance in understanding human thought and social life’ (Needham 1963:xxxiv). Although their academic agenda was far removed from that of Müller (who is mentioned only once, in a footnote), it is clear that Durkheim and Mauss too subscribed to the doctrine of the primitive mind. Theirs was an extreme version. They held that our primitive ancestors could hardly distinguish one idea from another.

It would be impossible to exaggerate, in fact, the state of indistinction from which the human mind developed. Even today a considerable part of our popular literature, our myths, and our religions is based on a fundamental confusion of all images and ideas. They are not separated from each other, as it were, with any clarity. Metamorphoses, the transmission of qualities, the substitution of persons, souls, and bodies, beliefs about the materialization of spirits and the spiritualization of material objects, are the elements of religious thought or of folklore. Now the very idea of such transmutations could not arise if things were represented by delimited and classified concepts. The Christian dogma of transubstantiation is a consequence of this state of mind and may serve to prove its generality. (Durkheim and Mauss 1903:5)

Here we see the postulation of a primitive mentality which is not merely prelogical but ‘preconceptual’. A concept, according to Durkheim and Mauss, must have determinate boundaries. It is the basis of modern forms of classification, enabling the thinker to arrange things ‘in groups which are distinct from each other, and are separated by clearly determined lines of demarcation’ (Durkheim and Mauss 1903:4). The absence of any such clearly determined lines of demarcation is what characterizes primitive thinking. This was hardly a novel idea. In the 18th century Vico had held that ‘the first men, the children, as it were, of the human race’ were not ‘able to form intelligible class concepts of things’ (Vico 1744:74). What is interesting is its adoption by Durkheim and Mauss as a tenet of modern sociology. According to Durkheim and Mauss, primitive thinking still exists, but ‘only as a survival’:

and even in this form it is found only in certain distinctly localized functions of collective thought. But there are innumerable societies whose entire natural history lies in etiological tales, all their speculation about vegetable and animal species in metamorphoses, all scientific conjecture in divinatory cycles, magic circles and squares. In China, in all the Far East, and in modern India, as well as in ancient Greece and Rome, ideas about sympathetic actions, symbolic correspondences, and astrological influences not only were or are very widespread, but exhausted or still exhaust collective knowledge. They all presuppose the belief in
the possibility of the transformation of the most heterogeneous things into one another, and consequently the more or less complete absence of definite concepts. (Durkheim and Mauss 1903:5–6. Italics added.)

Durkheim and Mauss go as far as to cast doubt on whether the primitive thinker at the lowest level even recognizes himself as a distinct individual:

If we descend to the least evolved societies known, those which the Germans call by the rather vague term Naturvölker, we shall find an even more general mental confusion. Here, the individual himself loses his personality. There is a complete lack of distinction between him and his exterior soul or his totem. He and his ‘fellow-animal’ together compose a single personality. (Durkheim and Mauss 1903:6)

It lies outside the scope of the present discussion to consider what grounds Durkheim and Mauss had for making these claims. What is relevant is the extent to which, and the form in which, they too embraced the doctrine of the primitive mind.

In retrospect, the importance of the essay that Durkheim and Mauss published in 1903 seems to reside in the way that it brought to the fore for the first time the general question: ‘On what social basis does linguistic classification rest?’ Durkheim and Mauss may at least be credited with seeing that processes of classification cannot be assumed to result from the application of universal principles:

The faculties of definition, deduction, and induction are generally considered as immediately given in the constitution of the individual understanding. Admittedly, it has been known for a long time that, in the course of history, men have learned to use these diverse functions better and better. But it is thought that there have been no important changes except in the way of employing them; that in their essential features they have been fully formed as long as mankind has existed. [ … ] Logicians and even psychologists commonly regard the procedure which consists in classifying things, events and facts about the world into kinds and species, subsuming them one under the other, and determining their relations of inclusion or exclusion, as being simple, innate, or at least instituted by the powers of the individual alone. Logicians consider the hierarchy of concepts as given in things and as directly expressible by the infinite chain of syllogisms. (Durkheim and Mauss 1903:3–4)

But what if reasoning is not innate, and what if there is no ‘hierarchy of concepts’ that is reocentrically ‘given in things’? Where, in that case, do classifications come from? The answer that Durkheim and Mauss
propose is that they come from society and social organization, and that this origin can still be seen in primitive societies. The trump card they play in support of this thesis is a type of classificatory system found primarily among Australian aborigines and Amerindian peoples, and expressed in their languages. The salient feature of this type of system is that ways of dividing up the world of Nature mirror the social divisions in the world of the people concerned. At Mount Gambier in Australia, for example, the whole community is divided into two moieties, Kumite and Kroki, each of which is subdivided into five totemic clans. But the ten classes thus produced are not limited to the men and women of the community. ‘All things in nature belong to one or other of these ten sub-classes’ (Durkheim and Mauss 1903:18–19, quoting Curr 1886–7:III.461). Thus smoke, honeysuckle and trees fall under one category, while dogs, fire and ice fall under another. These classes are clearly not based on any ‘natural’ differences that a European would recognize. And if one asks what is the common feature shared by the items in any such grouping, it is in many cases difficult to suggest any common feature except that they are so grouped by the aboriginal people themselves, who have words reserved for designating each of these categories. However, the names given to the categories do not constitute the main source of evidence about the community’s recognition of the distinctions in question. For they are widely implemented in social practices, determining, for example, who can marry whom, who can hunt what animals, who can eat which kinds of foods, and in the organization of various ceremonies and rituals. This is therefore a system of which the consequences pervade every aspect of aboriginal life.
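The shape of such a system is easily modelled in outline (a purely illustrative sketch: only the two moiety names and the two groupings of natural things come from the passage just quoted; the numbered clan labels and the prohibition rule are invented for the example):

    # Illustrative sketch only. The moieties and the two groupings of natural
    # things follow the text; the numbered clans and the rule are invented.

    # The ten classes: two moieties, each subdivided into five totemic clans.
    classes = {("Kumite", clan): set() for clan in range(1, 6)}
    classes.update({("Kroki", clan): set() for clan in range(1, 6)})

    # Every thing in nature is assigned to one of the ten classes.
    classes[("Kumite", 1)] |= {"smoke", "honeysuckle", "trees"}
    classes[("Kroki", 1)] |= {"dogs", "fire", "ice"}

    def forbidden(person_class, thing):
        """A hypothetical rule of the kind described, e.g. that one may
        not hunt or eat what belongs to one's own class."""
        return thing in classes[person_class]

What such a sketch makes plain is that once everything in Nature has a place in the system, rules about marriage, hunting and eating can simply be read off the class assignments.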
What Durkheim and Mauss are keen to show is that primitive classification, of the kind found among Australian aboriginal communities, is not arbitrary, even though it may appear so to outsiders, but based on patterns of social organization firmly rooted in those communities. However, they continue to describe the way these classifications operate as if dealing with an obscure system of logic. ‘They are not products of a logic identical with our own. They are governed by laws which we do not suspect’ (Durkheim and Mauss 1903:21).

Arguably, the mistake Durkheim and Mauss are making here is that, as their title indicates, they are treating as classification—in the European sense—a process which in practice is not classification at all, but something different. They make this mistake because their sophisticated European education has taught them to treat all general terms as class names. So when they find a word in some strange language apparently applicable to such disparate items as dogs, fire and ice, Durkheim and Mauss interpret this by supposing that the speakers have a class concept which, bizarrely, includes members whose grouping together defies European ‘logic’, since—considered ‘objectively’ (i.e. as products of Nature)—the items have nothing in common. So Durkheim and Mauss supply what is in effect an alternative psychocentric semantics for these terms. According to this alternative explanation, each such class name designates a collection of items grouped together not on the basis of natural similarity, but on the basis of traditional folk beliefs about their relevance to social life. What Durkheim and Mauss fail to see is that they are just forcing the evidence into the Procrustean semantics of their own cultural tradition. (The Procrustean principle is: ‘Meanings that are not reocentric must be psychocentric’.) What the evidence suggests, however, is that we are dealing with terms and concepts having the semiological function not of classifying a given set of items, but of integrating a whole range of activities (marriage, hunting, feasting, etc.) which, if left unintegrated, would—it is feared, whether rightly or not—result in social chaos. It hardly needs stressing how this will give rise to forms of reasoning that a European would find bizarre (‘Her family are lizards: therefore she must not eat fish’, etc.). In such cases, the prohibition seems to be not externally imposed (e.g. by law, or royal decree, or any other human agency) but unchallengeable—part of the social structure—because it is built into the language everyone uses.

THE AMBIVALENCE OF ABSTRACTION

Lévy-Bruhl makes a quite different use of linguistic evidence. For him, linguistic features reveal aspects of primitive thought that might not otherwise be apparent to the investigator. The most general of these concerns the way in which North American languages take care ‘to express concrete details which our languages leave understood or unexpressed’ (Lévy-Bruhl 1910:140). Thus (quoting an example given by Powell) a Ponka Indian, in saying that a man killed a rabbit, would have to say the equivalent of ‘the man, he, one, animate, standing (in the nominative case), purposely killed by shooting an arrow the rabbit, he, the one, animal, sitting (in the objective case)’. The form of the verb would express whether the killing was done accidentally or purposely, whether it was done by shooting, and if so whether a bow and arrow or a gun was used.

The first thing that strikes one about this example is that, just like Müller, Lévy-Bruhl reifies, or rather personifies, the language being described. The language itself becomes the agent of analysis, not the speaker. Speakers, the implication is, have no control over the distinctions the language obliges them to make. So there is no guarantee that any individual consciously views the shooting of the rabbit in this complex way at all.

Another example (quoted from Gatschet’s grammar of the Klamath language) is plurality in Klamath, where the word nep ‘means hands as well as hand, the hand, a hand, but its distributive form nénap means each of the two hands or the hands of each person when considered as a separate individual’ (Lévy-Bruhl 1910:141). This state of affairs is contrasted with ‘our own’ system in the following terms:
We oppose the singular to the plural; a subject or object is either singular or plural, and this mental habit involves a rapid and familiar use of abstraction, that is, of logical thought and the matter it deals with. Prelogical mentality does not proceed thus, however. (Lévy-Bruhl 1910:141)

Why having a separate plural form should be regarded as more characteristic of ‘logical thought’ than having a separate distributive form is never explained. Gatschet is then quoted on ‘the mind’ of the Klamath Indian:

To the observing mind of the Klamath Indian [ … ] the fact that things were done repeatedly, at different times, or that the same thing was done severally by distinct persons, appeared much more important than the pure idea of plurality, as we have it in our language. (Lévy-Bruhl 1910:141)

This is a blatant case of arguing from the absence of a grammatical category to a mental attitude on the part of the language-user, and then producing the mental attitude as an explanation of why the grammatical category is ‘missing’. Nothing could be more circular.

A no less obvious flaw vitiates Lévy-Bruhl’s arguments concerning tense forms (Lévy-Bruhl 1910:145ff.). Whenever he finds ‘primitive’ languages with a more complex system of tenses than French, he takes this to show the presence of ‘a mentality which makes little use of abstraction’. But exactly the same evidence could be made to support quite the opposite conclusion. For, on Lévy-Bruhl’s assumptions, it is difficult to see how any grammatical distinction could arise in any language without a mental effort of abstraction on the part of its speakers: hence if French marks fewer tense distinctions than some other language, then French speakers presumably make less ‘use of abstraction’.

Similar considerations apply to vocabulary. The fact that Tasmanians have many words for ‘every variety of gum-tree or bush’, but ‘no word for tree’ (Lévy-Bruhl 1910:170) does not necessarily show—as Lévy-Bruhl assumes—their incapacity to form the abstract idea of the hyperordinate concept ‘tree’. It could simply be that they are perfectly well aware of what all the trees in their land have in common, but that this knowledge is of little practical relevance to the way of life they lead.

The root of the trouble in all these cases can be traced to Lévy-Bruhl’s question-begging psychology of ‘abstraction’. It is on this that his interpretations of linguistic evidence are based. He seems to take it as a premise that general concepts are always arrived at by the exercise of mental effort directed originally at specifics: so the more general the concept, the more mental effort has gone into it. Hence languages with more general terms are indicative of a higher mentality. But this simplistic notion fails to recognize what everyday
experience suggests: that a general concept might be the result of a failure to notice—or be interested in—specific differences. In other words, the generalization might be indicative of less rather than greater mental effort.

BOAS, WHORF AND LINGUISTIC RELATIVITY

The ‘benighted’ view of the primitive mind presented by Lévy-Bruhl is often contrasted with the ‘enlightened’ view taken by such scholars as Franz Boas and Benjamin Lee Whorf. Although Boas is sometimes regarded as a relativist, his position was significantly different from that of Whorf. According to Whorf, different societies see the world in different ways because they see it through the conceptual spectacles provided by their own languages. At the beginning of his famous essay on ‘The relation of habitual thought and behavior to language’ (1939), Whorf quotes with approval Edward Sapir’s claim that human beings are ‘very much at the mercy of the particular language which has become the medium of expression for their society’. Sapir went on to assert that ‘the “real world” is to a large extent unconsciously built up on the language habits of the group’. Whorf’s essay attempts to illustrate this dependency by contrasting the language of the Hopi Indians with what he calls ‘Standard Average European’. (Under this questionable label he groups a variety of common features found in the modern languages of Europe.)

In another paper on thinking in primitive communities, unpublished at his death, Whorf takes up arms against Lévy-Bruhl. According to Whorf, the notion that the mentality of primitive communities is ‘less rational’ than ours conflicts with the evidence:

many American and African languages abound in finely wrought, beautifully logical discriminations about causation, action, result, dynamic or energic quality, directness of experience, etc., all matters of the function of thinking, indeed the quintessence of the rational. In this respect they far outdistance the European languages. (Whorf 1936:80)

In Whorf’s view, the linguistic facts point to the conclusion that many primitive communities, ‘far from being sub-rational, may show the human mind functioning on a higher and more complex plane of rationality than among civilized men’ (Whorf 1936:81). Whorf, in short, attempts to stand the doctrine of the primitive mind on its head.

Do thought patterns determine linguistic expression, or vice versa? Whorf recognizes a mutual influence, but pronounces ultimately in favour of the power of the language to dictate to its users. The language of the community, in his view, ‘represents the mass mind’:
it is affected by inventions and innovations, but affected little and slowly, whereas TO inventors and innovators it legislates with the decree immediate. (Whorf 1939:156)

The reply that Boas had offered thirty years earlier was almost exactly the opposite. In the section on ‘Language and Thought’ in his introduction to the Handbook of American Indian Languages (1911), he claimed to have been able to teach a native speaker of Kwakiutl the abstract concepts of ‘love’ and ‘pity’, even though the Indian’s language had no independent words for them. Boas explained the absence of terms for generalizations in many Indian languages as being due not to any inability to grasp general concepts, but to the fact of their not being needed in the culture in question.

The fact that generalized forms of expression are not used does not prove inability to form them, but it merely proves that the mode of life of the people is such that they are not required; that they would, however, develop just as soon as needed. (Boas 1911:65–6)

Boas concludes this section of his introduction as follows:

Thus it would seem that the obstacles to generalized thought inherent in the form of a language are of minor importance only, and that presumably the language alone would not prevent a people from advancing to more generalized forms of thinking if the general state of their culture should require expression of such thought; that under these conditions the language would be moulded rather by the cultural state. It does not seem likely, therefore, that there is any direct relation between the culture of a tribe and the language they speak, except in so far as the form of the language will be moulded by the state of culture, but not in so far as a certain state of culture is conditioned by morphological traits of the language. (Boas 1911:67. Italics added.)

Boas certainly interpreted the linguistic evidence from primitive languages in a quite different way from Lévy-Bruhl, but his conclusions are no more convincing. Like Lévy-Bruhl, he assumed that the fundamental issue was whether or not the primitive mind was capable of grasping ‘abstract’ general concepts. One illustrative example Boas takes is: The eye is the organ of sight. Can an American Indian say this in his own language? Boas concedes that the Indian ‘may not be able to form the expression the eye, but may have to define that the eye of a person or of an animal is meant’ (Boas 1911:64). This Boas obviously regards as a failure to achieve the degree of generality represented by the expression the eye in the English sentence.
But is he right? It seems that Boas is confusing generalization with vagueness. The English sentence leaves it quite unclear whether this statement is to be taken as applying to human beings, to primates, to the whole animal kingdom, or to the whole of creation. The hypothetical Indian sentence (which Boas does not give) at least appears to have the advantage of precision here. But that too expresses a generalization: it is simply a different generalization from the one in Boas’s English sentence.

It is clear that Boas is committed to exactly the same error as Lévy-Bruhl. Both of them were probably taught elementary logic on the basis of Venn diagrams. Did they—or their teachers—make the mistake of supposing that a more comprehensive class inclusion is mentally ‘superior’ to a less comprehensive one, or requires more mental effort? It would seem so.

Boas was greatly admired as a linguist by Whorf, who nevertheless interprets the evidence from Amerindian languages in a quite different way from Boas. He writes:

There are cases where the “fashions of speaking” are closely integrated with the whole general culture, whether or not this be universally true, and there are connections within this integration, between the kinds of linguistic analyses employed and various behavioral reactions and also the shapes taken by various cultural developments. (Whorf 1939:159)

All this is about as vague as it possibly could be. Explanatory rigour was never Whorf’s forte. The caveat ‘whether or not this be universally true’ seems to allow that perhaps in some communities language is less closely integrated with ‘general culture’ than in others. The whole passage reflects the hesitations of a linguist engaged in a theoretical flirtation with behaviourism. For Whorf was also an admirer of Bloomfield, and behind the phrase behavioral reactions it is not difficult to detect a nod in the direction of Bloomfieldian conditioned responses.

More than just a nod, however, can be detected at the beginning of his paper on ‘Science and linguistics’, where he attacks the view that language merely serves to express ‘what is essentially already formulated nonlinguistically’ (Whorf 1940:207). He rejects the assumption that the use of language is shaped by ‘correct, rational, or intelligent THINKING’, rather than by the grammar of the language in question. He continues:

Thought, in this view, does not depend on grammar but on laws of logic or reason which are supposed to be the same for all observers of the universe—to represent a rationale in the universe that can be “found” independently by all intelligent observers, whether they speak Chinese or Choctaw. (Whorf 1940:208)
This is Whorf, as behaviourist fellow-traveller, condemning what Bloomfield had dismissed as ‘mentalism’ in linguistics. Whorf proceeds to comment on the prestige accorded in Western culture to ‘the formulations of mathematics and formal logic’. These are assumed to deal with ‘the realm and laws of pure thought’; but, for Whorf, they are just ‘specialized extensions of language’. In short, it is a complete misconception of their status to regard them as independent of grammar. According to Whorf, a ‘scientific’ examination of ‘a large number of languages’ had established that the grammar of each

is not merely a reproducing instrument for voicing ideas but rather is itself the shaper of ideas, the program and guide for the individual’s mental activity, for his analysis of impressions, for his synthesis of his mental stock in trade. (Whorf 1940:212)

Whorf shrank from maintaining that what we take to be distinctions in Nature are ‘not really there’ or are figments of the language-deluded imagination.

The categories and types that we isolate from the world of phenomena we do not find there because they stare every observer in the face; on the contrary, the world is presented in a kaleidoscopic flux of impressions which has to be organized by our minds—and this means largely by the linguistic system in our minds. We cut nature up, organize it into concepts, and ascribe significances as we do, largely because we are parties to an agreement to organize it in this way—an agreement that holds throughout our speech community and is codified in the patterns of our language. (Whorf 1940:213)

Thus eventually Whorf is obliged to fall back on the traditional notion of languages as conventional verbal ‘codes’ binding on all members of the community. He seems never to have grasped the fact that radical behaviourism and linguistic relativity are incompatible, nor that a language cannot simultaneously be a system of thought and an explanation of why the system is set up in the way it is. (That would be like insisting that the reason for there being one hundred pence in a pound is that we always find when we accumulate a hundred pence that the total comes to exactly one pound, neither more nor less.)

LINGUISTIC RELATIVITY AND SAUSSUREAN STRUCTURALISM

Although Whorfian relativity is sometimes seen as a development of structuralism (inasmuch as a language is envisaged as a self-contained system, determined by nothing external to its own internal structure of relations between units), it seems that Whorf had never read Saussure.
If he had, he would have discovered that Saussure, although insisting that a language was ‘a system of pure values’, was highly sceptical of the notion that languages either reflected or determined the psychology of communities. The question is addressed explicitly in a section of the Cours de linguistique générale headed ‘Linguistic types and group mentality’ (Saussure 1922:310–12). There Saussure objects to the common opinion that ‘a language reflects the psychology of a nation’. His objection is based on the observation that the history of languages shows us that what are regarded as typical and significant features of grammar are often the product of phonetic accidents. Over the course of time, a sound-change may obliterate what were once obligatory grammatical distinctions, thus forcing a reorganization of the system. Saussure does not deny that such changes have the effect of ‘imposing strict constraints upon thought, obliging it to take the particular path made available by the material state of the signs involved’. Nevertheless,

the psychological characteristics of the language community count for little as against facts like the fall of a vowel or a modification of stress, and many similar events capable at any time of disturbing the relation between sign and idea in any linguistic form. (Saussure 1922:311–12)

Saussure recognizes that linguistic change can have profound effects on grammatical structure. Thus, for example, in the evolution from Latin to French a system in which relations between words in a sentence are marked mainly by case endings has been replaced by one in which there are no case endings at all and the relations are marked by means of particles and word order. As a result the basic Latin sentence and the basic French sentence are completely anisomorphic structures. But it is no part of Saussure’s thesis to maintain either (1) that the new system changed the way speakers viewed the world, or (2) that a change in speakers’ world view was the underlying cause of structural change in the language.

Saussure’s rejection of the view that languages reflect national psychology is based on his premise (taken over from the Neogrammarians) that sound-changes operate mechanically and ‘blindly’, having no regard for the meanings or functions of the forms affected. Whorf rarely discusses linguistic change, so whether he would have accepted such a premise is a question that must remain unanswered. The relevant point for present purposes is that it is unclear what would count as decisive evidence for or against the claim that the group mentality correlates directly with linguistic structure.

THE LINGUISTIC IMPASSE

So we reach another academic stalemate. The debate about the primitive mind cannot be settled by appeal to linguistic evidence, because there is no agreed way to interpret the evidence.
It is evident in retrospect that neither proponents nor opponents of the doctrine were able to break out of the intellectual straitjacket provided by the traditional literate view of languages as verbal codes providing fixed pairings of forms and meanings. But once that view was accepted by both parties, the problem became insoluble, because arguments had to rely on question-begging comparisons between one code and another. Whorf, who denied that there were any primitive languages (Whorf 1942:260), was as guilty on this count as Lévy-Bruhl, who held that there were.

Whorf tried to show, for instance, that ‘the same event’ is ‘formulated’ differently in English and Nootka. Whereas an English sentence takes six words in a subject-predicate structure to say ‘HE INVITES PEOPLE TO A FEAST’, the corresponding Nootka sentence is just a single word, consisting of a root and five affixes, which translates as ‘BOIL-ED-EAT-ERS-GO-FOR-HE DOES’. In what sense these are ‘formulations’ of the same event is never explained; but it is clear that a nonlinguistic item (the event) provides the basis for Whorf’s comparison between two linguistic items (the sentences). This is reocentric semantics as blatant as anything to be found in Aristotle. (If Whorf were being consistent with his own relativist principles, he should be rejecting this kind of analysis and arguing instead that the two sentences are structurally incommensurable.)

All arguments about the mind and rationality based on linguistic evidence of this kind are worthless, because the way the evidence is selected, presented and compared already begs the very questions being asked. It rests on scriptist assumptions about the function of languages and their relationship to the mental life of their speakers; in particular, the assumption that the language of a preliterate community is in no important way different from the language of a literate community like ‘ours’.
5 The Great Divide

WRITING AND CIVILIZATION

Champions of the doctrine of the primitive mind, like their opponents, often took it for granted—without comment—that primitive peoples are preliterate. The failure to develop any art of writing was seen as being on a par with their failure to develop other arts and techniques. But treating literacy as the defining characteristic of civilized societies, and thereby equating the primitive mind with the preliterate mind, goes one step further. What anthropologists sometimes call the ‘Great Divide’ is relocated at the point where societies acquire a system of writing. But what, if any, is the connexion between being able to reason and being able to write?

The importance attached to the Great Divide is already evident in the work of Edward Burnett Tylor, the holder of Oxford University’s first Chair of Anthropology. In his magnum opus, published under the title Primitive Culture in 1871, Tylor had seen no particular significance in the advent of writing. But this view had changed very noticeably by the time he published Anthropology ten years later. There he not only devotes a whole chapter to writing, but declares that its invention was ‘the great movement by which mankind rose from barbarism to civilization’ (Tylor 1881: I.142). In order to appreciate what Tylor is claiming here, it is important to realize that for Tylor ‘barbaric’ is a stage intermediate between ‘savage’ and ‘civilized’. The barbaric stage is defined by the adoption of a way of life dependent either on agriculture or on keeping flocks and herds, as opposed to the savage stage, in which the only means of subsistence is that of the hunter-gatherer. Civilized life, Tylor asserts,

may be taken as beginning with the art of writing, which, by recording history, law, knowledge, and religion for the service of ages to come, binds together the past and the future in an unbroken chain of intellectual and moral progress. (Tylor 1881: I.18–19)
Here, probably, we see the unacknowledged source of H.G. Wells’s much-quoted remarks about writing in his Short History of the World:

It put agreements, laws, commandments on record. It made the growth of states larger than the old city states possible. It made a continuous historical consciousness possible. (Wells 1946:49)

Postulating a tripartite succession of stages in the history of humanity is familiar from the 18th century, particularly in the works of Vico and Rousseau. Tylor’s terms echo those of Rousseau (sauvage, barbare, policé), although he defines the three stages differently. Tylor justifies his own schema of the ‘three great stages of culture’ as being not only ‘practically convenient’ but as having the advantage of ‘not describing imaginary states of society, but such as are actually known to exist’ (Tylor 1881: I.19). In this way, existing societies are construed as living evidence of the past. For those who wished to establish anthropology as a ‘science’, this was an important move; for it meant that the anthropologist could go beyond mere speculation about what had happened in prehistory. Tylor makes sure the point is not lost on his readers:

So far as the evidence goes, it seems that civilization has actually grown up in the world through these three stages, so that to look at a savage of the Brazilian forests, a barbarous New Zealander or Dahoman, and a civilized European, may be the student’s best guide to understanding the progress of civilization [ … ]. (Tylor 1881: I.19)

Perhaps the first thing to note about Tylor’s three-fold division is that the third stage is defined on quite a different basis from the other two. In the cases of savagery and of barbarism, culture is defined by reference to the way society exploits Nature for purposes of survival. Writing does nothing of the kind. It is not a way of exploiting Nature but a means of communication. So there is at first sight something odd about ranking the invention of writing along with hunting game or sowing crops in the human struggle for existence. Written texts do not feed many mouths. What is it that is civilized or civilizing about writing?

Tylor sees writing as an evolution of pictorial communication. The first stage, which he calls ‘picture-writing’, may be achieved even by ‘uncivilized races’ (Tylor 1881: I.132). Tylor’s history of writing postulates a gradual progression from (1) pictures which represent people, animals and artifacts to (2) pictures that ‘stand for the sound of a spoken word’. The key to this transition is alleged to be the rebus, where, as in children’s games, the picture of an object is substituted for its name. In this way, for example, the word candidate can be rendered by drawing a can (= can-), a man being shot (= di-) and a date fruit (= date) (Tylor 1881: I.133).
When this stage is reached, ‘what the pictures have come to stand for is no longer their meaning, but their mere sound’.

This is true phonetic writing, though of a rude kind, and shows how the practical art of writing really came to be invented. (Tylor 1881: I.133–4)

Unfortunately, it does nothing of the kind. Tylor’s book appeared a decade before the publication of Garrick Mallery’s monumental study Picture-Writing of the American Indians (1893). Mallery believed that the pictographs of America ‘bear significantly upon the evolution of the human mind’ (Mallery 1893: I.28). However, unlike Tylor (whom he does not mention in this connexion), Mallery held that it was the invention of the alphabet—not the invention of writing—that was ‘the great step marking the change from barbarism to civilization’ (Mallery 1893: I.26). By this criterion, presumably, the Chinese remained barbarians for as long as they clung to their ancient script (a view held by Dr Johnson in the 18th century, and on the same grounds). But Mallery’s patiently gathered documentation does have a bearing on Tylor’s evolutionary theory of writing. It seemed to show beyond reasonable doubt that societies could remain indefinitely at a stage where occasional use was made of verbally motivated pictorial symbols, without ever being led to develop this into a full and systematic use of rebus writing.

This is not the only problem that Tylor’s theory encounters. There is a more fundamental difficulty with the psychological mechanisms that Tylor tacitly assumes to underlie the transition from pictures to writing. It is far from clear, for instance, that when an Indian whose name (Kishkemunazee) means ‘kingfisher’ (Tylor 1881: I.133) is identified in ‘picture-writing’ by a kingfisher, that is an attempt to represent his name. That is doubtless how it would appear if an Englishman called Kingfisher adopted a kingfisher emblem as his personal signature, or as a brand image on his company’s products. But Tylor is here guilty of reading back a literate interpretation into a preliterate sign. The objection to applying the term rebus to such symbols in the Indian case is that the rebus—of the kind familiar from Western children’s books—is a literate jeu d’esprit from the start. It would make no sense to someone not already familiar with writing. Nor does Tylor offer any evidence that a full system of rebus writing developed anywhere in the world.

It is difficult not to suspect that the reason why Tylor attached so much importance to writing is that writing, for him as for other Western academics of his generation, was the indispensable basis of education. It was a question of semantics rather than of anthropological evidence: ‘civilized’ had to entail ‘educated’. And even had Tylor’s generation been able to imagine what a purely non-literate system for the transmission of knowledge might in practice be like, it would automatically have been—in their eyes—an inferior system.
Which brings us back to Tylor’s admission at the beginning of Primitive Culture: it is all a matter of placing Western nations ‘at one end of the social series and savage tribes at the other’ (Tylor 1871: I.26).

Tylor’s brand of overt ethnocentrism did not die out in the 19th century. After World War 2, Sir Leonard Woolley, a world authority on prehistory and archaeology, commissioned by UNESCO to write the volume on The Beginnings of Civilization in its ‘official’ History of Mankind, could still assert that ‘knowledge of the art of writing’ is ‘the most convenient and easily recognizable criterion of civilization’ (Woolley 1963:359). One encounters the same dogma in some areas of scholarship today, not least in those concerned with the study of writing. For example: ‘Humankind is defined by language; but civilization is defined by writing’ (Daniels 1996:1). It is difficult to know whether this is a serious proposition, or just a botched attempt at an epigram.

WRITING AS MENTAL TECHNOLOGY

According to some writers, what distinguishes civilization from barbarism is not just that civilized communities are materially more advanced and better organized, but that literate communities think differently. This claim is currently endorsed by prehistorians reluctant to abandon the traditional view that history ‘proper’ begins with writing. (Presumably they fear that if this pivotal role of writing were questioned, that would at one stroke destroy the identity of their academic field of speciality.) The appearance of writing is therefore allowed to survive as one of the conspicuous features of the so-called ‘theoretic’ stage in cultural evolution (Donald 1991; Renfrew 2007). This stage is said to be characterized by ‘massive external memory storage’ and to involve ‘the transformation of “mind” through literacy’ (Renfrew 2007:204). But what this ‘transformation’ amounted to is often left in metaphorical obscurity.

A detailed case for this claim had to wait until nearly a hundred years after anthropologists of Tylor’s generation had promoted literacy to criterial status in assessing the progress of mankind. It came about as part of a new interest in forms of communication taken by a number of anthropologists, sociologists, cultural critics and literary scholars in the second half of the 20th century. Collectively, they have been called the theorists of ‘communications technology’. Over a period of roughly twenty years, from the 1960s to the 1980s, most of the assumptions about writing that had been taken for granted by Tylor were called in question.

Perhaps the boldest claims were those made by Walter Ong in a book entitled Orality and Literacy (1982). Its subtitle is The Technologizing of the Word. It is not altogether a simple matter to establish exactly what Ong’s thesis is. All Ong tells his readers is that writing, print and the computer are three ‘ways of technologizing the word’ (Ong 1982:80).
But this still leaves it uncertain whether it is the same ‘word’ that is being ‘technologized’ in all three cases, and whether, or how, it retains its identity—if it does—throughout the ‘technologizing’ process.

But Ong is in no doubt about what a technology is. A technology, as far as he is concerned, involves the use of tools (Ong 1982:81–83). Writing in all its forms depends on the use of special tools such as styli, brushes and pens, and print on type, printing machines, etc. Writing and printing are thus ‘artificial’ and not, like speech, ‘natural’ (presumably in the same way as eating with a knife and fork is artificial, whereas eating with your fingers is not, or riding a bicycle is artificial, but walking is not). There is ‘no way to write “naturally”’ (Ong 1982:82).

Ong claims that ‘understanding of the differences between orality and literacy developed only in the electronic age, not earlier’ (Ong 1982:3). This implies that Tylor and anthropologists of his generation who attached such cultural importance to writing could not have grasped these differences. But why the differences became clear only with the advent of electronic forms of communication is something else Ong does not explain. It is certainly the case that Tylor and those anthropologists who had followed him in taking writing to be criterial for civilization had placed no emphasis at all on the ‘artificiality’ of writing. Possibly this was because the appearance of writing in human history was accompanied by a profusion of other technologies. So why should writing qua technology be of particular importance?

Ong’s answer is that ‘writing restructures consciousness’ (Ong 1982:78–116). This claim is by no means perspicuous either, even after Ong has spent thirty-odd pages elaborating it. He undertakes to tell us ‘what functionally literate humans really are’: they are ‘beings whose thought processes do not grow out of simply natural powers but out of these powers as structured, directly or indirectly, by the technology of writing’ (Ong 1982:78).

Without writing, the literate mind would not and could not think as it does, not only when engaged in writing but normally even when it is composing its thoughts in oral form. More than any other single invention, writing has transformed human consciousness. (Ong 1982:78)

Before we can accept that ‘writing restructures consciousness’, however, we need to ask: consciousness of what? For unless this is specified, the claim collapses into a vague tautology. Any technology restructures consciousness, if all that involves is making the individual aware of practical possibilities and procedures that were not previously available. (Thus the technology of knives and forks restructures our consciousness of eating, of meals, of appropriate ‘table manners’, etc. The technology of the bicycle restructures our consciousness of how it is possible to go from one place to another, how quickly, and so on.)
So what is it that is restructured by the technology of writing? And exactly how? It is difficult to find any clear answers to these questions in Ong’s discussion. He makes the point that the restructuring affects not only the mental processes engaged in the actual activity of writing (e.g. when using paper and pencil to jot something down), but even what happens when the mind ‘is composing its thoughts in oral form’. This latter phrase seems to suggest that Ong considers thinking to be what goes on as the essential preliminary to speech, or even, as in the classic behaviourist view, the verbal process by which the mind silently ‘talks to itself’. But is this verbal or pre-verbal process the only kind of thinking, or the only kind of thought, that human beings have? And is the alleged ‘restructuring’ limited to this particular form of mental activity?

If so, it becomes very hard to make out a credible prima facie case for the restructuring thesis. Does a literate person’s conversational ‘yes’, or ‘no’, or ‘good morning’, or ‘nice day’, or ‘how’s your wife?’, or ‘today is Tuesday’, involve or presuppose differently structured mental processes from the oral production of these words by a person who can neither read nor write? And if it did, how could one tell? (It would be blatantly unhelpful to reply that the mental difference resides just in the fact that the literate person knows it would have been possible to write those words down.) On the other hand, if under ‘thought’ Ong includes non-verbal processes as well, it becomes even harder to see in what sense the literate person’s consciousness differs radically from that of the non-literate person. Consciousness of colours, for instance, or of distances, or of changes of temperature, seems on the face of it to be quite unaffected by whether the sensory perceptions are those of a literate person or not. If there is any such evidence of relevant differences, Ong signally fails to produce it.

Ong takes an extremely narrow view of writing (or, more exactly, of what he calls ‘true writing’). This, he declares, ‘is a representation of an utterance, or words that someone says or is imagined to say’ (Ong 1982:84. Italics in the original.). ‘Writing is always a kind of imitation talking’ (Ong 1982:102). The trouble is that this immediately excludes from consideration as ‘writing’ many things which have an obvious claim to be included. An algebraic formula is in no sense a representation of an utterance (even though it may be read aloud in many different languages). A symphony score is not ‘a kind of imitation talking’, but it is a written text nevertheless. By squeezing the concept of ‘true writing’ into this linguistic corset Ong both reveals his own intellectual bias as a literary scholar and at the same time reduces his grandiose claim about the ‘restructuring of consciousness’ to near vacuity.

However, according to Ong writing does not immediately or automatically bring a restructured consciousness with it. Ong makes a great deal of a process he calls ‘interiorizing’ the technology. To explain this, he draws a parallel between writing and music.
A violin is an instrument, which is to say a tool. An organ is a huge machine, with sources of power—pumps, bellows, electric generators—totally outside its operator. [ … ] The fact is that by using a mechanical contrivance, a violinist or an organist can express something poignantly human that cannot be expressed without the mechanical contrivance. To achieve such expression of course the violinist or organist has to have interiorized the technology, made the tool or machine a second nature, a psychological part of himself or herself. This calls for years of ‘practice’, learning how to make the tool do what it can do. (Ong 1982:83)

It is difficult to feel altogether comfortable with this account of ‘interiorization’ for at least two reasons. In the first place, many people would question whether the appreciation of music necessarily has anything at all to do with being able to play an instrument. Doubtless the violinist approaches a violin concerto from the viewpoint of a performer, recognizing difficulties and subtleties that only the experience of performing would lead one to detect. But does this mean that a violin concerto is lost musically on someone who has never played a violin at all? And if it is possible to enjoy listening to a violin concerto, to be entranced and moved by it, without ever having touched a violin in one’s life, where does ‘interiorizing the technology’ come in?

In the second place, the process of ‘interiorization’ as described by Ong seems essentially to be one of gaining increasingly confident control over the operation of a tool, until it becomes, as people say, ‘second nature’. Eventually, using the tool with facility no longer requires ‘thinking about what one is doing’ all the time: it comes ‘automatically’. In short, ‘interiorization’ is another word for ‘habituation’ in the acquisition of tool-using. It is on a par with acquired competence in handling a chisel or driving a car. But if that is all, it still remains to be explained how ‘interiorization’ of a technology has any important impact on human thinking. And we are still left wondering what it is that is being ‘technologized’ by playing a violin or an organ.

According to Ong, ‘Many modern cultures [ … ] have known writing for centuries but have never fully interiorized it’ (Ong 1982:26). Among these failures he ranks ‘Arabic culture’ and ‘certain other Mediterranean cultures’. It goes without saying that American culture is not among them. ‘We’, says Ong, have ‘deeply interiorized writing’ (i.e. we Americans). Ong distinguishes between levels of technological interiorization. There even seems to be a hierarchy of interiorization itself. Writing, he claims, is an ‘even more deeply interiorized technology than instrumental musical performance is’ (Ong 1982:83). Print goes one stage further: apparently it ‘deeply interiorizes’ writing itself (Ong 1982:97). But this seems odd, because although the output of printed books has multiplied prodigiously since the Renaissance, the proportion of readers who are themselves skilled in the technology of printing is minute.
And how one measures ‘depth’ of interiorization is never explained. It is possible that Ong regards children who have difficulties with reading and writing as having failed to interiorize the technology, or not having interiorized it ‘deeply’ enough. But this is never made explicit in Orality and Literacy.

So it is quite unclear from Ong’s account how ‘interiorization’ relates to execution. Is the Wimbledon champion a player who has ‘interiorized’ playing tennis, mastered what can be done with the instruments of the game, more effectively, or more ‘deeply’, than rival players? Or is skill on the court a different matter altogether? The question is particularly important in the case of writing. Having a beautiful hand is not the same thing as being able to write a sonnet.

What seems to be going wrong here with Ong’s presentation of his case is a progressive conflation of different possible senses of the term writing. The result, unfortunately, is increasing obfuscation about what this alleged ‘restructuring of consciousness’ by writing is supposed to consist in. If the primary function of ‘technologizing the word’ is to preserve utterances by artificial means, the technologies that most readily spring to mind are those of the gramophone record and the tape recorder. It is one of the oddities of Ong’s book that neither is discussed at any point.

But Ong evidently thinks that familiarity with writing ‘technologizes’ thought processes too. Here he sets great store by the findings of Luria. According to Ong, what Luria’s research showed was that illiterates categorize differently from literates. They cannot master ‘abstract classification’, or even understand ‘abstract’ questions. Ong cannot find anything wrong with the methods and assumptions that Luria adopted, although he does—somewhat begrudgingly—concede that perhaps Luria’s subjects, unaccustomed to the discourse of the classroom, often just did not understand what they were being asked to do. Ong is quick to make the politically correct disavowal: none of this shows that the illiterates were mentally inferior in any way. Of course not.

We know that formal logic is the invention of Greek culture after it had interiorized the technology of alphabetic writing, and so made a permanent part of its noetic resources the kind of thinking that alphabetic writing made possible. In the light of this knowledge, Luria’s experiments with illiterates’ reactions to formally syllogistic and inferential reasoning is particularly revealing. In brief, his illiterate subjects seemed not to operate with formal deductive procedures at all—which is not the same as to say that they could not think or that their thinking was not governed by logic, but only that they would not fit their thinking into pure logical forms, which they seem to have found uninteresting. (Ong 1982:52)

So in the end all is well. Illiterates are welcomed into the civilized community at the price of a very small concession that Ong is only too willing to pay on their behalf; namely, renouncing any interest in ‘pure logical form’.
What ‘pure logical form’ is they cannot in any case be expected to understand, never having had the benefit of reading Aristotle.

THE ALPHABETIC MIND

Another scholar on whose work Ong relied heavily was the Yale Classicist Eric Havelock. It was Havelock who coined the phrase alphabetic mind. He regarded his studies of Greek literacy as an investigation of ‘the material conditions surrounding a change in the means of communication between human beings, social and personal’:

Underlying the analysis, and for the most part unstated but perceptible, lies the possibility of a larger and more formidable proposition, that the change became the means of introducing a new state of mind—the alphabetic mind, if the expression be allowed. (Havelock 1982:7)

Allowed or not, here it is. The coinage is not altogether unexpected, since Havelock is much given to invoking abstractions like ‘the Greek experience’, ‘the history of the Greek mind’, ‘the Homeric state of mind’, ‘the oral state of mind’ and ‘the Hellenic intelligence’, so why not ‘the alphabetic mind’? In the case of the alphabetic mind, there is no doubt as to cause and effect. It is a change in ‘material conditions’ that is assumed by Havelock to be responsible for this new ‘state of mind’, and not vice versa. The Greeks did not first develop a new mode of thought and then set about finding some way of expressing it, but the other way round.

The core of Havelock’s thesis is his controversial interpretation of Plato. For Havelock, the Republic is not a treatise about political theory at all, but an attack on poetry. In it, Plato ‘exhorts us to fight the good fight against poetry, like a Greek Saint Paul warring against the powers of darkness’ (Havelock 1963:4). But not poetry in the modern sense: ‘something more fundamental in the Greek experience, and more powerful’. What could this poetic component in ‘the Greek experience’ be? According to Ong, what Havelock succeeded in showing was that Plato excluded poets from his ideal republic

essentially (if not quite consciously) because he found himself in a new chirographically styled noetic world in which the formula or cliché, beloved of all traditional poets, was outmoded and counterproductive. (Ong 1982:24)

Ong seems to endorse Havelock’s reading of the Republic. Some Classicists take a quite different view of the text.
F.M. Cornford thought that Book 10, which is crucial to Havelock’s case and contains much of the critique of the poets, ‘has the air of an appendix, only superficially linked with the preceding and following context’ (Cornford 1941:314). Gilbert Ryle considered the Republic to be a late and not altogether successful amalgamation of quite separate discourses that Plato had previously delivered but never published (Ryle 1966:44–9). None of this fits very comfortably with Havelock’s view of it as a monument to the triumph of the ‘alphabetic mind’ (which he identifies with philosophy) over the old oral traditions of the Greeks.

Like Ong, Havelock refers to writing as a ‘technology’. But he uses that term in a wider sense. Whereas for Ong technology requires the physical use of tools or instruments, Havelock is happy to speak of ‘verbal technology’ antedating writing. He appears to be referring mainly to devices used in oral poetry, including rhythmic and metrical patterns incorporated into formulaic composition. The problem with extending the use of the term technology in this way is that writing then becomes somewhat less of an innovation in forms of expression than it previously appeared to be. Indeed, Havelock emphasizes the extent to which these essentially oral devices used by the poets were carried over into early written texts, where they were no longer strictly necessary—necessary, that is, to make what was said memorable and memorizable.

Underpinning this argument all along is Havelock’s basic but questionable assumption that the main function of poetry in a preliterate society is to preserve information and pass it on from generation to generation. For Havelock, Homer was the ‘encyclopedia’ of the preliterate Greek world (Havelock 1963:61–86). The Homeric poems are ‘enclaves of contrived language’ and ‘their contrivance was a response to the rules of oral memorization and the need for secure transmission’ (Havelock 1982:167). But oral encyclopedias are outdated once writing appears on the scene, because written texts are a much more reliable way of preserving information. Havelock goes on to argue that writing, because of its superior capacity for recording facts, has the psychological effect of liberating ‘psychic space’. The mind no longer has to be concerned with the basic task of accumulating and storing information, but is free to spend more time and energy in scrutinizing and comparing ideas of all kinds, solving intellectual problems, and so on. In brief, Havelock explains the supposed Greek genius for analytic thought as one of the mental side-effects of literacy.

However, there are some awkward facts that do not fit very neatly into Havelock’s scenario. One is that Socrates, Plato’s mentor and hero, who was obviously literate, conducted all his teaching by oral discussion and refused to put any of it into written form. Another is that Plato never praises writing as a technology or mental discipline: on the contrary, in Phaedrus and Letter 7 his remarks about writing are highly sceptical (Phaedrus 274–5, 277–8; Letter 7 341–4). Thus, in effect, we are presented by Havelock with a picture in which both Socrates and Plato, the foremost philosophers of their day, emerge as men who totally failed to realize the intellectual advantages of that ‘alphabetic mind’ of which they were, unwittingly, the ground-breaking beneficiaries and exponents. That does not say much for their own analytic powers.
DOMESTICATING THE SAVAGE MIND

Both Ong and Havelock refer with approval to the work of the Cambridge anthropologist Jack Goody, whose book The Domestication of the Savage Mind was published in 1977. When introducing his own coinage alphabetic mind, Havelock refers to a remark by Goody in which writing is described as ‘a technology of the intellect’. The phrase technology of the intellect is one Goody uses a number of times, although he never provides any further explanation of it. He suggests that examining the technology of the intellect can throw light on ‘developments in the sphere of human thinking’ (Goody 1977:10), as well as being crucial for those studying ‘social interaction’.

One thing that Goody seems to count as a ‘technology of the intellect’ is the use of syllogistic reasoning, which in turn, in his view, was ‘a function of writing’, since it was ‘the setting down of speech that enabled man clearly to separate words, to manipulate their order’ (Goody 1977:11). This, on the face of it, is a quite extraordinary claim, since the linguistic use of word order to differentiate e.g. a sentence like Peter hit Paul from Paul hit Peter is by no means confined to literate communities. If Goody were right, and our preliterate ancestors were unable ‘clearly to separate words’, their mastery of such constructions would be not just a psychological mystery but a miracle. Furthermore, if there has ever been a linguistic community whose members were unable ‘clearly to separate words’, presumably none of them could even be sure of exactly what their own name was. If there is any reliable evidence for the existence of societies where this remarkable state of affairs obtained, linguists and anthropologists have been slow to report it. One is inevitably reminded of earlier anthropological claims to the effect that the primitive mind is unable to form general concepts, or even to grasp its own identity. Such parallels do not enhance the plausibility of Goody’s generalization.

Goody claims that the introduction of writing into a previously preliterate culture has the effect of ‘transforming the nature of cognitive processes’ (Goody 1977:18). Furthermore, this is done ‘in a manner that leads to a partial dissolution of the boundaries erected by psychologists and linguists between abilities and performance’. (He repudiates, however, the suggestion that preliterate societies have no intellectual life of their own, and in particular Durkheim’s identification of ‘intellectual laziness’ as a characteristic of primitive peoples.) In terms very reminiscent of Havelock, Goody asserts that with the introduction of writing ‘the problem of memory storage’ no longer dominated ‘man’s intellectual life’:
the human mind was freed to study static ‘text’ (rather than be limited by participation in the dynamic ‘utterance’), a process that enabled man to stand back from his creation and examine it in a more abstract, generalised, and ‘rational’ way. (Goody 1977:37)

So writing, it seems, promoted an advance in rationality. But Goody goes further than some in maintaining that this ‘change in consciousness’ was not limited to ‘the alphabetic mind’, and that earlier pre-alphabetic forms of writing also ‘had an influence both on the organisation of social life and on the organisation of cognitive systems’ (Goody 1977:76).

It is an essential part of Goody’s argument that ‘writing gives a permanent form to speech’ (Goody 1977:76). But this is another extraordinary claim. Writing does nothing of the kind. Writing is not an early precursor of the tape-recording, which allows a particular utterance to be heard over and over again. Furthermore, the alleged ‘permanence’ has nothing to do with writing as such. There is a double confusion here in what Goody says. First, the survival of a text is entirely dependent on the materials used and the way they are treated. Second, regardless of the materials, the survival of a text also depends on the survival of a population of readers. Writing that cannot be read is not a text but a collection of marks. Furthermore, even under conditions of near-universal literacy such as prevail in modern Western societies, very little of what is written is destined to function as an everlasting record for future generations. This is a projection of the historian’s concerns into the lives of the historian’s sources.

According to Goody there are just ‘two main functions of writing’. One is ‘the storage function’ and the other is to ‘shift language from the aural to the visual domain’ (Goody 1977:78). This latter claim ventures into the territory of linguistic theory, and would be rejected by those linguists who do not recognize writing as a form of language at all. In their view, Goody’s analysis would be flawed from the outset by falling into the popular pitfall of assuming that ‘speech and writing are merely two different manifestations of something fundamentally the same’ (Hockett 1958:4). A scale drawing of a building does not become a building in its own right simply by representing the architectural features accurately.

The nearest Goody comes to explaining how writing could bring about the reorganization of a ‘cognitive system’ is contained in his discussion of the way that lists and tabulations present and classify information. These devices, already used extensively by early Sumerian and Egyptian scribes, certainly facilitate deliberate scrutiny and revision in a way that has no counterpart in speech. In speech, an item cannot just be ‘shifted’ from one place to another, or held in abeyance pending a reorganization of the whole sequence. These points that Goody makes are well made and well taken. But what Goody seems reluctant to admit is that examples of this kind (written lists and tables), which support his case best, are actually those which show up most clearly the misconception involved in regarding writing as just a ‘representation’ of speech.
For the (physiological) fact is that speech affords no room for structures that rely on two-dimensional relations, as do writing and all forms of drawing. It is that—and not the sensory shift to ‘the visual domain’ as such—which makes writing structurally different from speech.

MISUNDERSTANDING MEDIA

Some of these issues were first brought to a wider audience by the publications of Marshall McLuhan, particularly The Gutenberg Galaxy (1962) and Understanding Media (1964). Ong refers to McLuhan’s work in laudatory terms, praising his ‘vast eclectic learning’ and ‘startling insights’. Similarly, Havelock congratulates McLuhan on performing ‘the enormous service of bringing into focus the issue as between oralism and literacy’ and raising the question of how ‘states of mind’ are connected with ‘conditions of communication’ (Havelock 1989:88–9). Others who join Ong and Havelock in the general chorus of praise for McLuhan likewise fail to point out that he shares with Goody (who more or less ignores him) no less dubious assumptions about basic relationships between speech and writing. In McLuhan’s case, these assumptions are reflected in the way he constantly refers to the alphabet as ‘the phonetic alphabet’, as if individual sound values were somehow invisibly attached to the letters themselves. This ‘phonetic alphabet’, according to McLuhan, gives its user ‘an eye for an ear’ and by so doing releases that person from ‘the tribal web’ in which orality entraps the individual.

This fact has nothing to do with the content of the alphabetized words; it is the result of the sudden breach between the auditory and the visual experience of man. (McLuhan 1964:86)

Unlike Goody, McLuhan claims that societies with alphabetic writing are profoundly different from literate societies with non-alphabetic writing.

Civilization is built on literacy because literacy is a uniform processing of a culture by a visual sense extended in space and time by the alphabet. In tribal cultures, experience is arranged by a dominant auditory sense-life that represses visual values. The auditory sense, unlike the cool and neutral eye, is hyper-esthetic and delicate and all-inclusive. Oral cultures act and react at the same time. Phonetic culture endows men with the means of repressing their feelings and emotions when engaged in action. To act without reacting, without involvement, is the peculiar advantage of Western literate man. (McLuhan 1964:88)
Here, then, we have a position in which the Great Divide is clearly not ‘literate vs. preliterate’ but ‘alphabetic mind vs. pre-alphabetic mind’.

One of McLuhan’s more interesting but more debatable contentions is that the linear sequencing intrinsic to alphabetic writing profoundly influenced the basic Western conception of what an inference is.

In Western literate society it is still plausible and acceptable to say that something “follows” from something, as if there were some cause at work that makes such a sequence. It was David Hume who, in the eighteenth century, demonstrated that there is no causality indicated in any sequence, natural or logical. The sequential is merely additive, not causative. “Hume’s argument,” said Immanuel Kant, “awoke me from my dogmatic slumber.” Neither Hume nor Kant, however, detected the hidden cause of our Western bias towards sequence as “logic” in the all-pervasive technology of the alphabet. (McLuhan 1964:87–8)

Leaving Hume and Kant aside, the trouble is that linearity—in the sense in which alphabetic writing is ‘linear’—is also a pervasive feature of speech, as Saussure recognized when he made the linearity of the linguistic sign one of his two foundational principles of general linguistics (Saussure 1922:103).

STRUCTURAL ANTHROPOLOGY

Lévi-Strauss, the founder of ‘structural anthropology’, came up with his own version of the Great Divide.

Sociologists, who have raised the question of relations between so-called ‘primitive’ mentality and scientific thought, have usually settled it by invoking qualitative differences between the workings of the human mind in each of these domains. But they have not doubted that in both cases the mind was always contemplating the same objects. Considerations adduced in the preceding pages lead to a different conception. The logic of mythical thought has appeared to be as demanding as that upon which positive thought rests and, in the end, is not much different. For the difference relates not so much to the quality of the mental processes as to the nature of the things on which they are brought to bear. For a long time this has been recognized in the field of technology: a steel axe is not superior to a stone axe because one is ‘better made’ than the other. Both are equally well made, but steel is not the same as stone. Perhaps one day we shall discover that the same logic is at work in mythical thought as in scientific thought, and that man has always been thinking equally well.
Progress—if the term would then be applicable—would not have taken place in the theatre of consciousness, but in the world, peopled by human beings endowed with unchanging faculties, but, in the course of their long history, constantly grappling with new objects. (Lévi-Strauss 1958:254–5)

Here we see Lévi-Strauss struggling to reconcile the notion (1) that there is an important difference between scientific thinking and mythological thinking, with the notion (2) that, nevertheless, the difference has nothing to do with the quality of the thinking. For, as he puts it, ‘man has always been thinking equally well’. In effect, what he is arguing is that the doctrine of the primitive mind is intrinsically mistaken. The mistake consists in confusing reason with its application.

TECHNOLOGICAL DETERMINISM

Critics of some of the writers discussed above have derided the naivety of their basic assumptions. Florian Coulmas remarks: ‘What is surprising about this approach is that it was ever taken seriously and discussed by serious scholars’ (Coulmas 1989:160). According to Ruth Finnegan, the underlying mistake is a naive belief in technological determinism. The idea that the technology of communication is ‘in some sense the key to human history’ has come to be ‘conventional wisdom’. This ‘fashionable idea’, she thinks, has been welcomed too uncritically. We are assured on all sides that this or that new development will inevitably have a profound impact on society and ourselves. ‘There is also often the further implication that this technology is in some sense an autonomous cause of all those consequences’ (Finnegan 1989:110). This autonomy is then retrojected into the past: writing and print are identified as ‘communication technologies’ that at one time changed the world no less radically than computer-based technologies are changing it at the present day. ‘The technology is viewed as autonomous and independent of social shaping, and as more or less inescapably determining social forms and relationships’ (Finnegan 1989:113).

Here perhaps we may have the answer to the question that Ong left pending, i.e. why it was not until the electronic age that attention began to be focussed on older technologies such as writing and print. Worries about present-day conditions are being seen through a historical lens, and history is being reexamined in the hope that it might provide solutions for modernity. But that hope will be vain if the analogies between past and present do not hold.

According to Finnegan, technological determinism involves turning a blind eye to historical evidence of two kinds. One is evidence that the same technology can have different results at different times and in different places.
The other evidence is that what is presented by the determinist as one of the consequences of a new technology can also be found in societies where that technology is absent. She quotes with approval Roger Abrahams’ view that we are dealing here with a latter-day idealization of the primitive, a kind of ‘Rousseauvian trap, reinventing the idea of the nonliterate community which has maintained a kind of organic unity in the ways they act and interact’. Specifically, she argues that

a secular attitude to life, economic accounting, abstract thought, ideas of the ‘self’, and literary detachment and reflection—all held to be the result of writing or print by the technological theorists—can also be found in non-literate cultures and groups. (Finnegan 1989:116)

A variant version of determinism that Finnegan does not address contrasts the development of science in the West with its retarded progress in the East. Robert Logan’s book The Alphabet Effect (1986) is a quite remarkable tour de force in which the author attempts to persuade his readers that the basic reason why science did not develop in China at the same time as it did in Europe was that Chinese writing was not alphabetic. Logan claims that

even the most abstract scientific term must be rendered in a concrete form when it is written. This, no doubt, has had a subliminal effect on Chinese scientific thinking. (Logan 1986:55)

At precisely the point where evidence is required to lead from premises to conclusion, what one finds inserted into that gap is nothing but an evasive ‘no doubt’. But there is a great deal of doubt. Whether the script did have any effect on Chinese scientific thinking was exactly what needed demonstrating, and no demonstration has been given. Logan declares:

It is my belief that the first scientific literature, whether Oriental or Occidental, was destined to be written in an alphabetic script because the alphabet creates the environmental conditions under which abstract theoretical science flourishes. (Logan 1986:54)

All one can say is: that is interesting as a belief in predestination but hardly convincing as a thesis about scripts. Furthermore, it is paradoxical that the argument in favour of this view of scientific reasoning is made to rest on flouting one of the principles usually taken to be basic to scientific thinking. It would not matter how many civilizations with alphabetic scripts developed forms of science, or how many civilizations without alphabetic scripts did not.
Post hoc ergo propter hoc is still a fallacy, whichever kind of script you happen to be writing.

THE GREAT DIVIDE RECONSIDERED

According to their critics, the 'technologists of communication' make the basic mistake of attributing solely to writing consequences that are actually the result of various combinations of factors. Some 'technologists' protest that they do not intend to deny that other factors also count. But then the question arises of how to disentangle the contribution of literacy. Brian Street argues that it is not possible to test Goody's hypothesis 'since one cannot find an isolated society on which to test the cognitive and other consequences of "purely" oral communication' (Street 1984:46). Similarly, Rosalind Thomas doubts whether it is even possible 'to find a case where the effect of literacy by itself can be studied without being disturbed by many other factors' (Thomas 1989:26). Others have thrown doubt on the claims of both sides in the debate. One classicist rejects both Havelock and Street: 'Unfortunately, the argument is conducted in such an amateurish fashion that its force is lost' (Harris, W.V. 1989:41). So we appear to reach yet another stalemate. On the one hand, it is difficult to demonstrate that literacy itself was the key causal factor in the development of rational thinking. On the other hand, it cannot be shown that literacy was inoperative as a causal factor. Neither side in the debate can suggest a way forward.

It is now time to ask how these claims and counterclaims stand up in the light of the findings of contemporary neuroscience. The short answer is: 'Not at all well'. Literacy presupposes the ability to read. But it seems that the human brain has no dedicated 'centre' for reading. There is no 'part of the brain specifically for reading': brain scans confirm that 'circuits in the brain that are somehow important in this mental skill' are able 'to change as a result of learning' (Greenfield 2000:58). Learning to read changes the visual cortex of the brain.

Because the visual system is capable of object recognition and specialisation, the expert reader's visual areas are now populated with cell networks responsible for visual images of letters, letter patterns, and words. (Wolf 2008:147)

It seems highly likely, therefore, that when our ancestors first developed writing systems, they were indeed developing the brain in ways unknown to preliterate humanity; but this development was based on the recruitment and adaptation of resources that the preliterate brain already possessed. For present purposes what emerges as important is that, neurophysiologically, the literate brain is different from the preliterate brain. That difference cannot be dismissed as a theoretical fiction or cultural illusion.
If the literate brain exists, how about the literate mind? Even in the absence of any precise cartography mapping brain processes on to mental processes, it would be odd to claim that those with literate brains do not have literate minds, or that there is no correlation at all between the two. What has to be recognized is that, just as the literate brain is variously configured as between different individuals and communities, so too is the literate mind. If even this much is admitted as the contribution of neuroscience, there is no ground for dismissing out of hand the thesis that literate and preliterate communities tend to produce typically different mental habits, insofar as these are favoured by the brain's development of new patterns of circuitry. To that extent, the doctrines of the 'primitive mind' and the 'Great Divide'—in some version or other—have not been definitively ruled out of court by contemporary neuroscience. They are still in play.

On the other hand, as far as rationality is concerned, brain research offers no positive evidence to suggest that reasoning processes are in some way independent of other mental abilities (including linguistic and language-related abilities). Just as there is no brain 'centre' for reading and writing, there seems to be no brain 'centre' for reasoning either. In short, we have to learn to reason and to develop our reasoning, just as we have to learn to do many other things with our brains and minds. This conclusion warrants reconsidering the whole question of exactly how rationality relates to literacy. And this in turn requires us to examine how our conceptions of both depend on our assumptions about language.
6 Aristotle's Language Myth

ORIGINS OF THE MYTH

Any serious attempt to reassess the impact of literacy on conceptions of rationality has to start with Aristotle. Aristotle's historical position as the founding father of Western logic is unshakeable. For many, he not only discovered the grammar of human rationality but was its first grammarian. Furthermore, the grammar he left to posterity, set out in the texts of the Organon, remained authoritative for more than two millennia. It was not revised in any radical way until modern times. In this sense Aristotle taught generations of Europeans not only how to reason but, more fundamentally, what rationality was. Modern logicians have been virtually unanimous in acknowledging this historical debt, describing their own inquiries as 'a development of concepts and techniques which were implicit in the work of Aristotle' (Basson and O'Connor 1959:1). Consequently it verges on heresy to raise the question of whether Aristotle himself was clear about 'what rationality was'. Nevertheless, it is a question worth asking.

The discussion that follows will focus more particularly on an aspect of that question rarely raised by philosophers or historians of philosophy: to what extent Aristotle's conception of rationality was shaped by his engagement with literacy as it was understood by educated Greeks of his generation. One reason for asking it is that Aristotle was the first philosopher in the Western tradition to accumulate a personal library. That fact in itself betokens an importance of the written text for Aristotle which becomes very visible in his thinking once its traces are seen in the right light. This is not a matter of, for instance, taking care to quote predecessors verbatim, or to distinguish between different versions of a text, which Aristotle never does. Concern for that kind of philological scruple belongs to a later generation than Aristotle's. The influence of literacy on a mind trained in the Academy has to be sought at another level. It is worth noting at the outset that when we attempt to interrogate Aristotle today what we are interrogating is itself a literary construct.
This is particularly relevant to the investigation of Aristotle's conception of rationality, because the selection of texts comprising the Organon, and their being grouped together at the head of the surviving corpus of Aristotle's writings, was not the work of Aristotle but of later Aristotelians. It is also salutary to be reminded that we do not know 'who first gave to Aristotle's logical works the name "Organon"—the instrument' (Allan 1952:125). So we cannot be sure how Aristotle regarded the relationship of these texts to the rest of his work. There are signs that he treated the study of reasoning as being primarily 'of a practical nature, being undertaken in the hope of learning how to reason efficiently and prevail over opponents in debate' (Allan 1952:125). The Topics, in particular, reads in part like a debating manual. De Sophisticis Elenchis is an exposé of the sly tricks your adversary may try to get away with. These facts cannot be overlooked if one would like to have an answer to the question of how and why Aristotle came to be, apparently, the first Greek thinker to deal systematically with the matters that are nowadays thought of as his 'logic' (logike)—a term he never used as the name of a discipline (Kneale and Kneale 1984:7).

The argument that will be presented here takes as its starting point the significance of the fact that the Greeks did not begin to discuss speech and reasoning in the way Aristotle does until well after teaching reading and writing had become part of the standard curriculum of Greek education for children. According to Havelock, this did not occur much before 430 BC (Havelock 1982:187). Others put it earlier, on the basis of evidence from Attic red-figure vases (Harris, W.V. 1989:97). But whether we accept the earlier or the later date, it seems clear that reading and writing were already a well established part of elementary education considerably before Aristotle entered the Academy.

Can we assume that from time immemorial—i.e. even before they acquired the alphabet—the Greeks had always conceptualized speech in more or less the way Aristotle describes? That seems doubtful. It is more plausible to regard both Aristotle's account of speech in the Organon and his analysis of syllogistic reasoning as joint products of the advent of literacy. Aristotle's own account of these matters points in that direction. The foundation of Aristotle's account (De Interpretatione 16a) is the idea that articulated speech is simply vocal utterance deliberately produced in conformity with a set of established social conventions. The basic role of these conventions is to correlate certain sounds and sound sequences with existing non-vocal items in the human environment (as, for example, the name Socrates with Socrates, the adjective wise with being wise, grass with grass and green with the colour green). These one-one correlations are assumed to provide the infrastructure on which human verbal communication proceeds, allowing verbal units to combine, according to patterns also determined by convention, into oral complexes correlated with specific statements, e.g. Socrates is wise, Grass is green, etc. (Categories 1a).
Aristotle's account already implies or presupposes a theory of linguistic communication; namely, that the hearer understands what the speaker says in virtue of knowing the set of conventions to which the speaker's utterances conform. Only thus can the hearer grasp what it is that the speaker is saying. It is worth emphasizing that this must be Aristotle's assumption, because nowhere in the Organon or anywhere else does he provide any alternative explanation, and without it there is no possibility of grasping your opponent's argument, or your opponent grasping yours. It is important not to lose sight of the fact that for Aristotle it is dialectic that provides the context for the whole inquiry into speech and reasoning.

However, there is an elementary difficulty. How is it that the hearer can relate the sounds the speaker articulates to the same things as the speaker—in his own mind—relates them to? For while the sounds the speaker utters can be heard, what he means by them is not audible. Aristotle could hardly have overlooked this problem. In fact, it is clear that he has not overlooked it. But the way he deals with it—or rather dismisses it—is extremely unsatisfactory. He does not even discuss it but pre-empts the difficulty by declaring that the world is the same for everybody. Grass is not a different kind of vegetation for speaker and for hearer; nor green a different colour. So what Grass is green means for the speaker must mean exactly what Grass is green means for the hearer if the same linguistic conventions apply in both cases. In other words, these conventions themselves are anchored by everyone sharing a common pre-linguistic experience of the world and what it looks like, feels like, sounds like, smells like, etc., irrespective of whatever language they happen to speak.

This itself is a very shrewd epistemological move on Aristotle's part. But its shrewdness does not make it altogether convincing. It is, in effect, the counterpart of Plato's doctrine of Forms, which Aristotle rejected. The point that is important here is that this epistemological feature of Aristotle's conventionalism has important implications for his logic. The way Aristotle tells it, the fact that every language has its own words, and will not automatically be understood by speakers of foreign languages, does not prevent speakers of two different and mutually incomprehensible languages from making the same statements about the same things, regardless of whether or not they themselves recognize that, and regardless of the particular sounds they utter in order to do that. In short, all languages, for Aristotle, have a universal basis grounded in common perceptions and judgments shared by all speakers. This is Aristotle's 'universal grammar' (although the term did not appear in the Western tradition until many centuries later). Spoken words are simply agreed, conventional signs that identify and label these common elements. Whether there are common elements for which words are lacking is a question Aristotle does not raise. The only alternative view of speech ever discussed at this time in Greece is the one examined in some detail in Plato's Cratylus: i.e. the possibility that there is some 'natural' connexion between the sound of a word and what it signifies.
The conclusion there reached by Socrates is that, even if that 'natural' connexion were originally present, it has been lost or obscured in the course of time, and can in any case be postulated with plausibility for no more than a small number of words. But Aristotle, for his part, evidently did not think this worth discussing.

Where did the Aristotelian 'conventionalist' theory of speech come from? Its immediate predecessor is already up and running in Plato's Cratylus, although Socrates in the end hedges his bets on endorsing it (Cratylus 435c). But Aristotle sees that this uncertainty will not do for purposes of his syllogistic. He needs a stronger version, in which the convention becomes a fixed linguistic code, subscribed to by all debating parties. Otherwise debate inevitably degenerates into logomachical wrangling. There have to be linguistic 'rules' for its conduct. It seems highly probable that Aristotle's code-based conception of speech is derived directly from his own experience of literacy. Since this experience was doubtless roughly the same as that of his contemporaries, or at least of those who were as highly literate as he was, it would explain why Aristotle never finds it necessary to produce any arguments in favour of it, or dismiss any possible objections. If that is right, then Aristotle views speech as the oral counterpart of writing, and that perspective is treated as normal or uncontentious. The profession of speech writer (logographos), which presupposes that perspective, was well established in Athens long before Aristotle's day. The profession flourished because 'Greek law required every citizen to speak in his own behalf in prosecution or defense' (Kennedy 1963:57).

We do not know exactly how children of Aristotle's generation were taught their letters (grammata). But there is a certain amount of indirect evidence. On two occasions in Philebus Socrates appeals to the alphabet in order to illustrate a philosophical thesis. He has been arguing that there is a natural and universal principle of inquiry, a 'gift from the gods', that applies to our recognition of 'the one and the many':

we must not grant the form of the unlimited to the plurality before we know the exact number of every plurality that lies between the unlimited and the one. (Philebus 16d)

When Protarchus protests that he is not sure he has grasped the point, Socrates replies that it is clear in the case of letters, 'and you should take your clue from them, since they were part of your own education' (a remark which confirms that there was at this time a standard way of teaching the grammata, which all educated people were familiar with). Socrates goes on to explain that it is only when we know 'how many kinds of vocal sounds there are and what their nature is' that we can be considered literate (Philebus 17b). This seems to imply that Greek children were taught not just a list of individual grammata (i.e. taught to recite the 'letters of the alphabet') but a specific classification of the grammata by phonetic criteria of some kind.
This supposition is confirmed when Socrates returns to the analogy later in the same discussion, and attributes to the Egyptian divinity Theuth the discovery of the facts about letters:

He was the first to discover that the vowels in that unlimited variety are not one but several, and again that there are others that are not voiced, but make some kind of noise, and that they, too, have a number. As a third kind of letters he established the ones we now call mute. After this he further subdivided the ones without sound or mutes down to every single unit. In the same fashion he also dealt with the vowels and the intermediates, until he had found out the number for each one of them, and then he gave all of them together the name "letter". And as he realized that none of us could gain any knowledge of a single one of them, taken by itself without understanding them all, he considered that the one link that somehow unifies them all and called it the art of literacy. (Philebus 18b–d)

This again is corroborated by what Socrates says about the classification of stoikheia in Cratylus 424. There he speaks of distinguishing vowels from consonants or 'mutes', and these from a third class that are neither vowels nor 'mutes'. And, as if apologizing for the excursion into technical metalanguage, he adds 'as they are called by specialists in these matters'. It is difficult to see to whom Socrates could be referring here except those professionally engaged in teaching the grammata. It is evident from Poetics 1456b that Aristotle too was familiar with this phonetic classification of stoikheia. In Categories 4b he declares that speech (logos) is discrete or discontinuous, like number, because it is measured by long and short syllables, and its parts do not join together at any common boundary. Each syllable is separate and has no common boundary with its neighbours. This does not sound like the assertion of someone who relies on the evidence of his own ears or his own tongue: one might think it could come only from someone brought up to treat the alphabetic segmentation as a faithful reflection of the facts of speech.

The implications of this view are potentially far-reaching. Once the writing system is recognized as standing in this relation to speech, it is tempting to extend the same analogy to speech itself. In the case of speech, the prior activity will be the mental activity of thought. Aristotle seems to have taken precisely this step, for he uses the same technical term sumbolon to designate both the relation of the spoken word to what he calls 'affections of the soul' and the quite separate relation of the written word to the spoken. In this way, there emerges a neatly unified theory of the forms of linguistic communication: speech stands to thought as writing to speech. Plato's misgivings about writing are thus swept aside by Aristotle's ex cathedra postulation that the same psychological and semiological relationship is present in both cases.
If the spoken word is a sign, then the written word is a metasign. Conjoining this with Aristotle's other assumption—that the world is the same for all observers—we arrive at the following simplistic conception of what a language is (whether Greek or any other). A language is a set of conventions for describing reality and enabling thoughts to be conveyed from one person's mind to another's, either orally or visually, as the case may be. Linguistic communication is thus a process of thought-transference, or telementation. It involves the use of a public fixed code (i.e. the language in question), which permits one person to 'encode' thoughts for transmission and another person to 'decode' the thoughts previously encoded. Those who are literate can use either mode of transmission, while those who are not remain limited to the oral option.

In brief, Aristotle has invented a muthos of his own, a myth about language, in order to supply the necessary foundations for his logic. Aristotle's language myth became, in a number of variant guises and modifications, one of the longest-lived myths in the Western tradition. It outlasted many other beliefs (in the Olympian gods, the invincibility of the Roman empire, the sun going round the earth, etc.). It survived because it became a myth built into the foundations of Western education and Western law. Its unquestioned acceptance seems to underlie most of the arguments about writing advanced nowadays both by the 'communication technologists' and by their critics. That is one reason why the debate between the two parties seems irresoluble and even futile. But once the myth is rejected, it becomes possible to develop inquiry into the mental effects of writing in quite a different way.

WHY ARISTOTLE NEEDED THE LANGUAGE MYTH

Aristotle's syllogistic is concerned with drawing conclusions from premises. Conclusions and premises are alike identified by citing sentences—Greek sentences in Aristotle's case. The reasoning captured in the syllogism depends on the way the sentences are—or are perceived to be—related to one another. The conclusion has to be seen as 'following from' the premises. This is the crucial point. If someone understood the individual sentences in question, but could not see how one 'followed from' the others, or saw that they were connected but could not decide which one was the conclusion, then such a person would have failed to grasp the syllogistic relationship, i.e. the structure of the syllogism. And pro tanto such a person would be failing to manifest the kind of intelligence that Aristotle expects of a rational mind.
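What 'following from' amounts to here can be exhibited in modern formal dress, which Aristotle of course did not have. The following is a minimal sketch in the Lean proof assistant; the names Person, Man, Mortal and socrates are hypothetical labels introduced purely for this illustration, not anything in Aristotle's text:

```lean
-- A minimal sketch (modern apparatus, not Aristotle's): the classic
-- syllogism rendered with typed predicates and explicit hypotheses.
variable (Person : Type) (Man Mortal : Person → Prop) (socrates : Person)

-- "All men are mortal; Socrates is a man; therefore Socrates is mortal."
-- The conclusion 'follows from' premises h₁ and h₂ by applying h₁ to
-- Socrates.
example (h₁ : ∀ x, Man x → Mortal x) (h₂ : Man socrates) :
    Mortal socrates :=
  h₁ socrates h₂
```

Note that the sketch already presupposes everything that is at issue in what follows: a fixed domain, fixed interpretations of the terms, and a shared understanding of the quantifier and the conditional.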
That case would be roughly parallel to showing someone three coins of different denominations—say two skudos, three skudos and five skudos—and finding that this person cannot tell you which of the three coins has a value equal to the sum of the other two. Such a person clearly fails to understand the simple arithmetic of the problem, i.e. the relations between the numerical values of the coins. Similarly, someone who cannot see, given the three propositions All men are mortal, Socrates is a man and Socrates is mortal, which of these follows from the other two, does not understand something important about the relations between the semantic values of these sentences.

If the syllogism is to stand as the paradigm case of rational thought, then Aristotle must explain what guarantees that the conclusion does invariably 'follow from' the premises. It will not do to say 'That is intuitively obvious'. If one appealed without further ado to intuition, and intuition were infallible, there would be no need for logicians, and Aristotle's logical analyses would be pointless. So where is that guarantee to come from?

There are various possibilities. One would be that the validity of the syllogism rests on the way the world is. In other words, every syllogism mirrors some particular aspect of the structure of the universe (e.g. the relationship between Socrates and the rest of humanity, or the relationship between men and animals). The conclusion follows from the premises simply because that is how the facts of the matter stand in reality. But to advance this as the guarantee underwriting each and every syllogism is tantamount to advancing a claim to omniscience. Much of the universe remains unexplored. Furthermore, any claim to omniscience runs counter to the spirit of inquiry promoted in the Academy. Its hero, Socrates, was constantly proclaiming his own ignorance.

Another possibility would be that the guarantee resides in the way the human mind works. But that will not do for Aristotle, since he wants to ensure that the structure of the syllogism guarantees preservation of truth as between premises and conclusion. Now truth—in Aristotle's philosophy—is not just a relationship between ideas, but between propositions and facts. So if the syllogism were underwritten solely by the workings of the human mind, that would leave open the unwelcome possibility that syllogistic conclusions reflected nothing more than the limitations of human thinking.

Once these two possibilities are set aside, there remains a third possibility, which consists in looking to language for the missing guarantee. This is the option that Aristotle chooses. Evidently, it is not ideal. But again we are brought back to the relationship between dialectic and rhetoric, and recognizing that the day-to-day context for the whole exercise is debating with one's opponents. This is where Aristotle needs the language myth. He needs to found the syllogism not on a claim to know more about the universe than his opponent does, nor on a claim to have a better understanding of the human mind, but on a claim that fruitful debate rests on speaking the same language as one's opponent. Here the crucial element is 'the same language'.
If one Greek is debating with another educated Greek, this sounds plausible enough. ('Do we not both speak Greek? If not, what language are you speaking?') But even this down-to-earth 'commonsense' assumption will not quite do the trick. Speaking 'the same language', for Aristotle the logician, has to be construed in such a way that the language (its vocabulary and constructions) constitutes a fixed code for the formulation of propositions. There must be no room for your opponent to wriggle out of an awkward spot by claiming that such-and-such a sentence means something different from what you are taking it to mean. The bottom line must always be: 'If you are speaking Greek, then this follows from that'. ('And, of course, if you are not speaking Greek, then you cannot possibly understand what I am saying.') The syllogism as a dialectical weapon thus comes to depend on a notion of 'the same language' where it is taken for granted, as the opening section of De Interpretatione makes perfectly clear, that speaking the same language rests on people accepting the same external world and having the same perceptions of the external world. That way, there is no risk that the syllogism may 'leak'. All verbal joints have been sealed. Having eliminated the risk of leakage, there is no further room for manoeuvre within that structure. What we are dealing with, in the end, is a way of limiting the moves that those engaged in debate can make, under pain of risking the objection that 'you can't say that' (sc. 'in Greek').

LOGIC AND THE LETTER

Many before Aristotle had recognized the importance of formulating and presenting arguments. Rhetoric was acknowledged as a specialized subject, the proper province of systematic exposition in detailed treatises. Reasoning as such was not. Although Socrates and Plato were both brilliant exponents of debating tactics, 'What is reason?' (unlike 'What is justice?', 'What is truth?', etc.) is not a question that Socrates ever tackles, and Plato devotes no dialogue to it (although he devotes one—Cratylus—to language). But all three philosophers were engaged in the kind of intellectual inquiry in which inference, whether valid or not, inevitably plays a major part. Aristotle's categories, unlike Kant's, were never designed to provide the basis for any foundational analysis of the workings of the human mind, but, more modestly, as a framework for dissecting your opponent's arguments and constructing sound arguments of your own. This explains why Aristotle devotes most of his attention to certain categories and says very little about others. Space and Time are given surprisingly short shrift if this was ever intended to be a treatise on how human beings conceptualize the world in which they find themselves.

Dialectic, however, was not the only language-related inquiry in which Aristotle broke new expository ground. His Poetics seems to have been the first systematic treatise on that subject, even though Greek poetry and drama must have been commented on and discussed for centuries before Aristotle came to Athens.
A psychological explanation favoured by Ong and others is that Aristotle's generation was the first to have 'interiorized' the technology of writing to the point where literacy automatically sponsored 'abstract thought' about any subject under discussion. But that explanation, couched in such general terms, is far too weak to stand on its own two feet. Is there any concrete evidence that might support it?

When we examine the Organon, it is evident that there is one pivotal piece of thinking that bears the hallmark of a literate mind above all else. That is Aristotle's invention of the seemingly simple device which is crucial to his syllogistic. The device in question is nowadays known as the variable, and without it modern academic logic would not be in business. It has been described as an 'epoch-making device in logical technique' (Kneale and Kneale 1984:61). The idea of allowing a single arbitrary letter of the alphabet to 'stand for' a whole class of items cannot, for obvious reasons, be entertained in a preliterate society. Here we see literacy leaving its indelible mark on Western thinking about reasoning, and about much else besides. For once the legitimacy of that move is admitted in logic, it has important ramifications. While Plato might have conceived of such a use for letters, it is highly doubtful that he would ever have put it into practice as part of a teaching programme, since he held writing to be intrinsically incapable of representing any act of speech. He was therefore unlikely to welcome—far less promote—a form of analysis that involved, in effect, substituting mere arrays of letters for classes (whether of words or of things or of anything else).

The use of alphabetic letters as variables in Aristotelian logic tacitly implies certain assumptions about what kinds of things letters 'are' and what kinds of things words 'are'. We must ask what these assumptions were, and in particular what ensures that the substitution involved in using letters as variables preserves rationality. Aristotle never addresses these questions, although it is difficult to believe that he had never thought about them. Had he dealt with them in his teaching, they would presumably have merited a treatise on 'metalogic', or rather 'meta-analytic'. That treatise was never written because Aristotle thought that the linguistic doctrine he had put in place (i.e. his language myth) forestalled the need for it. But he was wrong about this. For speaking Greek does not on any commonsense interpretation already include accepting letters as variables. That convention is not Greek, but a literate extension of Greek.

Letters of the alphabet had two main functions in Aristotle's day. In elementary education, they were the atomic units of a spelling system, as they are still. In this capacity, they were commonly identified by the Greeks as the written equivalents of the individual sounds of speech, the terms grammata and stoikheia often being applied interchangeably to both the sounds and their corresponding letters.
This led to the crude conception of writing as a 'depiction' of speech (a conception already criticized as meretricious by Plato, but one which survived for centuries in the Western tradition and still does). The other use of alphabetic letters with which Aristotle would have been familiar is their use in one of the two systems of Greek arithmetic notation, the so-called Ionian system, in which each letter of the alphabet has a numerical value. This system was already in use by the 5th century BC, and seems to have been an original Greek invention. According to one authority, the earliest known inscription with Ionian numerals dates from 'not long after 450 BC' (Heath 1921:32), but some speculate that they may have been developed as early as the 8th century BC (Heath 1921:33–4). It is a cumbersome system which has been regarded by later scholars as an obstacle to the development of mathematics: it has even been called 'a fatal mistake'. (For further discussion and the sexagesimal system used in Greek astronomy, see Thomas I. 1939.)

There are two main points to note for present purposes. One is the general point that notational systems can sometimes turn out to be as much a hindrance as a help, even though they superficially make things 'easier'. That should be borne in mind when considering whether Aristotelian alphabetic variables advanced the study of logic or restricted it. The other point is that although the Ionian system gives letters of the alphabet a numerical value independently of the phonetic value derived from ordinary spelling, it does not employ letters in an 'algebraic' fashion, i.e. as standing for an unknown or for any arbitrarily chosen number. So it can hardly have been familiarity with Ionian numerals that gave Aristotle the idea of using letters for his own notational purposes in his syllogistic.

It has been suggested that Aristotle got the idea of alphabetic variables from geometry, where lines and sides of plane figures were designated by terminal points identified by alphabetic letters. (The practice continues to this day. 'Let AB be 3 inches long', etc.) But the identification of lines in a diagram is a quite different kind of use from Aristotle's: this model just does not capture the kind of relationship that logical variables introduce.

The arrival of the Aristotelian variable means that the whole semiological status of the alphabet has suddenly been upgraded. A single letter can now 'stand for' a whole class of items, each of which in turn is also representable by a sequence of letters. In this way the letters of the alphabet, between them, can 'stand for' everything that ever was, or will be, or can be, plus everything that can be said about it. No previous set of signs in human history could do that. It amounts to what might be thought of as the ultimate apotheosis of writing. And it has no counterpart in speech.

As regards the everyday orthographic use of the alphabet, Aristotle never seems to have questioned the common opinion which matched the letters with the supposedly atomic (i.e. indivisible) sequential elements of speech. But the use he made of them in his syllogistic is, manifestly, neither orthographic nor numerical/geometrical.
So what is it? And why does Aristotle never discuss its validity? These are questions that must be recognized as central to any inquiry aiming to elucidate Aristotle's conception of rationality. But they are not broached by modern historians of logic. In William and Martha Kneale's authoritative The Development of Logic, for example, they are not even mentioned.

What Kneale and Kneale do point out, however, is something very relevant to a related issue; namely, that Aristotle's own use of alphabetic writing, when judged by modern standards, is curiously handicapped. It lacks any orthographic device for distinguishing regularly between what philosophers nowadays call 'use' and 'mention'—a device which would have been extremely useful, to put it no higher than that, in the texts of the Organon. Its absence means that in various places it is far from clear exactly what Aristotle is claiming. Furthermore, Greek does not permit the free coinage of abstract nouns. Thus, according to Kneale and Kneale, Aristotle had no way of differentiating between the equivalents of what would be in modern English the predicates 'man' and 'humanity', or between either of these and the word man. In Aristotle's Greek text, a single orthographic form does duty for all three (Kneale and Kneale 1984:27).

The metalinguistic distinction between use and mention is certainly introduced early on to students of logic nowadays. It goes back to the medieval scholastic concept of suppositio materialis: without it, propositions like 'Man is a word with three letters' are easily confused with propositions like 'Man is an animal with two legs', given that spoken usage regularly employs the same form ([man]) to designate the word as to designate the creature. But the fact that Aristotle makes nothing of it might be interpreted in two ways. One would be that he just does not see the need to bother with that detail in his exposition. The other would be that the practical limitations of Greek alphabetic writing in his day (no such devices as italics, inverted commas, etc.) act as blinkers to his recognition of it. In other words, this would not be just impatience or carelessness, but being prevented from seeing it clearly. In the latter case, we would be dealing with a tyranny of the alphabet far more serious than any usually acknowledged by historians of writing.

Nowadays it seems obvious that if you are going to use letters as variables it behoves you to make it clear what they 'stand for'; that is, whether they 'stand for' e.g. words, or ideas, or things. (Because as letters they do none of these automatically, and they can hardly do all of them simultaneously.) Kneale and Kneale take the view that, for whatever reason, Aristotle just did not see the problem, but add:

If, however, he had been able to ask the question, Aristotle would almost certainly have answered that he was dealing with things and not with words. (Kneale and Kneale 1984:27)
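The distinction at issue, between talking about a word and talking about what the word stands for, is nowadays routinely enforced by typing. The following minimal sketch in Lean is offered purely as a modern illustration; the names are hypothetical, and nothing like this apparatus was available to Aristotle:

```lean
-- Mention: a claim about the word itself, here modelled as a string.
def theWord : String := "man"
#eval theWord.length   -- 3: 'Man is a word with three letters'

-- Use: a claim about the creatures the word designates.
variable (Creature : Type) (Man TwoLegged : Creature → Prop)

-- 'Man is an animal with two legs' as a proposition about creatures:
example : Prop := ∀ x, Man x → TwoLegged x
```

Once words and things are assigned different types, the two propositions cannot even be formulated in a way that confuses them; it is exactly this kind of marking that, on Kneale and Kneale's account, Greek orthographic practice lacked.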
According to Kneale and Kneale, the 'clearest proof' that Aristotle was indeed 'dealing with things and not with words' is how in Categories he illustrates what he means by 'being in' something: his example is that grammatical knowledge is 'in' the mind (Categories 1a20–1b9). Here, they say, 'he could scarcely conceive that he was talking of a linguistic expression' (Kneale and Kneale 1984:27). Grammatical knowledge, although to do with words, is not itself a word or combination of words.

Whatever weight one attaches to this argument, it does not clear the mystery up entirely. For we cannot suppose that Aristotle blandly assimilated names to the things named, or spellings to the words (or sounds) represented. On the contrary, at the beginning of De Interpretatione (16a) he makes a point of distinguishing letters from speech sounds, but treats the former as 'symbols' (sumbola) of the latter. In short, it is again his linguistics—or rather the dubious language myth underlying his linguistics—that permits him to proceed as if the distinction did not matter, at least for dialectical purposes. He consequently ignores the gap that the medieval doctrine of suppositio was expressly designed to fill. In so doing he makes a typically scriptist assumption—that there is no possible confusion between written forms themselves and what they are to be taken as 'standing for'. In short, he assumes—as literate people tend to—that writing is a perfectly perspicuous medium of expression. Worse still, he fails to see that the respects in which writing is not perspicuous matter a great deal if you are about to introduce, for technical purposes, a quite new use of written forms (i.e. letters as variables).

No one can take away from Aristotle the twin achievements not only of formalizing logic but simultaneously of extending the boundaries of writing. What commentators, both ancient and modern, have so often lost sight of is the relationship between the twins. It is not sufficient to claim, as many have done (some with monotonous insistence), that Western logic was one historical by-product of Greek literacy. 'Logic itself,' Ong assures us, 'emerges from the technology of writing' (Ong 1982:172). That observation tells us little until we realize exactly by what mechanism that advance was made possible, and on what theoretical assumptions about language it was based. The argument presented in this chapter has attempted to supply what was previously missing. The lacuna has resulted in one stalemate after another in debates about rationality and literacy. That lacuna is not filled but at least becomes fillable once we identify the Achilles' heel of Aristotelian logic as inherent in Aristotle's misconception of literacy, a misconception based on naive and untenable linguistic assumptions concerning the relationship between utterances and their alphabetic notation. For in order to generalize from, say, If all men are mortal and all doctors are men, then all doctors are mortal to If all As are Bs and all Cs are As, then all Cs are Bs, etc. we have to make the tacit assumption that classes of living creatures in the 'real world' (such as men and doctors) stand in relation to one another in a quite different way from that in which, for a literate mind, one alphabetic letter does in relation to a different letter of the same alphabet.
Unless that assumption is made, the conclusion all Cs are Bs is counterintuitive; for we already know as writers and readers that no Cs are Bs. If they were, the whole structure of the alphabet would be different. In fact, the alphabet could not function as an alphabet (for orthographic purposes) unless B, C, and each such alphabetic letter was everywhere and always both different and visually distinct from every other. The paradox at the heart of the way Aristotle elected to formalize deductive inference is that, once letters are pressed into service as variables in the way Aristotle wants, the logic of the syllogism contradicts the logic of the alphabet.

This was the paradox that the medieval doctrine of suppositio materialis was designed to resolve. But it cannot be retrospectively invoked to rescue Aristotle, for Aristotle draws no such distinctions as those made by William of Ockham between material supposition, simple supposition and personal supposition. On the face of it, all Cs are Bs could only plausibly be construed in the first instance as an appropriate alphabetic shorthand designed to capture every self-evidently absurd proposition (e.g. 'All circles are squares', 'All fish are horses'), where what is affirmed is in contradiction with the very terms in which it is expressed.

Some philosophers make the bizarre claim that substituting letters for words makes the (logical) form of the argument clear. For instance, Mueller states: 'Aristotle is the founder of logic because he was the first person to see and state clearly that validity is formal or depends on form' (Mueller 1978:3). Mueller agrees that this assessment hinges crucially on what is meant by 'form', and proposes to explicate it by analyzing an argument considered by Aristotle in Prior Analytics. Mueller formulates the argument in question as follows:

(i) All war against neighbors is bad;
(ii) All war by Athens against Thebes is war against neighbors;
(iii) All war by Athens against Thebes is bad. (Mueller 1978:3)

Then he observes:

In this formulation the argument contains only the words 'all' and 'is' and three "terms," 'war against neighbors,' 'bad,' and 'war by Athens against Thebes.' The form of the argument is shown if we substitute the letters 'B,' 'A,' 'C' respectively for these terms, yielding (i) All B is A; (ii) All C is B; (iii) All C is A. (Mueller 1978:3)
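It is worth spelling out just how much a modern reader silently supplies in order to read Mueller's letters as a valid schema. In contemporary notation the schema is construed as universally quantified conditionals over predicates sharing a common domain. The following minimal sketch in Lean makes those tacit ingredients explicit; the domain α and the predicate reading of A, B and C are stipulations of the sketch, carried nowhere by the letters themselves:

```lean
-- The modern reading of '(i) All B is A; (ii) All C is B; (iii) All C is A':
-- A, B, C are predicates over a common domain α, 'All … is …' is a
-- quantified conditional, and (iii) is derived from (i) and (ii).
variable {α : Type} (A B C : α → Prop)

theorem barbara (h₁ : ∀ x, B x → A x) (h₂ : ∀ x, C x → B x) :
    ∀ x, C x → A x :=
  fun x hc => h₁ x (h₂ x hc)
```

Everything that makes the derivation valid (the typing of the letters, the quantifier, the conditional, the inference step in the last line) has to be imported from outside the letter-substituted formulation itself.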
But, pace Mueller, nothing at all is 'shown' by the mere process of alphabetic substitution: whatever the 'form of the argument' may be, it remains just as opaque as it was in the original formulation. Or just as opaque as it would be if we substituted, let us say,

(i) All 2 is 1
(ii) All 3 is 2
(iii) All 3 is 1.

Here numerals are employed in just the same series of substitutions. All this is equally opaque. For a start, it is not clear whether reversing the order of (i), (ii) and (iii) makes any difference. And whether it makes any difference or not, how are we supposed to know? The mere substitution of letters for words, or of numerals for words, does not answer that question. In order to construe the letter-substituted version as an exhibition of logical form, we need to know in advance how we are intended to understand this curious use of the alphabet. But if we already know that, we need no further explanation. To understand the basis of the substitution process presupposes, precisely, that we have grasped the 'form' (whatever that is). If we have not understood how the substitution process is supposed to work, then we have not grasped the 'form' either. Clearly, Mueller is so familiar with this literate sleight of hand, beloved of logicians, that he cannot see the semiological problems it poses. His demonstration of logical form is actually a case of rendering obscurum per obscurius.

What is still unclear, for reasons discussed above, is why anyone should accept any of the letter-substituted generalizations, (i), (ii) or (iii). We cannot accept them as they stand unless we have already solved the semiological conundrum that they pose. But the two most obvious solutions lead to quite different notions of 'form'. On one interpretation, to say for instance that all C is A is to make a claim about a certain state of affairs obtaining in the world around us. (Which state of affairs that is will depend on a hypothetical relationship deemed to hold between the letters in question and (1) certain word-classes, together with (2) certain classes of non-verbal item designated by words in the specified word-classes.) On a different interpretation, to say that all C is A is simply to make a claim about certain conceptual relations that have to hold for anyone who wishes to make sense of (iii). Roughly, in the former case, this comes down to accepting that—in Aristotle's original example—war against Thebes is de facto a case of war against neighbours (because there is no way of altering the geography that makes Thebans and Athenians neighbours), while in the latter case it comes down to saying that we deceive ourselves if we think it possible to form a general class concept (e.g. 'war against neighbours') without eo ipso including all subclasses of 'neighbours'. (The moment we mentally exclude one of the subclasses we are already dealing with a different concept.) And this psychocentric generalization holds regardless of the geography of Greece, and regardless of whether in fact we have any neighbours at all.
These are two quite different putative bases for the operation of inference. The trouble is that in Prior Analytics Aristotle never makes it clear which of these two possibilities he is endorsing. The actual wording of his text is problematic, but it is usually translated by phrases such as 'A applies to B', or 'A is predicated of B', or 'A belongs to B'. But, given the limitations of Greek alphabetic writing pointed out by Kneale and Kneale, it is by no means obvious what it means to 'apply' A to B, or to 'predicate' A of B, or for A to 'belong to' B. In what sense or senses can one letter 'be applied to' or 'be predicated of', or 'belong to' another? Formulas like 'All A is B' can be juggled around indefinitely in various combinations and permutations, or arranged in tables and diagrams, without ever pursuing those questions. A sceptic might suggest that Aristotle resorts to the curious and counterintuitive device of using letters as variables precisely in order to dodge or postpone such questions. The deviousness of any such strategy makes an alternative explanation more likely; namely, that Aristotle's language myth leaves him with no room for driving a wedge of doubt between the way the world is and the way it is described in Greek and represented via the conventions of Greek writing. This is already reflected in the formalism, to the extent that the alphabetic version of the syllogism does not work unless the letters retain their alphabetic identity as between (i), (ii) and (iii). So A in (i) has to be read as 'the same A' we find in (iii)—regardless of what A is supposed to 'stand for', and regardless of which order we take (i), (ii) and (iii) in. This is never stated by Aristotle, but it is taken for granted that readers recognize letters of the alphabet as retaining their individual identities from one sentence to the next.

The problem bears even on the so-called 'law of identity' when it is stated in the form 'A is A'. For this is a formulation that verges on self-contradiction if it is to be taken as asserting that what A stands for is uniquely itself and nothing else. In that case, how can two A's be needed to make the assertion? As Russell once observed sceptically, 'We used to be told that the Law of Identity was a law of thought, but now it appears that it is a convention of typography' (Russell 1950:354). Worse still, Aristotle wants to allow this same A somehow to include C, or be identical with C, so as eventually to permit the conclusion 'All C is A'. This is trying to have your alphabetic cake and eat it. For letters of the alphabet do not include one another and are not intersubstitutable. If 'A is A' is a typographically conventionalized tautology, it is not a tautology of the same order as 'Everything white is white'. To suppose otherwise is to conflate the (arbitrary) role of a letter in a writing system with the (necessary) identity of colours that do not differ. (It is perfectly possible to have a writing system in which one letter functions as a substitute for others; but white remains white in all visual worlds, including those of the imagination.) To take that conflation as providing a perspicuous illustration of 'logical form' is to pile one scriptist muddle on top of another.
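Russell's quip admits of a modern gloss, though only with apparatus quite foreign to Aristotle: in a contemporary proof assistant the 'law of identity' involves no two letter-tokens each independently 'standing for' something. There is a single bound variable, and the two written occurrences are occurrences of that one variable. A minimal sketch in Lean:

```lean
-- Reflexivity of identity: one variable a, bound once, written twice.
-- Binding, not typography, settles that the two occurrences of 'a'
-- concern the same thing.
example {α : Type} : ∀ a : α, a = a := fun _ => rfl
```

On this reading, what the typography records is sameness of variable-occurrence, not the self-identity of any thing; and that is precisely the distinction which, as argued above, Aristotle's alphabetic practice gave him no means of drawing.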
Thus Aristotle's 'formal' account of rationality holds up only as far as—but no further than—our complicit acceptance of his language myth and his alphabetic variables. But as regards the variables we have little choice. They are presented by Aristotle as a textual fait accompli. Once we question that fait accompli, we are left with no coherent account of rationality at all in the works comprising the Organon. It is, from beginning to end, a complicity between literate minds.
7 Logic and the Tyranny of the Alphabet

CONSTANTS AND VARIABLES

Once the 'all A is B' stage of alphabetic generalization has been reached, it might seem that the obvious next step for the syllogism to take is to select a couple of letters to stand for all and is—the logical 'constants'. If that step is taken, and letter substitutes are devised for all the other logical constants, then a single string of letter-forms can represent each and every one of the various types of proposition that it is possible to formulate. In effect, that is exactly what modern 'symbolic logic' does (although it supplements the alphabet by adding a few new symbols). When that stage is reached, the syllogism seems to have dispensed with words altogether. We have apparently passed beyond language into the realm of 'pure reason', and the alphabet is the tool that has made this momentous transition intellectually possible. Aristotle, after his invention of the variable and the unprecedented systematization of thinking that it facilitated, must have contemplated taking this next step. But in the end he does not. Why not? The key to understanding just how far Aristotle takes his logic, and where it stops, is again to be found in the restrictions imposed upon his thinking by the language myth he has set up as the basis for doing philosophy, in conjunction with his scriptist view of literacy.

NAMES OF SOUNDS

In preliterate Greek culture individual speech sounds had no names. The names alpha, beta, etc. were imported with the alphabet itself: they were the names of the 'Phoenician letters' (as the Greeks commonly called them). It was only subsequently that they also became, by extension, names of corresponding sounds in Greek (for the Greek alphabetic adaptation did not always preserve the original phonetic values—alpha being a case in point). This is what leads to the inconsistency we find in Plato's Cratylus, where the terms grammata and stoikheia are used interchangeably.
By Aristotle's day the letter-names can be pluralized (by use of the accompanying article), even though the form of the name remains invariant. This grammatical development is essential to the argument Aristotle presents in Metaphysics 1086b–1087a25 about particulars and universals. Here he needs to be able to talk about 'all the alphas'. But he does not seem to notice that this now introduces an ambiguity: it is unclear whether this means all the members of a certain class of vowels or all the members of a certain class of letter-forms. Again, his exposition runs up against the difficulty that Greek orthographic practice has no way of differentiating the vowel from the corresponding letter. Nowhere in this discussion of universals does Aristotle seem to acknowledge the incoherence of trying to set up a 'type/token' distinction (as it would nowadays be called) across the two disjoint classes of letters and sounds. There can be no universal 'alpha type' which somehow embraces simultaneously both written visual shapes and audible segments of utterances. One might as well suppose that there were a universal 'animal type', covering both flesh and blood creatures and pictures or statues of these creatures as well. And that, in effect, is exactly what Aristotle rejects in Categories 1a1–5. But he is prevented from seeing this clearly in the case of letters and sounds by his scriptist assumption that at bottom the same entity underlies both.

The reason given by Aristotle for rejecting the universal 'animal type' is that although Greek as it happens has a word (zoon) that can be used both of animals and of inanimate pictures of animals, in the two cases the definitions of the word are different. So the logic of the two relevant discourses (about animals and pictures of animals) does not coincide either. In effect, this is an admission that the whole formalization of the syllogism depends on the availability of definitions. The adoption of a letter as a way of designating a class has to be understood as presupposing that the members of that class are, as a class, identifiable in some way as uniquely different from the class associated with any other letter when the letters are being used 'variably' in any particular set of statements. So Aristotle, or anyone who wants to use Aristotelian variables, has to come up with some reassurance that this requirement can be met. Aristotle's response to this requirement is to invoke a doctrine of definition.

DEFINITIONS, WORDS AND ESSENCES

Definitions are prerequisites for the entire classification set out in the Categories, and hence for the whole of the Organon. Aristotle tackles the question head-on in the opening lines of the Categories: a definition is a 'statement' (logos) of the essence (ousia) of something. But one thing we are never told is what the essence of a letter of the alphabet is. For Aristotle, the essence of an animal is not the essence of a picture of an animal, even if the two are called by the same name. It is, precisely, because the two are called by the same name that it is important for the logician's purposes to insist on separating them.
This is why Aristotle begins straight away—without any preamble, and rather surprisingly, it might seem at first sight—by drawing attention to the lexical phenomenon known to linguists as homonymy. Homonymy is a commonplace phenomenon in all languages; but Aristotle is the first to grasp its relevance to reasoning. Having focussed attention upon it at the beginning of the Categories, he returns to what he calls homonuma on various occasions in Topics and Posterior Analytics. It is relevant to note that homonymy is also discussed in Aristotle's Rhetoric, a treatise which opens with the declaration that 'rhetoric is the counterpart of dialectic' and that both studies are concerned with presenting and countering arguments (Rhetoric 1354a1–6). But in the Rhetoric what is said about homonymy is buried deep in the body of the text (Book II 1401a13–24; Book III 1404b38–9), whereas in the Categories it introduces the whole discussion. Why is this?

The concept of homonymy that Aristotle relies on is an everyday, lay concept, familiar in every linguistic community where different individuals may bear the same name. This had been the case in Greece from time immemorial. (Individuals normally had only one name, the many Greeks called Apollodorus or Callimachus being distinguished either as 'son of so-and-so' or by means of a patronymic.) In Aristotle's day, no metalinguistic distinction was drawn between proper names and common nouns, the term onoma covering both. (It also did duty as a general term for 'word'.) This may partly explain Aristotle's apparent failure to see that the criteria applicable to homonymy in the case of proper names and individuals cannot be applied unproblematically to homonymy in the case of common nouns and classes. It is clear enough in principle what information is being sought when someone asks whether the person Jones mentioned on page 21 is the same as the person Jones mentioned on page 54; and also clear enough what kind of information would enable a reader to decide whether these were different individuals or not. But the case of common nouns is more complicated.

The nearest Aristotle comes to dealing with the homonymy problem is in Book I of his Topics. But he never quite solves it, and his failure points again to a basic problem in Aristotelian linguistics. In terms of the linguistic positions discussed in Plato's Cratylus, Aristotle is certainly a 'conventionalist' as opposed to a 'naturalist'. He does not believe that Nature determines what names (onomata) things should have, or supplies criteria making some names appropriate (i.e. fitting for the thing named) and others not. In short, he subscribes to what would nowadays be called 'the arbitrariness of the linguistic sign'. At first sight the existence of homonyms poses no threat to the doctrine of arbitrariness: there is no natural 'reason' why the same name should—or should not—be given by convention to two quite different individuals or things or classes thereof. Nor is the existence of synonyms a problem: nothing prevents the same person or thing from having two (or more) different names.
having two (or more) different names. Linguistic convention appears to be tolerant of both states of affairs. There is no overriding linguistic principle that decrees a universal one-to-one correspondence between a name and what is thereby named. (Later theorists in the Western tradition regarded this as a ‘defect’ of ordinary language, and some formal logicians still do.) Where common nouns are concerned, however, homonymy is not always easy to distinguish from polysemy. Or, to put the problem in the form in which it is most frequently raised, it is not always obvious how many different words we are dealing with. Some cases seem intuitively clear. The word tap as in water tap seems to be a different word from tap as in tap on the shoulder. But in other cases doubts may arise. Is bed as in feather bed the same word as bed in river bed? Is there just one English verb to bear? Or at least five, as in bear a burden, bear a grudge, bear a name, bear fruit and bear children? (Burdens, grudges, names, fruit and children are all different things. But it is by no means self-evident whether that warrants assigning different meanings to bear.)

In Topics Aristotle recommends a number of criteria for dealing with doubtful cases, but they are not very convincing. He apparently holds that if a word has two opposites—e.g. the adjective light being opposed both to heavy (in weight) and to dark (in colour)—that shows it to be homonymous. But, given the arbitrariness of the linguistic sign, this conclusion does not automatically follow. There is no reason why ‘lighter than’ should not be considered a single abstract relation that distinguishes objects ranked on a scale of weight in the same way as colours ranked on a chromatic scale. Whether people do or do not think of being ‘lighter than’ in this way cannot, in any case, be settled without more ado by pointing to the existence of the words heavy and dark. Aristotle’s appeal to ‘opposites’ begs the question. Nor can it be extended as a general criterion for homonymy, for many words have no ‘opposites’, as Aristotle concedes. But this leaves him in an awkward position as regards definitions. For if there are no general criteria for homonymy, every proposed definition (logos) is potentially ambiguous.

We now begin to see why the problem of homonymy is brought up straight away in the Organon but is virtually ignored in the Rhetoric. Qua theorist of rhetoric, Aristotle might perhaps have been prepared to entertain the notion that definitions, in the end, are just attempts to persuade other people how to use certain words. But neither Aristotle the logician nor Aristotle the metaphysician can afford to admit that. So we find Aristotle maintaining through thick and thin that ‘correct’ definitions can be given. (It is interesting that in his Rhetoric, although his presentation makes abundant use of definitions (e.g. of happiness), the topic of definition itself is not discussed as a rhetorical device: the omission suggests an avoidance of embarrassment.)

Be that as it may, before the work of syllogistic reasoning can begin there seems to be a prior obligation on the logician to establish that the terms in question are free of homonymy. It will not do to say that all As are
Bs, etc. if the variables are allowed to range over homonymous terms. The elimination of homonymy has to be a practical proposal if the syllogism is to stand on its own two feet. Nowhere does Aristotle attempt to face up to this. Instead, he tacitly assumes that the requirement can somehow be met.

The underlying conflict here is between Aristotle’s ‘conventionalist’ view of names and his reocentric semantics. The language myth underlying his philosophy requires him to treat reasoning as a process that maintains contact with the ‘real world’, i.e. the world in which debates with practical consequences are conducted, as in Greek politics. Reasoning cannot be just a verbal game or mental calculus. It must be in some sense ‘truth-preserving’ whenever there is truth to be preserved, i.e. from true premises to true conclusions. The tension comes to a head in his account of what a definition is, a statement of ‘essence’.

According to Richard Robinson, Aristotle’s best attempt to explain what essence is comes in Chapters 4–6 of Book 7 of his Metaphysics. At the outset Aristotle declares that the essence of each thing is what it is said to be in virtue of itself (Metaphysics 1029b14). Then straight away he adds the puzzling qualification: ‘But not the whole of this is the essence of a thing’. The example he takes is even more puzzling: a white surface. For, he maintains, it is not the essence of a surface to be white. Here it seems that the original question has already been forgotten; for while it may be true that not all surfaces are white, nevertheless this one is white, and this one is the thing we have in front of us as the thing whose essence we are looking for. At least, however, the white surface does have an essence, even if it is not quite what we might have thought it was ‘in virtue of itself’ (supposing we can make sense of that requirement). A few lines later, on the other hand, Aristotle is asking whether a cloak has an essence at all, and replies that probably it has not. But why he thinks this is obscure. He concludes that ‘nothing which is not a species of a genus will have an essence’ (Metaphysics 1030a11–12). Robinson comments:

In these bewildering chapters we find Aristotle reaching the mysterious conclusions that some things have an essence and others do not, and that of the things that do have an essence some are the same as their essence and others are not. (Robinson 1954:154)

Robinson points out that in Aristotle’s exposition the notions of ‘definition’ and ‘essence’ simply chase each other round in a circle, and concludes that ‘there is no such thing as essence in his sense of the word’ (Robinson 1954:154). Robinson is doubtless right to dismiss Aristotle’s concept of essence as incoherent. But that does not explain where it came from. The incoherence in question has all the hallmarks of a literate confusion. From
the start, what Aristotle takes for granted is that an essence is what is defined by a definition. A definition is given in a form of words. No form of words is self-defining. So the form of words has to capture something else other than its own image. This something else, according to one way of reading Aristotle, is—precisely—the essence of the thing to be defined. But this is where the muddle takes off. For what has not been explained is how the essence can be both (i) what makes the form of words interpretable as a definition (i.e. supplies its meaning) and, at the same time, (ii) whatever it is that makes the thing what it ‘essentially’ is, i.e. ‘in itself’.

The gap between (i) and (ii) tends to go unnoticed by the literate mind. For writing encourages the illusion of being able to deal with thought, or the abstractions involved in thinking, ‘directly’, i.e. at a level where the abstractions in question can dispense with any communicational anchorage other than that of the signs visually present before the reader. The very visibility of the written forms, together with their relative permanence—by contrast with the invisibility and ephemerality of the spoken word—combine to bypass doubts about the abstractions themselves; for these latter are being given concrete actuality in front of the reader’s eyes. The existence of the written forms in the here-and-now of the text supplies all the credentials needed for embarking on discussion of the abstractions. Seeing is believing. What is in fact being bypassed is speech; but consciousness of bypassing speech is easily construed subjectively by the reader as bypassing words altogether and thus gaining immediate access to the processes of thought. One does not have to go through the time-consuming physiological transference of putting words back into sounds, and then translating the sounds into thoughts. With practice, the meaning can be ‘read straight off’ from the marks, as schoolchildren soon discover for themselves.

Similarly, familiarity with writing promotes the notion that individual words have a first existence as isolable linguistic units, before there is any question of combining them in phrases or sentences, and that in this pre-combinatorial existence they already have a meaning of their own. Goody (Goody 1977:74–111) was the first modern theorist to stress the importance of the fact that only writing makes it possible to list words as individual items, to rearrange them in ordered sequences, to compare them one with another, and in short to give them the status of physical objects on a par with drawings, tokens and other collectable things. The compilation of word lists is a practice that goes back to the scribes of ancient Babylon (Goody 1977:83, Kramer 1959:1, Kramer 1963:232–6): in oral communities no corresponding practice exists. Such lists were the earliest form of grammar; or, at the very least, the immediate precursor of the grammar book. From the same milieu came the earliest known dictionaries, essentially the listing principle applied to the vocabulary as a whole (Kramer 1963:233). Speech does not favour this way of dealing with words at all, since in preliterate communities there is no process by which a spoken word
may arbitrarily be lifted out of its oral sequence and treated as an independent item. Writing provides the earliest technique that makes anything like this possible.

It is no surprise therefore that Aristotle, addressing the educated Athenians of his day, takes as the point of departure in the Organon the identification of what he calls ‘uncombined’ words, already having meanings in that supposedly pristine (albeit completely abstract) state. This characteristically literate assumption is simultaneously the basis of Greek grammar (the ‘parts of speech’) and of syllogistic. The contribution of literacy to word-identification has left an unmistakable mark on linguistic thought ever since. Most educated people nowadays find the task of listening to what another person says and repeating every fifth word incredibly difficult; whereas scanning a printed page and copying down every fifth word or every fifth letter is ridiculously easy. Why is this? It would be a facile answer to say that in writing, and printing especially, letters and words are already represented as discrete units. That they are so represented at all reflects the fact that familiarity with writing is familiarity with the process of breaking down sequences deliberately into their component parts, whereas familiarity with speech is familiarity with rattling off sequences and not worrying too much about blurring the boundaries that separate one unit from the next. (In Aristotle’s day there was no such thing as ‘joined up’ writing.) Writing, in brief, encourages attention to separating one segmental unit from the next, whereas speech does not. The Greek identification of stoikheia in the uttered sequence is nothing more than a projection of the alphabet from the visual into the aural domain, where it does not belong. An utterance is not a stringing together of previously separate consonants and vowels; but that process is exactly what we carry out when writing it down in alphabetic script.

Greek phonetics was the weakest branch of Greek language-study, as modern historians of linguistics have noted (Robins 1997:29–31), and part of the reason is that preoccupation with the written word diverted attention from the physics and physiology of speech. But it is also true that the Greek adoption of the alphabet was an obstacle to recognizing the facts about Greek speech.

In the Classical period an improper analogy was accepted between the relation of discrete letters to a text and that of allegedly discrete sounds to a spoken utterance. This fallacy was not challenged, and it appears explicitly at the end of the classical period in Priscian, writing on Latin: ‘Just as atoms come together and produce every corporeal thing, so likewise do speech sounds compose articulate speech as it were some bodily entity.’ The relations are otherwise: letters actually do compose written sentences; speech may be analysed into speech sounds. (Robins 1997:30)

To which might be added: ‘And very different analyses may be given if different analysts are not obsessed from the start with matching up oral units with orthographic units.’
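The asymmetry is easily exhibited. Here is a minimal sketch of our own (the sample sentence and the slicing choices are purely illustrative, not Harris’s): over written text the ‘every fifth word’ and ‘every fifth letter’ exercises are one-liners, precisely because the script has already segmented the stream into discrete, listable units. No comparable operation can be run over the unsegmented acoustic stream of speech.

```python
# Over writing, segmental operations come for free: the script has
# already broken the stream into discrete units.
text = ("an utterance is not a stringing together of "
        "previously separate consonants and vowels")

words = text.split()              # segmentation given by the spaces
print(words[4::5])                # every fifth word: ['a', 'separate']

letters = text.replace(" ", "")   # the letters, as discrete characters
print(letters[4::5])              # every fifth letter
```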
Against this background, Aristotle’s mysterious ‘essences’ can be seen as a further scriptist extension of alphabetic thinking about language and its correlates in the external world. The individual letters of which the written form is composed are meaningless. Meaning emerges only when the letters are combined in certain ways (in parallel with the flow of sounds in speech); but even then not every combination is meaningful (De Interpretatione 16b27ff.). So somehow meaning in speaking and writing seems to be a product of the combination of smaller meaningless units. It is this meaning-conferring combination which gives words their status as linguistic items and enables them to be brought into correlation with non-linguistic things as ‘names’ of the latter (Socrates as the name of Socrates, etc.). Once this picture of word-thing correlation is in place, it is tempting to ask what it is in the ‘thing’ (e.g. in Socrates) that corresponds to whatever it is in the combination of sounds or letters which makes that particular combination (e.g. Socrates) meaningful. ‘Essence’ is Aristotle’s answer. Socrates the individual is a meaningful combination of elements, none of which can function in isolation (an arm, a leg, a tooth, a muscle), but which together form a higher-order unit which is Socrates-as-he-is-in-himself. Socrates is not something extra added to the combination, but an emergent form which that combination somehow takes. There has to be an essence, even if it is hard to pin down, because otherwise it would have to be supposed that every single feature of the whole were equally relevant to the identity of Socrates, which seems implausible. Socrates minus one tooth is still Socrates. How do we know? Because, for one thing, Socrates is still his name.

This, again, is what we find to be the case with the written form of the name Socrates. Provided all the letters are in place in the right order, it does not matter whether the name appears in ink or incised in stone, the characters written large or small: it is still recognizable as that unique combination of units which sets it apart from all other names. So there appears to be here a satisfying structural parallel between the name and what it names. The essence of Socrates, elusive though it may prove to be, corresponds to whatever it is—equally elusive—that makes the combination of letters Socrates a meaningful combination, setting it apart from all other such combinations as well as from meaningless ones.

That parallel—between essences and meanings—is what underlies Aristotle’s conception of definitions as stating or designating essences. But it is interesting to note that many philosophers who have seen what is dubious about Aristotelian essences do not seem to have noticed what is dubious about Aristotelian meanings; or that each member of this questionable Aristotelian pair is the exact counterpart of the other. They have failed to see, in other words, that the whole notion of definition and ‘essences’ that we find in Aristotle is no more than an extension of scriptist assumptions about the relationship between speech and writing.
PROPOSITIONS

The literal-mindedness of Aristotle the logician is no less evident in his conception of the proposition, which is the foundation of his syllogistic. Like the Aristotelian word, it is a blatant decontextualization. It stands alone, self-contained, prior to any deployment in discourse. Like the word, it is a composite; but, unlike the word, its components already have meanings independently. And it is this combination of meaningful elements previously uncombined that, according to Aristotle, makes a proposition the potential bearer of a truth value, whereas the uncombined elements themselves do not have any truth potential at all. Furthermore, just as not any old combination of stoikheia or grammata makes a word, not any old combination of available meaningful elements makes a proposition either.

Of things that are said, some involve combination while others are said without combination. Examples of those involving combination are: man runs, man wins; and of those without combination: man, ox, runs, wins. (Categories 1a16–19)

For every affirmation, it seems, is either true or false; but of things said without any combination none is either true or false (e.g. man, white, runs, wins). (Categories 2a8–10)

The proposition, in short, has its own compositional mystique which matches that of the word. Out of the combination—lo and behold!—comes forth meaning. It was not there before. The trick is spectacular, but the logician is not saying how it was done. For a literate audience, there is no need to: the trick is one they perform themselves every day.

Sceptics may say it does not matter how it was done—that is, how the illusion was created. It cannot fool anybody who thinks carefully about it, because it just cannot be the case that a mere combination in some way generates meaning of its own accord. Even if we grant that in ‘man runs’, ‘man’ appears in the subject position and ‘runs’ in the predicate position, the combination ‘man runs’ does not thereby acquire the kind of meaningfulness that makes it susceptible of being judged true or false until someone actually uses it in discourse (either spoken or written). And then the judgment pertains not to the combination as such but to its use in those circumstances. So the whole business of combinations and lack of combinations is a red herring. To suppose otherwise is to confuse the basic distinction between what in modern linguistics are called langue and parole. But Aristotle the logician shows no interest in drawing distinctions of that kind: propositions are identified just by their associated word-forms, whether in-use or out-of-use.
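Aristotle’s combinatorial claim is easy to model, and modelling it shows exactly where the objection above bites. The sketch below is ours, not Aristotle’s (the ‘situation’ dictionary and the names are invented for illustration): a bare term gets no truth value, and a subject-predicate combination gets one only relative to a situation of evaluation supplied from outside the combination.

```python
# Toy model of the Categories passage: terms vs. combinations.
# A bare term is not truth-apt; a combination can be evaluated,
# but only against a situation supplied from outside it.
situation = {"man": {"Socrates"}, "runs": {"Socrates"}, "wins": set()}

def combine(subject, predicate):
    """'man runs', read here as: everything in the subject class
    falls in the predicate class, in a given situation."""
    return lambda sit: sit[subject] <= sit[predicate]

print(combine("man", "runs")(situation))   # True, in this situation
print(combine("man", "wins")(situation))   # False, in this situation
# The uncombined term "man" leaves nothing to evaluate: no
# combination, no truth value; and no situation, no judgment.
```

The only moral the sketch is meant to carry is that the final argument has to come from somewhere; the combination alone settles nothing.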
Moreover, for Aristotle’s purposes, either the spoken form or the written form will do. Every proposition has both. Thus, on the basis of the literate mystique of what happens when letters are combined in the right way, we are led step by step into accepting the proposition as an independent abstraction, separate from the sentence; and an abstraction of which the meaning is the significance common to the spoken words-in-combination and the corresponding written words-in-combination.

It is into this wondrous array of abstractions that the alphabetic letter is introduced as a variable, first as ranging over single terms, and later as ranging over whole propositions. The first of these manoeuvres involves another literate illusion. For what the variable ‘stands for’ is not, as might first appear, an arbitrary class of ‘uncombined’ words (items of the kind we are nowadays accustomed to find listed in alphabetical order in a modern dictionary), but a class of words-as-combined-in-propositions. When the logician discusses whether or not ‘all As are Bs’—and, if so, what follows—the variables themselves (A, B) have to be construed as items-in-combination. For that, by Aristotle’s own account, is the only way ‘all As are Bs’ could have a truth value. In short, the alphabetic variable is taken as ranging not over a class of uncombined items but over a class of combinatorial uses. This is something much more mysterious. Uses are not on a par with things used. The existence of a use is not a fact reported by the senses. Are we even in a position to set about identifying members of a class of uses if we are not told into which combinations the items in question are deemed to enter? (The answer could hardly be ‘any and every combination possible’ because presumably at least some combinations are nonsensical or otherwise disqualified.) In any case, the open-endedness of the proposed range is, to say the least, breath-taking; just as it would be if in mechanics, for instance, a theorist were to propose a variable allegedly ranging over ‘all ways of using any simple machine in combination with any of the other simple machines’. What kind of mechanical class is it, exactly, that we are being asked to envisage? And how do we know whether using a lever in combination with a wheel that is smaller in diameter than the length of the lever, or in combination with a wheel that has a larger diameter than the length of the lever, count as two different ‘ways of using it’ or the same way?

A similar kind of puzzle arises when the variables range over propositions. The Aristotelian proposition is another dubious decontextualization. It is a kind of abstract pre-verbal version of the sentence, in which context is ignored. Aristotle is not interested, for logical purposes, in the many ways a sentence can be used, depending upon the communication situation in question. Someone who says ‘The earth is round’ may be trying to teach a foreigner something about the English word round; or giving a geography lesson to a class of small children; or contradicting the dogmatic view expressed a moment ago by a flat-earthist. But all these and many more possible uses of a sentence are of no concern to Aristotle. Aristotle’s proposition is a scriptist abstraction which elides the differences between
utterances which place emphasis on the words earth, is and round (as in ‘The _earth_ is round’, ‘The earth _is_ round’ and ‘The earth is _round_’). The alphabetic transcription of these three utterances collapses all three into a single sequence of letters. Even the introduction of sophisticated scriptorial devices such as the different fonts just used in the examples quoted leaves the sequence of letters unchanged. This invariance cannot do other than create in the reader a powerful impression that the sequence of letters itself somehow captures the ‘essence’ of the proposition, without the frills added by the variation of italics. What other way is there of construing it? Once we see what is actually going on in a case like this, we see something important: alphabetic writing brings its own scriptist metaphysics along with it.

It is at this point—and just in time, when awkward doubts are being raised about the whole notion of intuitively identifiable classes of use—that Aristotle rescues his project by invoking the reocentric semantics of ‘names’ and ‘things named’. We are invited to treat the relevant legitimate uses of a noun as being, at least in the first instance, uses of the word as a name identifying the corresponding thing or things under discussion. (So we need only bother with those combinatorial uses of Socrates where that form is being used to identify Socrates: any other uses may safely be ignored, including—it need hardly be added—being used to identify the name of the sage in question, or being used as a blackboard example.) Similarly with propositions, their only ‘use’ recognized in the syllogism is their potential use to state a truth or a falsehood, and it is postulated that in this use they must state either one or the other. Whether a ‘use’ thus circumscribed can ever be isolated in the situated actuality of discourse is a question that is not allowed to arise. (It is certainly difficult to suggest an example of a sentence that owes its existence to having that use as its sole communicational function. Unless it be the very sentence that the logician produces for the sole purpose of identifying ‘the proposition’. But that is a case of the dog chasing its own tail.)

LOCKE ON REASON

In order to appreciate to what extent Aristotle’s conception of rationality is dictated by his reocentric view of language (and not vice versa), it suffices to compare Aristotle’s account with Locke’s.

But God has not been so sparing to men to make them barely two-legged creatures, and left it to Aristotle to make them rational, i.e. those few of them that he could get so to examine the grounds of syllogisms, as to see that, in above three score ways that three propositions may be laid together, there are but about fourteen wherein one may be sure that the conclusion is right. (Locke 1706: IV.xviii.4)

In the chapter on ‘Reason’ in his Essay Concerning Human Understanding, Locke is careful to pay due respect to Aristotle as ‘one of the greatest men among the ancients’. But this ritual homage hardly conceals the sarcasm of the reference to God ‘leaving it to Aristotle’ to make men rational. It is clear that in Locke’s view Aristotle had got it wrong.
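Locke’s arithmetic can in fact be checked mechanically. The sketch below is ours, not Locke’s or Aristotle’s: it brute-forces every figure-and-mood combination of the traditional syllogism against all ways of marking the eight Venn regions of three terms as occupied or empty (which suffices, since categorical statements never say more than which regions are inhabited). Read without existential import, 15 forms survive; the traditional tally, which lets a universal premise imply a non-empty term, is 24. Locke’s ‘about fourteen’ falls in that neighbourhood.

```python
from itertools import product

# Exhaustive audit of the syllogistic (an illustrative sketch).
# A 'model' marks each of the 8 Venn regions of the terms S, M, P
# as occupied or empty.
REGIONS = list(product([False, True], repeat=3))    # (in_S, in_M, in_P)

def holds(form, x, y, occupied):
    if form == 'a':   # All x are y
        return all(r[y] for r in occupied if r[x])
    if form == 'e':   # No x is y
        return not any(r[x] and r[y] for r in occupied)
    if form == 'i':   # Some x is y
        return any(r[x] and r[y] for r in occupied)
    return any(r[x] and not r[y] for r in occupied)  # 'o': Some x is not y

S, M, P = 0, 1, 2
FIGURES = {1: ((M, P), (S, M)), 2: ((P, M), (S, M)),
           3: ((M, P), (M, S)), 4: ((P, M), (M, S))}

valid = []
for fig, ((x1, y1), (x2, y2)) in FIGURES.items():
    for f1, f2, fc in product('aeio', repeat=3):
        refuted = False
        for mask in range(256):          # every occupancy pattern
            occ = [REGIONS[i] for i in range(8) if mask >> i & 1]
            if (holds(f1, x1, y1, occ) and holds(f2, x2, y2, occ)
                    and not holds(fc, S, P, occ)):
                refuted = True
                break
        if not refuted:
            valid.append((fig, f1 + f2 + fc))

print(len(valid))           # 15 (no existential import; 24 with it)
print((1, 'aaa') in valid)  # Barbara survives: True
```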
The reason for this clash of opinions is that Locke does not believe in reocentric semantics. He has a different version of the language myth: words are not ‘given’ as names of things, but stand for ideas in the mind of the speaker. Whether words correlate with the same things for different speakers is therefore, for Locke, an open question. It is a common mistake men make to suppose that their words stand for ‘the reality of things’, and a mistake fraught with all kinds of undesirable consequences:

it is a perverting the use of words, and brings unavoidable obscurity and confusion into their signification, whenever we make them stand for anything but those ideas we have in our own minds. (Locke 1706: III.ii.5)

But that, precisely, is—for Locke—Aristotle’s mistake. Aristotle believes that the world ‘really is’ the same for all observers (De Interpretatione 16a). Two thinkers, however ‘literal-minded’, who disagree on an issue as fundamental as the relationship between words and reality will obviously disagree on the nature of human rationality, given the role that words play in the articulation of beliefs and the conduct of debate about them. For anyone who takes Locke’s position, a naive trust in the reliability of the correspondence between language and reality means a failure to grasp the fact that reason is an intrinsic human faculty (Locke 1706: IV.xvii.1), and as such operates in a way that is prior to any ‘formal’ (i.e. verbal) demonstration of connexion between beliefs.

Tell a country gentlewoman that the wind is south-west, and the weather lowering, and like to rain, and she will easily understand it is not safe for her to go abroad thin clad in such a day, after a fever: she clearly sees the probable connexion of all these, viz. south-west wind, and clouds, rain, wetting, taking cold, relapse, and danger of death, without tying them together in those artificial and cumbersome fetters of several syllogisms, that clog and hinder the mind, which proceeds from one part to another quicker and clearer without them. (Locke 1706: IV.xvii.4)

In other words, the Aristotelian syllogism cannot be considered a device for making explicit the hidden reasoning that the gentlewoman engages in below the level of consciousness. It does not lay bare the concealed workings of the human mind. All the syllogism is good for—and all that can be claimed for it, on a Lockean view—is that it does provide one way of
summarizing and generalizing (e.g. for pedagogic purposes) certain relations between propositions. Aristotle invented a useful technique relevant to exercises in dialectic. His followers mistook it for an analysis of human rationality, an organon necessary for the development of all branches of knowledge.

RATIONALITY AND LANGUAGE

Perhaps Aristotle’s most fundamental contribution to the study of rationality was the basic idea that there is no such thing as a rational belief per se. We cannot, by contemplation of the proposition that the earth is round, expect it somehow to emerge from within the proposition itself whether or not it would be rational to believe it. And that is because Aristotle insists on divorcing the proposition from any context. In isolation, the proposition that the earth is round is neither rational nor irrational. The question of rationality—as distinct from truth and falsity—arises only when different, equally isolated propositions are brought together and seen as connected in some way. Rationality is in this sense relational or relativistic. It is only by comparison that we can determine that one proposition ‘follows from’ another or others. Rationality involves grasping a whole network of actual and potential beliefs. In this sense, Aristotle the logician was the original structuralist.

Aristotle never explicitly discusses an idea which he takes for granted from start to finish in the Organon. This is the idea that rationality involves mastery of a language: that creatures without a language are automatically incapable of rational thought. Mastery of a language would seem to be required for Aristotelian rationality because without a language it is impossible to compare one proposition with another, or indeed to distinguish one from another. It is difficult to see how the thought that the earth is round can be identified as a proposition except by putting it into words. It does not even seem to be a plausible candidate for belief until it is put into words. We do not need to go into the question of whether this is so for all beliefs, but as far as Aristotle is concerned it evidently holds for most human beliefs, and verbal discourse is typically the way in which those beliefs are expressed and sustained or rejected.

The language-dependence of reasoning was not always accepted by later logicians. The authors of the influential Port-Royal logic in the 17th century expressly set out their initial account of the subject in terms of relations between ideas (idées), not words. They distinguish four basic operations of the mind that are independent of words and underlie the whole of human reasoning. They regard words as an adjunct to reasoning, but not an essential part of it:

if our reflections on our thoughts never concerned others than ourselves, it would suffice to consider them in themselves, without clothing them in words, or any other signs: but because we cannot get others to
understand our thoughts, except by accompanying them with external signs: and this habit (accoutumance) is so strong that when we think alone, things do not present themselves to the mind except with the words with which we are accustomed to clothe them when speaking to others, it is necessary in Logic to consider ideas joined to words, and words joined to ideas. (Arnauld and Nicole 1683: Intr.)

Words, furthermore, they consider to be worth discussing as one of the sources of confused and defective reasoning.

John Stuart Mill in his System of Logic goes much further than this and declares a ‘theory of names’ to be ‘a necessary part of Logic’. He defines a proposition as ‘discourse, in which something is affirmed or denied of something’ (Mill 1872: I.i.2). Mill claims:

Whatever can be an object of belief, or even of disbelief, must, when put into words, assume the form of a proposition. All truth and all error lie in propositions. (Mill 1872: I.i.2)

Even more explicitly, in his Elements of Logic, Archbishop Whately declines to discuss what he calls the ‘metaphysical question’ of whether ‘any process of reasoning can take place, in the mind, without any employment of language, orally or mentally’, but simply announces that even if it can ‘such a process does not come within the province of the science here treated of’ (Whately 1840:60). So wordless thought is peremptorily expelled from the province of logic. Here the tide of opinion seems to have turned finally against postulating ‘pure’ or pre-linguistic thought as the basis of reason.

Evidently, for those logicians who regard language as intrinsic to reasoning, it comes to be of the greatest importance to adopt the right linguistic assumptions from the start. An instructive example of the way linguistic analysis may be relevant to logical analysis is Mill’s dispute with Alexander Bain over the question of whether singular propositions belong to the syllogism at all. Here we see what Mill had in mind when he pronounced a theory of names to be an essential part of logic. Bain had argued that in concluding that ‘one poor man is wise’ from ‘Socrates is wise’ and ‘Socrates is poor’ there is no genuine inference, since the latter two propositions merely select from the ‘aggregate of properties making up the whole, Socrates’. Mill rejects this on the ground that Bain’s argument

rests upon the supposition that the name Socrates has a meaning; that man, wise, and poor, are parts of this meaning; and that by predicating them of Socrates we convey no information. (Mill 1872: II.ii.1 fn.)
This Mill refuses to accept, on the ground that Socrates is a proper name, and proper names do not have meanings in the way Bain assumes. As Mill puts it in his chapter ‘Of Names’, a name like John, which is borne by many persons, ‘is not conferred upon them to indicate any qualities, or anything which belongs to them in common’ (Mill 1872: I.ii.3). Here, clearly, what divides the two logicians is not the theory of inference but the theory of names.

Arguably, therefore, this idea—the language-dependence of rationality—is even more basic than the idea that rationality is a matter of relations between propositions. That is certainly how it has been seen by some philosophers, who tend to treat the key question as hinging on having or not having a language. For Aristotle that is not the question. The question, rather, is ‘What is the relationship between linguistically articulated propositions that makes the movement from belief in some to belief in others, in certain cases, rational?’ Aristotle qua logician is not interested in the slightest in communication between animals bereft of language. Even if there are some grounds for supposing that the cries of animals are ‘meaningful’ as expressions of pleasure or pain (Politics 1253a10ff.), the fact that such cries are not in any sense linguistic signs seems obvious to Aristotle from the start, because such cries are mostly not ‘articulate’ (History of Animals 535a29–536b24). And what that means, for Aristotle and his Greek contemporaries, is that they cannot be written down, i.e. decomposed into the stoikheia corresponding to letters. The tyranny of the alphabet extends even to providing criteria for judging the rationality of other living creatures.
8 Literacy and Numeracy

NUMBERS

A notable omission in Aristotle’s syllogistic—at least, if it is to be considered as offering a general theory of reasoning—is that it assigns no special place to mathematical reasoning. This is odd on a number of counts. One is that Plato set great store by arithmetic in the educational programme sketched in Republic. The discussion of this in Book VII leaves no doubt about its importance in training the mind of a future philosopher. However, at least since the time of Pythagoras the study of numbers and calculation had been regarded as belonging to a different intellectual domain from either poetry or the arts of disputation. The division of intellectual labour that eventually fossilised into the programmes of the trivium and quadrivium in the medieval universities of Europe treated grammar, logic and rhetoric as falling under the study of words, while arithmetic, geometry, astronomy and music fell under the study of numbers. But the origins of this dichotomy undoubtedly lie in the ancient world.

It is a dichotomy that survives in modern education. Mathematics is regarded as fundamentally important for those who will pursue careers in engineering, accounting, medicine, architecture, etc. It is treated as more or less irrelevant for careers in the law, publishing, the theatre, television or journalism. Although there are various areas of overlap (some lawyers are needed who can scrutinize a balance sheet; some journalists will be employed to write about advances in technology) that is not regarded as invalidating the deeper dichotomy between word-based and number-based intellectual pursuits.

However, long before Whitehead and Russell published Principia Mathematica it was obvious to every schoolteacher that mastery of the elementary operations of arithmetic involves grasping a certain kind of logic. Addition and subtraction are themselves forms of reasoning. It was no less obvious that in order to get very far with arithmetic, a child needed to learn to recognize and write the individual digits, and to be able to read the syntax of their sequential combinations. But even without going that far, it seems evident that, given definitions of the numbers 1, 2, 3 and 4, the processes
of adding 2 to 2 or 1 to 3, and arriving at 4 as the total in both cases, can be seen as paradigm cases of deductive inference.

Once this is grasped, it is only a matter of time before someone raises the question of whether writing systems do not all presuppose operations with numbers. According to Denise Schmandt-Besserat, ‘writing was the by-product of abstract counting.’ She leaves it open whether this applies only to the early writing systems of the Middle East, or to all human writing systems (Schmandt-Besserat 1992:199). The broader question, as she formulates it, is: ‘Is numeracy a prerequisite for literacy?’ If we are interested in elucidating the relationship between literacy and rationality, this is a question we need to pursue.

No one seems to doubt that our remote ancestors could count before they could write: the numerate mind preceded the literate mind. Even today in markets all over the world there are traders who are adept at bargaining over prices, but who cannot write their own name. Some of them come from preliterate communities which have quite complex numeral systems in the languages they speak. Many such systems are evidently related to finger-counting. In one African language, the expression for ‘99’ translates as ‘tens which bend one finger which have units which bend one finger’ (Zaslavsky 1973:38). According to 19th-century anthropological reports, among the native tribes of North America, the Dakota, Cherokee, Ojibway, Winnebago, Wyandot and Micmac could all count into the millions, the Choctaw and Apache to the hundred thousands, and many other tribes to 1000 or more (Closs 1986:13). If the members of a community can count up to a million and accordingly perform deductive arithmetical inferences without the assistance of a writing system, who can treat their lack of literacy as grounds for denying them rationality?

From minimal numeracy to advanced mathematics, however, is a long road. ‘Everything is number,’ Pythagoras is reputed to have maintained; and this, variously interpreted, has remained the central credo of the High Priesthood of Numeracy down to the present day. According to Eric Temple Bell, Pythagoras would have recognized at least some physicists and astrophysicists of the 20th century as his disciples:

Facing the past unafraid, they strode boldly back to the sixth century B.C. to join their master. Though the words with which they greeted him were more sophisticated than any that Pythagoras might have uttered, they were still in his ancient tongue. The meaning implicit in their refined symbolisms and intricate metaphors had not changed in twenty-five centuries: “Everything is number.” He understood what they were saying. (Bell 1946:3)

Historians of writing pay little if any attention to the development of mathematical notation, even though that is manifestly of no less importance in the story of civilization than the spread of the alphabet. This is an
unforgiveable omission, especially if it turns out that, in order to stand any chance of understanding the literate mind, we must first understand the numerate mind.

Depending on one’s theoretical perspective, there are two contrasting ways of looking at the relationship between figures and numbers. On one view, the meaning of the written Arabic numeral 3 is the same as the meaning of a corresponding number-word or words (e.g. in English the word three, in French trois, in Latin tres, etc.). According to a different view, the boot is on the other foot: the meaning of English three, French trois, Latin tres, etc. is that number denoted in the Arabic system by 3, which is independent of English, French, Latin, Arabic or any other language.

What also seems certain is that most—perhaps all—mathematicians known to history have been members of literate communities (even Pythagoras, although he left nothing in writing). But the kind of thinking involved in advanced mathematics is undoubtedly well beyond the abilities of literate people who have not been specially trained in that discipline. It takes more than knowing how to read and write to produce a proof of Fermat’s last theorem, or even to understand the proof once someone else has produced it (Aczel 1997). That disparity, however, does not in itself resolve the issue. There is evidence that quite remarkable mathematical skills are sometimes found in individuals whose ability to cope with reading and writing is average or limited, and these mathematical skills are often associated with the ability to perform extraordinary mnemonic feats, such as being able to recite the decimal resolution of π to more than 20,000 digits (Tammet 2006:187–200). But even more astonishing, if that were possible, is the following type of case, described by the clinical neurologist Oliver Sacks in his study of two remarkable twins, John and Michael.

A box of matches on their table fell, and discharged its contents on the floor: ‘111,’ they both cried simultaneously; and then, in a murmur, John said ‘37’. Michael repeated this, John said it a third time and stopped. I counted the matches—it took me some time—and there were 111. ‘How could you count the matches so quickly?’ I asked. ‘We didn’t count,’ they said. ‘We saw the 111.’ (Sacks 1986:189)

Sacks goes on to observe that another ‘number prodigy’, Zacharias Dase, could also say exactly how many peas had been poured out in front of him, immediately and without apparently ‘counting’ them. The twins evidently could give, for good measure, a spontaneous factoring of 111 into 37+37+37. What Sacks does not tell us is whether they also ‘saw’ the groups of 37 in the pile; and, for reasons to be discussed below, that could be information worth having.
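The arithmetic behind the murmured ‘37’ is simply the prime factorization of 111; a throwaway check (ours, purely illustrative):

```python
# 111 = 3 x 37, which is what the thrice-repeated '37' reports.
n = 111
prime_factors = [d for d in range(2, n + 1)
                 if n % d == 0 and all(d % p for p in range(2, d))]
print(prime_factors)        # [3, 37]
print(37 + 37 + 37 == n)    # True
```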
There are various points about the 111 matches worth pondering. The first is that no amount of mathematical training will enable you or me or Joe Soap to recognize immediately that there are just 111 objects in a group of similar objects suddenly presented to us. The highest total we could probably manage to ‘see’ would be in single figures. But does this mean that John and Michael had sharper mathematical minds than we have? What exactly is the connexion between numeracy as we normally deploy it in everyday life and the extraordinary abilities displayed by ‘number prodigies’? Do these abilities have very much to do with numeracy at all?

This may sound like a very perverse question, if we assume that only a numerate mind can cope with concepts like ‘111’. But it is not as perverse as it might appear. For if there are people who can just ‘see’ the numerical properties of whatever is presented to them (i.e. without recourse to measurement or calculation) then it seems fair to say that they are doing nothing more remarkable than we do when, for instance, we recognize a colour as green or a taste as sour. And certainly there is no temptation to regard the spontaneous recognition of colours or tastes as involving any kind of mental gymnastics at all. This does not mean that we refuse to recognize that Mary has keener taste buds than Martha, or to credit Jim with sharper colour vision than Fred. But these are not powers of discrimination seen as involving any mental effort comparable to addition and subtraction.

Now if seeing that there are exactly 111 matches in a pile without counting them is the manifestation of a high level of numeracy, it is interesting to ask what would be a corresponding manifestation of literacy. Perhaps being able, when presented with a random sequence of letters, to reel off immediately all the anagrams it contains. (‘How do you transpose the letters so quickly?’ ‘I don’t: I just see the words leaping out at me from the page.’) Or perhaps being able to list without hesitation all the sentences that can be constructed from some random sequence of printed words. (‘How can you rearrange them so fast?’ ‘I don’t: they just rearrange themselves all at once.’) Insofar as these are literacy/numeracy analogues, they clearly depend on regarding both literacy and numeracy as involving configurations of units (letters in one case and numerals in the other), and more or less complex operations with those units.

But are they, strictly speaking, exact analogues at all? No one supposes that ‘being spelt c-a-t’ is one of the properties of any feline animal, whereas many people do suppose that ‘being 111’ is a property of certain piles of matches (and other objects), irrespective of whether anyone counts them or not. In other words, the issue is about the relationship between numbers and the ‘real world’; and it is one that has divided philosophers for centuries. The basic choice today is between following mathematicians such as Frege, and treating numbers as Platonic or Pythagorean ‘entities’ of some kind, regarding mathematical truths as being somehow built in to the structure of the universe; and, on the other hand, regarding anyone who
conceptualizes numbers in that way as having fallen victim to a fallacy of reification. Frege’s case against his opponents, the ‘formalists’, has been summed up by Anthony Kenny as follows:

If we took seriously the contention that ‘1/2’ does not designate anything, then it is merely a splash of printer’s ink or a splurge of chalk, with various physical and chemical properties. How can it possibly have the property that if added to itself it yields 1? Shall we say that it is given this property by definition? A definition serves to connect a sense with a word: but this sign was supposed to be empty, and therefore to lack content. (Kenny 1995:100)

Thus the formalists are made to appear theorists who cannot see their own self-contradictions. Others think that Frege’s opponents were far nearer the mark than was Frege himself. If you take the antiformalist view you may tend to regard numeracy as intrinsically far more important than literacy, because highly numerate but minimally literate persons are in principle in a position to understand more about how and why the physical world works in the ways it does than their highly literate but minimally numerate counterparts. Both sides in this debate will doubtless agree that in practice it makes not a jot of difference to your bank manager’s mathematics whether he is a Fregean or not. But that is not what is at issue.

Someone might perhaps object to the dichotomy itself, pointing out that all known languages have counting words of some kind, however rudimentary; hence—it could be argued—the notion of a literate but totally innumerate person is an implausible fiction. This argument, however, lies open to the riposte that if your language has no more than a numerical vocabulary of the ‘one—two—three—many’ type, that is not going to be much use in the kinds of calculation your bank manager needs to do, much less in coming to terms with Boyle’s law or even Galileo’s experiments at the leaning tower of Pisa.

So would it be possible for anyone to have the concept ‘111’ without having mastered a language or system of some kind in which that numerical sign appeared? That is, a system in which it was possible to state—without circumlocution of any kind—‘There are 111 in the pile’, ‘Il y en a 111 dans le tas’, etc. That seems highly doubtful: ‘111’ is already a structured concept, i.e. not like the concepts ‘green’ or ‘sour’. Even if John and Michael can immediately recognize a group of 111 matches en bloc, there is no reason for them to identify that number as ‘111’ unless they are acquainted with and understand the internal organization of the particular system we call ‘Arabic numerals’. Otherwise we should have to suppose that although they call this number ‘111’ they do not really understand what that means (as if a child had learnt to apply, say, the word policeman to a man wearing a particular type of uniform, but without realizing what a policeman’s job
is or the kind of authority invested in it). But this possibility can be dismissed too, since John and Michael clearly know enough about the meanings of number-words to be able to say that ‘111’ is the name of the number that divides into 37+37+37. But that division could be an operation of a different order from the initial recognition of 111 matches. Did John and Michael ‘see’ those groups already in the pile of matches; or did they know beforehand that any ‘111’ divides thus? One suspects the latter.

There is another possibility to be considered. Suppose John and Michael were not able to identify the number as ‘111’, but could nevertheless, without ‘counting’, immediately assemble another pile of matches that also contained 111. What would we say about that? Many would say that they could have identified the ‘pattern’ underlying any group of 111 matches, without being numerate.

The inquiry must now be taken one stage further. Suppose John and Michael could ‘see’ how many matches there were in any given pile, but had never learnt any number-words, or other numerical signs: would they still be ‘numerate’? Let us grant too that they could spontaneously, but wordlessly, sort any 111 matches into three groups of 37, or other ‘significant’ groups (such as 100, 10 and 1). Would this be evidence of preliterate numeracy? Or just evidence of an ability to see certain types of pattern in groups of items?

Once the alternatives are posed in this way, it is tempting to compromise and say that numeracy is based on certain kinds of pattern-recognition, or even that numeracy just is a certain facility in those areas of recognition. On the other hand, it is no less tempting to say that numeracy, as distinguished from the underlying pattern-recognition per se, only comes into the picture when an individual can deliberately manipulate the patterns (whether mentally, verbally, ‘on paper’, or in some other way) so as to solve some specific numerical problem. But this patently requires the individual to have mastered the systematic use of signs of some kind. In other words, numeracy takes us straight away into the semiological domain. To put it in a nutshell, it seems that there is no way even of posing the question of how many matches there are in the pile without recourse to a specific set of numerical signs. Otherwise, whatever John and Michael may have ‘seen’ in the pile of matches, it could not have been ‘how many’ there were. What is interesting here is that their explicit denial ‘We didn’t count’ reveals that they already know what counting is, and that someone could arrive at the correct total by following a counting procedure (which they have no need to do).

COUNTING AND DENUMERABILITY

Counting does not seem to be possible without some prior classification of denumerable items. Furthermore, the denumerable items apparently have to
belong to homogeneous sets. This seems to have been one of the motives behind Aristotle’s set of categories. As one commentator puts it, ‘Socrates and his whiteness do not add up to two of anything’ (Annas 1976:197). Nowadays we might say that one Mozart symphony plus one terrorist attack do not make two of anything; whereas one Mozart symphony plus a second Mozart symphony do make two Mozart symphonies, and one terrorist attack plus a second terrorist attack do make two terrorist attacks. (For linguists, the point was made long ago by Jespersen, using different examples from these: Jespersen 1924:188–9.) Why is this? What does it show about our concept of numeracy? Why not take one Mozart symphony plus one terrorist attack and make ‘two’ from taking one of each sort? The answer seems to be that in order to get to ‘two’ we need to have two somethings; and there is no class of somethings that subsumes just Mozart symphonies and terrorist attacks. (It would be merely circular to retort that one Mozart symphony plus one terrorist attack add up to two ‘countable items’.) But suppose there were a linguistically identifiable class that included Mozart symphonies and terrorist attacks. Suppose we had a class name symphack that applied indifferently just to members of these two subclasses. Then it would make perfectly good sense to say that one Mozart symphony plus one terrorist attack make two symphacks. To object that in practice there is no conceivable use for such a word as symphack is to miss the point. At one time it might have been supposed that there would be no conceivable use for thousands of terms (from telephone to supermarket) nowadays to be found in our current vocabulary.

A similar observation that reveals something about our lay concept of numeracy is the following. Half a pound of sugar and half a pound of carrots do not make a pound of anything; but their combined weight makes a pound nevertheless. From this we see that whatever differences there may be between sugar and carrots do not stop both being weighed by a common measure. But suppose rheumatism could be cured by applying a poultice compounded of fifty per cent sugar and fifty per cent carrots. Then we should doubtless invent a special name for this combination, and even if we did not it would no longer be the case that half a pound of sugar and half a pound of carrots do not make a pound of anything. They would make a pound of rheumatism poultice.

Now let us consider the above examples in relation to the observation that there is no precise numerical answer to the question ‘How many crumbs are there in a loaf of bread?’ That is not merely because loaves of bread vary in size but also because crumbs are indeterminate in respect of physical itemization. The problem cannot be solved empirically because each sample loaf tested will yield a different number of crumbs. The best you can do if you are determined to play the numbers game with crumbs and loaves of bread is to rest content with the statement that a loaf of such-and-such size or such-and-such weight contains on average between so-many and so-many crumbs.
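The dependence of counting on prior classification can be put in programming terms (a sketch of ours, not Harris’s; the event tuples and class names are invented for illustration): a tally is defined only relative to a classifier that settles what counts as one of the relevant kind, and the covering class symphack is simply one more classifier.

```python
# Counting is relative to a classification: the same items tally
# differently depending on the class we count under.
events = [("symphony", "Jupiter"), ("attack", "Madrid"),
          ("symphony", "Prague")]

def count(items, is_one):
    """Tally items, given a prior decision about what counts as one."""
    return sum(1 for item in items if is_one(item))

print(count(events, lambda e: e[0] == "symphony"))              # 2
print(count(events, lambda e: e[0] == "attack"))                # 1
# Only under a covering class ('symphack') are there 3 of anything:
print(count(events, lambda e: e[0] in ("symphony", "attack")))  # 3
```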
The conclusion to which all this points may be summed up neatly as follows. The basis of measurement, whether linear, or two-dimensional, or cubic, is always equivalence. All counting operations presuppose it. It is even presupposed when counting is somehow bypassed, as in John and Michael ‘seeing’ 111 matches in a pile. For we have to take it that if John and Michael understand what they are saying, then they will agree that this pile is numerically equivalent to another pile of exactly 111 matches, but not to a pile of 110, or another of 112. (And if they do not thus agree, then we are left wondering what their claim to have ‘seen’ 111 matches can possibly mean.)

But ‘equivalence’ is not a given. It is not part of Nature, but a value that human beings impose upon Nature in particular cases. To declare this bluntly sounds like an avowal of anti-Pythagorean prejudices. But at least it is possible to give a plausible account of how this value is arrived at, which compares favourably with the Pythagorean dogma that equivalence just ‘exists’ (i.e. as a universal relation obtaining between certain numbers, each of which also just ‘exists’).

My account runs as follows. Equivalence is a value that varies according to circumstances, and arises from the way we integrate certain of our activities with other activities, including those of other human beings. The relevant activities are those of exchange. A is willing to exchange something A has for something B has, and B agrees. That is the basic form of integration involved. It happens every time someone goes into a shop and buys something. Without it, the entire edifice of the modern commercial world would collapse. But there are more basic forms of integration than those of the High Street (which depend on there being a system of currency in operation). Farmer Smith can agree to help farmer Jones harvest his wheat if, in exchange, farmer Jones will help farmer Smith with the autumn ploughing. And so on. No money need change hands. All forms of human labour, as Marx saw, are subject to integrational equivalences; and you may get a good deal or a bad deal, depending on a whole variety of factors. In many cases, whether the deal is good or bad, you have little choice. Society, and the survival of the individual in society, depend on the controlled integration of its members’ activities, and society will go to considerable lengths to ensure that these integrations are enforced.

Although, for purposes of trade, ad hoc barter often works well enough for establishing integrational equivalences, it is much facilitated in the long term by establishing codified systems which are then imposed on everyone. This is the essential function of currency (whether it be gold bars, or dollar bills, or sea shells, or consignments of tobacco). But the establishment and implementation of such systems give rise to many misconceptions. Paramount among them is the misconception that equivalence value resides in the tokens of equivalence themselves. Hence the obsession with accumulating money, in whatever form that may take. The legend of King Midas bears witness to the recognition of this misguided obsession very early in the Western tradition. The story of Midas tells us that it is no use being
‘rich’—i.e. sitting on piles of gold—if, as a result, you cannot eat or drink. This impasse is manifestly a failure to integrate one activity (accumulation of gold) with other activities that are more important for survival (eating and drinking).

But there is a more subtle misconception about currency which is more relevant to the present discussion. It is the misconception that a currency system in itself guarantees equivalence values. So whatever your money will buy in the market, you can nevertheless rest assured that your pound is worth one hundred pence exactly, neither more nor less. It will indeed be, so long as that particular codification remains in place. But what is a Roman denarius worth nowadays? If you are hoping to exchange it for ten Roman asses, then you will probably have a hard time arranging any such transaction. There is no practical possibility of integrating these activities, i.e. giving Fred your denarius in return for his giving you ten asses.

ABSTRACT COUNTING

Schmandt-Besserat’s account of the difference between ‘concrete’ and ‘abstract’ counting runs as follows:

Concrete counting means that, in some cultures, the number words to render “one,” “two,” “three,” etc. were tied to concrete objects, resulting in sets of number words, or numerations, differing according to whether, for instance, men, canoes, or coconuts were being counted. (Schmandt-Besserat 1992:185)

Schmandt-Besserat seems to think that if the language you speak uses only systems of concrete counting, then you do not really understand that a group of three canoes contains the same number of counted items as a group of three coconuts. Why anyone should fail to realize this she does not explain, but quotes Bertrand Russell to the effect that ‘It […] required many ages to discover that a brace of pheasants and a couple of days were both instances of the number 2’. What Russell actually says on the page quoted (Russell 1919:3) is not that it did require many ages but that ‘it must have required many ages …’, which is a somewhat different claim. Indeed, it ‘must have’ if Russell’s theory of numbers is right; but Schmandt-Besserat does not go into that. Russell also says on the same page that ‘the discovery that 1 is a number must have been difficult’. But the difficulty resides in Russell’s number theory, not in the counting procedures devised by our remote ancestors, which were doubtless perfectly adequate for their purposes. Russell’s theory is that ‘a number is anything which is the number of some class’ and that ‘the number of a class is the class of all those classes that are similar to it’ (Russell 1919:18–19). It is relevant to remark that there are many
people today who have not yet ‘discovered’ that ‘1 is a number’ in Russell’s sense; and furthermore that many of those do not believe it to be the case that a number is ‘anything which is the number of some class’. If something is not the case, then it can hardly be ‘discovered’ by anybody, however mathematically gifted. The question of whether or not literacy in ancient Sumer was a ‘by-product of abstract counting’ cannot be settled, or even clarified, by appeal to the more controversial areas of modern philosophy of mathematics.

It is also worth noting in passing that Russell had evidently not considered the possibility that seemed obvious to Jespersen; namely, that deciding what counts as ‘one’ of a kind in any particular instance is not mathematically given from the beginning and for all time, but is at least in part a linguistic matter. A pair is two of something. But if there are three pairs of trousers hanging on the line, it is not the case that here we have six trousers in all. Comparable examples are to be found in many languages, and what counts grammatically as a ‘countable’ item in the first place varies considerably. It is naive to suppose that counting ‘really’ relates to some universal set of class relations which are only imperfectly mapped on to existing languages.

Both Russell and Schmandt-Besserat evidently believe that recognizing a brace of pheasants and a couple of days as groups having just two members each required of our ancestors a monumental and unprecedented feat of ‘abstraction’. But neither explains what process of ‘abstraction’ was involved, nor why anyone should have bothered to make it. Comparing pheasants with days is not an enterprise likely to have played an important role in the life of the average cave man. But once days are conceptualized as separate, successive, non-identical periods (and not the same day coming round again for another time), it would take a very dense cave man not to work out that if you eat one of your brace of pheasants today, you still have the other bird left for tomorrow. A pheasant a day keeps philosophers of mathematics at bay.

Schmandt-Besserat makes no mention of the fact that, later in his career, Russell had come round to a somewhat different view. In a remarkable paper of 1950, entitled ‘Is mathematics purely linguistic?’, Russell concedes that ‘the propositions of logic and mathematics are purely linguistic’. This represents a capitulation to Wittgenstein; or, more exactly, as Keith Green points out, since Russell still held that logic and mathematics are identical, it followed that ‘if logic was linguistic (as he had come to accept) then so must mathematics be’ (Green 2007:141). But there is still no reference to the fact that what is required, for anything but the most elementary forms of arithmetic, is not just linguistic but literate forms of expression, i.e. an inventory of appropriate written signs. For without them numeracy cannot get very far—certainly not as far as Russell wanted to go.

Russell, for his part, had believed that mathematics is ‘the chief source of our belief in eternal and exact truth’. For ‘mathematical objects, such
as numbers, if real at all, are eternal and not in time’ (Russell 1946:55). Nowhere does he consider that this metaphysics of numbers might be the prerogative and product of a literate society. But by 1950 he was willing to pronounce—at least provisionally—what he called ‘an epitaph on Pythagoras’. It was worded as follows: ‘All the propositions of mathematics and logic are assertions as to the correct use of a certain small number of words’ (Russell 1950:362).

Throughout the whole of Russell’s History of Western Philosophy, writing is mentioned only twice, and on neither occasion is it represented as having any possible influence on human conceptions of the world. Not even when discussing the work of modern mathematicians does Russell accord any importance to the ways in which written notations may facilitate or impede the development of mathematical thinking. It is hard to maintain this indifference in the light of more recent lessons of modern neuroscience, which suggest that the brain of the little Johnny who has copied out his multiplication tables and can use them to solve the harder problems in his school arithmetic book is no longer the same as the brain of the little Johnny of a couple of years ago, when he could not do any of these things. Nor does it seem too bold to say that nowadays, as a result, little Johnny’s mind possesses much improved mental resources for thinking about quantitative aspects of the world around him.

An odd thing about Schmandt-Besserat’s claim is that, in the end, the notion of ‘abstract counting’ is an irrelevance. It would make little or no difference to her theory if the Sumerians had always used separate numerical terms for different groups of items. For the ‘evidence’ Schmandt-Besserat adduces boils down to pointing out that many of the incised or impressed numerical signs appearing on early containers turn up again as characters in Sumerian scripts. We are expected to infer from this that the invention of writing occurred when someone had the brilliant idea of extending the marking of quantities on containers to the marking of other information in a similar way and ultimately to the marking of any appropriate words on any appropriate surfaces.

The trouble with this scenario is the implausibility of an initial stage in which only quantitative information was thought worth setting down explicitly in any visible form. Information relating to ownership of objects, family relations, rights and prohibitions of all kinds, would seem to have no less pressing a claim to be recorded in the early stages of urban development in the Middle East. The idea that this occurred to no one in that region before numerals had appeared on the scene to provide the general model for writing is unconvincing.

The problem in giving an affirmative answer to Schmandt-Besserat’s general question as to whether numeracy is a prerequisite for literacy is this. If it were, then it should be obvious what it is that the literate mind can do that requires prior experience of numeracy, why it is that children have to be first taught to count before they can understand how
to use an alphabet, and many similar questions relating to the relevant mental operations in both spheres. But none of this is obvious; or, if it is, Schmandt-Besserat does not bother to explain the priorities or even to set up a testable hypothesis.

WRITING AND MEMORY

One of the assertions commonly made about writing is that its ‘mnemonic function’ played a key role in the development of civilization. Writing is seen as ‘a device extending the human memory’ (Coulmas 1989:11). But it is no use having a written record if no one can remember how to decipher the written signs. Had hieroglyphs remained in use over the centuries in some isolated religious community in Egypt, there would have been no need for the monumental labours of a Champollion. Reading what could subsequently be read as a result of Champollion’s decipherment did not extend the memory of modern scholars. But it did enable them to integrate what they previously knew about ancient Egypt with new evidence from the original source.

As Plato pointed out, reliance on writing seems more likely to weaken the memory than to improve it. What is at issue is not whether writing something down helps to ‘fix’ it in the memory of the individual, but the extent to which, with the availability of writing, the exercise of memory becomes increasingly redundant. Plato’s argument is taken up by Gelb in A Study of Writing:

All we have to do is compare what we know about our own ancestors beyond our grandparents with what an illiterate Bedouin knows about his, in order to observe the great difference. The average Bedouin has no recourse to written documents to find out about his family or his tribe; he has to keep in his memory knowledge of past happenings […]. (Gelb 1963:222)

Coulmas makes the point that preliterate peoples, however good their memories might be, cannot rival ‘a mnemonic device such as, say, the catalogue of a university library’. But one must beware here of confusing two quite separate things: memorization and record-keeping. (The confusion is common in discussions of writing. It is like treating driving a car as an extension of walking. The two activities are entirely different, even though both may serve the same purpose, such as going from one place to another.) Keeping written records, far from extending your memory, dispenses with the necessity of exercising it.

Does reciting π to 20,000 decimal places require the availability of a written list of figures as the basis for memorization? How otherwise could anyone committing such a long sequence of numerals to memory check
whether they had got the sequence right or not? It will not do to reply that the sequence could be learnt orally from someone else. This merely defers the problem, since it now has to be explained how the other person came to be able to recite the sequence without ever seeing it written down either. If this argument is sound, it would seem that at some point there is a limit to the length of numerical sequence that can be memorized without the assistance of writing, however good your memory may be.

Ong has generalized this type of argument to apply to any sufficiently complex series of mental activities. He writes:

In the total absence of any writing, there is nothing outside the thinker, no text, to enable him or her to produce the same line of thought again or even to verify whether he or she has done so or not. (Ong 1982:34)

But this can hardly apply to fairly simple cases of remembering. If you are not quite sure whether you have correctly remembered that there are eight pairs of shoes in the cupboard, all you have to do is look in the cupboard and count them again. There is, to be sure, something ‘outside the thinker’ that makes this possible—namely, the shoes—but it is not a text. It would be very odd to believe that this ‘verification by inspection’ was just not good enough, and that it needed to be confirmed by writing down the number on a piece of paper before going back to look in the cupboard. If you found that in practice, without the piece of paper, you had always forgotten the number by the time you got to the cupboard to check, there would indeed be something seriously wrong with your memory. (In any case, writing the number down will be of no avail if in the meantime someone else has removed a pair of shoes without your knowing.)

Nevertheless, as in the case of memorizing numerical sequences, it seems that somewhere a point is reached beyond which recourse to writing is necessary. Doubtless in some neuropsychologist’s laboratory somewhere in the world, experiments have already been done to establish where this point is located. Exactly where does not greatly matter for purposes of my argument. But wherever this point comes, it provides an empirical terminus a quo in the search for (part of) the answer to the question of what the literate mind can do which is in principle beyond the scope of any numerate-but-non-literate mind.

FOUNDATIONS OF MATHEMATICS

Exploring further some of the issues raised above would involve drilling into what are sometimes called ‘the foundations of mathematics’. One does not have to drill far to realize that some of these foundations do not go very deep down. In some areas one might even say that the foundations are plainly visible above ground, so no drilling is needed.
It may seem a pretentiously trivial observation to point out that all those intellectuals engaged in the drilling are highly literate members of literate communities. But perhaps Wittgenstein, for one, might not have thought the point too trivial to bother with, since he says in his Philosophical Remarks:

only in our verbal language (which in this case leads to a misunderstanding of logical form) are there in mathematics ‘as yet unsolved problems’ or the problem of the finite ‘solubility of every mathematical problem’. (Wittgenstein 1975: §159)

Two comments on Wittgenstein’s observation seem to be in order. The first is that it is not just ‘in our verbal language’ that these illusory mathematical problems arise: it is only in one dialect or register of that language; namely, the written register employed by mathematicians. Wittgenstein’s dictum that ‘arithmetic is the grammar of numbers’ (Wittgenstein 1975: §108) cannot pass muster as it stands. Arithmetic, at least as it is taught in higher education today, is the grammar of numerical notations.

A simple example will illustrate the point. Can anyone seriously suppose that the concept of ‘minus numbers’ (as appealed to in notions like that of ‘the square root of minus two’) would ever have seen the light of day except in a literate society which supposes that ‘to the left’ of zero there extends a numeral series (-1, -2, -3, -4 …) in reverse order, comprising units that are the mirror images of those standing ‘to the right’ of zero? Or that the ‘existence’ of these negative units is not simply the result of prefixing a minus sign to the written signs 1, 2, 3, 4, etc.? By no stretch of the imagination would any linguist suppose that these are concepts arising naturally from ‘ordinary language’, and certainly not from the spoken language of any preliterate community.

The basic notion of ‘lining up’ the numerals from left to right in this way takes us back to the original sense of the Greek stoikheia, which was—at least, according to Dionysius Thrax (Lallot 1989:43, 98)—that of items ‘standing in a row’. In other words, here we have yet another product of the literate mind. Even if Dionysius had got the etymology wrong, his explanation bears striking witness to the hold of literate thinking over the foundations of grammar. The connexion between this and Saussure’s second principle of general linguistics—the linearity of the linguistic sign (Saussure 1922:103)—needs no elaboration.

The quarrel in philosophy of mathematics between Wittgenstein, Russell and various members of the Vienna Circle during the 1930s (Shanker 1987) was ultimately about the priorities and functions to be assigned to different kinds of writing system, although none of the parties involved saw it in that light. They had all, as Ong might well have put it, ‘interiorized’ the writing systems in question so deeply that their minds were deceived into believing that the debate was about such profound matters as the nature of truth and the structure of the universe. Of the participants in the 1930s debate, Wittgenstein alone occasionally comes close to recognizing that in the absence
of a special form of mathematical notation the debate would collapse. For instance, in Philosophical Grammar he observes at one point that if we keep strictly to mathematical signs and thus avoid ‘the equivocal expressions of word-language’ then mathematical relationships become clearer. For example,

if I put B right beside A, without interposing any expression of word-language like “for all cardinal numbers, etc.” then the misleading appearance of a proof of A by B cannot arise. We then see quite soberly how far the relationships between B and A and a + b = b + a extend and where they stop. (Wittgenstein 1974:422)

This is another version of the ‘Aristotelian’ fallacy that the substitution of alphabetic letters renders logical relations perspicuous. Only to someone whose thinking is as thoroughly imbued with unconscious scriptist assumptions as Wittgenstein’s would it seem obvious that one can ‘see’ from particular written forms how particular relationships stand. (It is no better than saying that you can ‘see’ from the word order of Fish swim that fish is the subject.) Even here, however, Wittgenstein seems reluctant to concede that it is only by comparison with a certain kind of notation that there is any question of the numerical expressions of what he calls our ‘word-language’ being regarded as mathematically ‘equivocal’ in the first place. We are reminded again of Aristotle’s unresolved problem with homonymy. But the bone of contention in this case would have eluded even Aristotle, who would doubtless have struggled to make sense of it as a re-run of the differences between Plato and the Pythagoreans (Metaphysics 985b23–988a16). Aristotle would have had the advantage of being—by the standards of Wittgenstein’s day—at least moderately literate and moderately numerate. But in a preliterate society, where it is not possible to distinguish between ‘two’ written two and ‘two’ written 2, such a debate would be inconceivable. Its terms could not even be articulated.

The second—related—comment is that, unfortunately, Wittgenstein construes the problem with ‘our verbal language’ as being that it obscures in such cases the correct ‘logical form’. But is this ‘logical form’ itself anything other than one more notational illusion? (The notation in this case being provided by the alphabetical ‘grammar’ of variables and truth-tables, i.e. those modern extensions of literate Aristotle’s revolutionary use of letters in his syllogistic.) Wittgenstein’s celebrated distinction between ‘saying’ and ‘showing’ (Glock 1996:330–6), which Wittgenstein himself declared to be the main point of his Tractatus, is a scriptist extrapolation from start to finish. The very notion that, by courtesy of a suitable notation, the structure of a meaningful proposition can be shown (i.e. overtly displayed or exhibited), even though it cannot be said (i.e. stated, put into words), would never have occurred—let alone seemed plausible—to any mind not thoroughly committed to the literate doctrine of alphabetic variables.
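For readers who want the reference to ‘letters in his syllogistic’ made concrete, the standard textbook schema of the first-figure syllogism (‘Barbara’), together with its usual modern quantifier rendering, runs as follows; the notation is the conventional modern one, not Aristotle’s own:

    \[ \text{All } B \text{ are } A;\quad \text{all } C \text{ are } B;\quad \text{therefore all } C \text{ are } A. \]

    \[ \forall x\,(Bx \rightarrow Ax),\ \forall x\,(Cx \rightarrow Bx)\ \vdash\ \forall x\,(Cx \rightarrow Ax) \]

In both notations it is the recurrence of the written letters, and nothing else, that is supposed to display the ‘form’; which is precisely the scriptist assumption at issue above.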
9 Interlude
Constructing a Language-Game

OPERATIONAL DISCRIMINATIONS

It is difficult to see how either the literate mind or the numerate mind would be in a position to function at all without having a grasp of certain basic distinctions that the operations of both rely on. Since so much controversy has surrounded the word ‘concept’, it may be as well to avoid it and speak here of ‘operational discriminations’ or ODs. How I propose to use this term may best be explained by means of an example borrowed from Wittgenstein.

In Philosophical Investigations, Wittgenstein describes a ‘primitive’ language of just four words, used for communication between a builder and his assistant:

The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones (Bausteinen): there are blocks, pillars, slabs and beams. B has to pass the stones, and that in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar”, “slab” and “beam”. A calls them out;—B brings the stone he has learnt to bring at such-and-such a call. (Wittgenstein 2001: §2)

Wittgenstein adds the rider: ‘Conceive this as a complete primitive language (vollständige primitive Sprache).’ It is important for purposes of the present discussion to understand how we are to take this. A few paragraphs later Wittgenstein proposes that we could imagine this language as ‘the whole language of A and B; even the whole language of a tribe’.

At this point Wittgenstein’s reader may well protest that it seems highly implausible that any community would invent words just for the purpose of building, but for no other activity, and it strains credulity to imagine that a whole tribe would consist exclusively of builders and their assistants. Are there no cooks, farmers, carpenters, weavers or warriors? And if so, what use would the builder’s four words be to them? It would also be a curious linguistic community if only one class of citizens (i.e. the builders) ever spoke. In the light of these obvious objections, it seems necessary to ask
what exactly Wittgenstein is driving at by describing the builder’s language as a ‘complete’ language.

In Philosophical Investigations Wittgenstein’s aim in constructing imaginary ‘primitive’ languages and language-games is to turn an analytic searchlight on features of more complex languages of the kind already familiar to his readers (German, English, etc.). But there is a serious risk that the strategy will backfire. There is inevitably a temptation to read back into these allegedly ‘primitive’ systems certain interpretations derived from our acquaintance with less primitive systems, even when they are not supported by the semiological structure of Wittgenstein’s invented examples. In this case, for instance, Wittgenstein chooses, for the vocabulary of his primitive language, forms identical with those of four ordinary German words. This already predisposes the reader to treat these four words as nouns and names of classes. This is potentially misleading, inasmuch as the language Wittgenstein describes as a ‘complete’ language has no room for a metalinguistic distinction between nouns and verbs, or between these and any other part of speech.

We could tighten up Wittgenstein’s vague notion of ‘completeness’ by stipulating (i) that the items the builder calls for, and his assistant brings, are items having no other function than as materials required in the building operation, and (ii) that A and B have no other language available in which to describe or refer to them. Some such stipulations seem to be necessary if we are going to set aside—to the extent that we can as ‘outsiders’—any preconceptions about what these four words mean for A and B.

We shall also have to set aside Wittgenstein’s explanation (Wittgenstein 2001: §6) that this language-game has been learnt by a systematic programme of ‘ostensive teaching’, involving a teacher who utters the words and points to the relevant objects. For this presupposes that there was a prior language-game (the teaching game) on which the language of A and B was based, and that at least one other person (the teacher) could play. And then there is the question of how the teaching game itself was learnt. We seem straight away to be led into a regress of language-games incompatible with the notion that any of them is ‘complete’ in itself.

So let us start from scratch and postulate that we are dealing with a language-game already in operation (we don’t know anything about its antecedents) and complete in the sense stipulated above. (Let us for convenience give it a name: Constructionese.) Our concern henceforth is with the internal semantics and semiological structure of Constructionese as seen from the viewpoint of the builder and his assistant.

If we now propose to ask what mental apparatus is required for A and B to communicate successfully in Constructionese, we, as literate and numerate investigators, are looking ‘from the outside’ at a communicational world which is quite different from our own. We shall find ourselves constantly in trouble when trying to describe a situation in which A and B, ex hypothesi,
just do not have the resources that we habitually rely on. (Oddly, Wittgenstein tacitly credits them with a grasp of the type/token relationship. This is presumably one of the things carried over from the previous teaching game. But we can dispense with it for our present purposes. It simply obscures the relevant issues.)

It seems clear that A and B, as thinking creatures, do need quite a number of ODs of some kind, and that these are indispensable to the successful execution of the building programme. But does this include being able to count (as we would call it)? Clearly not, since Constructionese—their only language—has no counting words. Nevertheless, even if they have no numerical concepts, the builder and his assistant need a grasp of what might be called ‘proto-numerical’ ODs, as explained below.

A and B will need to grasp that each of them has a role that is complementary to the other’s, but separate from it. They have to understand that—as we might put it from an outsider’s perspective—what they are engaged in is ‘a two-person job’. But we cannot on that account attribute to them any notion of duality—which is an explicitly numerical concept. However, they do need to grasp a discrimination of some kind which corresponds to their perception of the individuality of their different roles as agents, of the fact that the operation divides into two parts accordingly (although again we must not allow that latter description because it lets in the banned numerical concept ‘two’). What we are groping to describe here is a proto-numerical OD (‘proto-two’, if you like) implicit in A and B’s recognition of the bi-partition of roles and the non-identity, non-interchangeability, of the activities which each must perform.

A and B will also each need four classificatory ODs, corresponding to the four different kinds of building material they are called upon to handle. They must be able to distinguish blocks-from-pillars-from-slabs-from-beams, a quadruple division. But again we must not say that they need the concept ‘four’. Nor, it should be noted, do they both need to have ‘the same concepts’ of the different types of object. How they draw the mental-cum-perceptual discriminations between types of object does not matter. What matters is that in practice B always brings the kind of object that A called for, regardless of whether they are using criteria of size, shape, weight, colour, or any other differentiae.

They will also need four classificatory ODs corresponding to the word-forms in their language. Here the same proviso applies. The way these word-forms are differentiated does not have to be ‘the same’. B needs only auditory criteria, since he never speaks. A needs both auditory and articulatory criteria, since he has to utter the words. All that matters for communicational purposes is that neither of them ever confuses, say, the call ‘Block!’ with the call ‘Beam!’, or the call ‘Pillar!’ with the call ‘Slab!’.

So far all this seems fairly straightforward. Let us now examine their ODs in greater detail. When dealing with slabs, for instance, they seem to need to differentiate between ‘one-slab’ and ‘more-than-one-slab’. This
is demanded by the requirements of the building operation. (Wittgenstein stipulates that B must fetch the individual items in the order in which A needs them. So it will not do for B to fetch two slabs when A calls ‘Slab!’, since at that point in the proceedings A does not need another slab.) But likewise B must not return empty-handed: so he needs to grasp the OD between ‘at-least-one-slab’ and ‘no-slab’.

Bearing in mind Schmandt-Besserat’s distinction between ‘concrete counting’ and ‘abstract counting’, we must be careful not to attribute to A and B discriminations of a higher order of abstraction than are strictly needed in Constructionese. For example, it would already be an over-generous interpretation to say that A and B distinguish in general between ‘one’ and ‘more than one’: all we can say if we take a parsimonious view is that they must distinguish ‘at-least-one-block’ from ‘more-than-one-block’, ‘at-least-one-pillar’ from ‘more-than-one-pillar’, and so on. For they may be using different operational criteria for each class of item.

An explanatory note is in order here on the use of hyphens in the formulations given above (‘at-least-one-pillar’, ‘more-than-one-pillar’, etc.). The purpose of these hyphens is to remind us that as soon as they are removed full-blown numerical concepts sneak in (‘one’, ‘more than one’). Ex hypothesi, speakers of Constructionese have no such concepts. For them Constructionese is a complete language and their only language: their grasp of ODs is in every case bound up with the particular operations in question. So ‘more-than-one-pillar’ is not on a par with ‘more-than-one-block’. The difference might be pragmatically realized in a variety of ways, e.g. B finds that whereas he can carry several blocks if need be, he cannot manage more than one pillar at a time. The point is not trivial, since we are focussing here on what is needed; and this makes a difference to the ODs themselves. That is to say, part of the understanding necessary for dealing with blocks will have to include discriminating between ‘at-least-one-block’ and ‘more-than-one-block’, which may in turn involve different criteria from those relevant to pillars.

Likewise it is going too far to say that either A or B has the concept ‘one’, which would indeed be a numerical concept. For the concept ‘one’ as we understand it—from the perspective of those accustomed to a far richer language than Constructionese—is part of an extended system of numeration (which includes contrasting it with ‘two’, ‘three’, etc.). All of this is beyond the reach of the resources of Constructionese.

But the discriminations A and B need are also tied in with another aspect of the whole building programme. We have not described the situation adequately by indicating what is needed to underpin the quadruple classification of building materials on which the whole collaboration between A and B is based, or the quadruple classification of calls. That is only part of the story. For B has to be able to put A’s calls into appropriate temporal correlation with the fetching and carrying that he is being called upon to perform. If he could not do that—for whatever reason—the system would
break down. That temporal correlation has nothing to do (from our ‘external’ perspective) with being able to recognize the differences between the various building materials. When A calls ‘Block!’ he is not only saying—in our terms—that he wants an item of a certain kind, but that he wants it brought now in the sequence of operations. It is a call for immediate action on B’s part. B ‘replies’ by going to fetch a block. This ‘you-then-me’ aspect of the communicational process requires ODs which set up a segmentation of the temporal continuum into potentially denumerable parts. The temporal segment that is identified as ‘now’ at any given point needs to be distinguished from immediately preceding and immediately following segments. So, from an ‘outside’ point of view, there must be at least three such segments (the current one, the preceding one and the following one). They ‘would be’ countable if A and B could keep count; but speakers of Constructionese have no resources for counting. So here proto-countability resolves itself into a sequence of ODs involving correlating calls from A and corresponding fetching and carrying by B. It is the succession of these A-B correspondences one after another that structures the concatenation of the communication process. A and B have to grasp that OD structure for their collaborative work to proceed at all. B, for instance, does not ‘save up’ a sequence of calls from A and then fetch those items all in one journey.

All that has been said so far might be summed up ‘from the outside’ by saying that this primitive communication system is based on a combination of just two semiological archetypes. One is the sign functioning ‘statically’ as a classifier. The other is the sign functioning ‘dynamically’ as the initiator of another stage in the building operation. The words in the system have to fulfil both semiological functions simultaneously. That is, every time the builder utters a word, that utterance has to function as an instruction to his assistant to do something: but what the assistant is being required to do depends on which of the words is uttered. The dynamic function anchors the ODs to the here-and-now, alerting the assistant to the need for immediate action. It allocates the utterance (e.g. ‘Slab!’) to a place in a temporal sequence, in which the next place has to be occupied by B going off to fetch a slab.

How are these two functions related? Unless we understand this, we shall never make sense, from the inside, of the system of communication that A and B are using. The answer is that ‘from the inside’ those two functions are indistinguishable. What accomplishes one automatically accomplishes the other. There is no way of separating out the dynamic semiological function from the classifying function. Here at last we can put our finger on what makes Constructionese a semiologically ‘primitive’ system.

In all this we found ourselves struggling to describe a ‘primitive’ communication system in its own terms because our viewpoint is that of thinkers habituated to more ‘advanced’ thinking-systems (including systems that are based on familiarity with languages as complex as English, German, etc.). One lesson we might have learnt is that ‘our’ thinking-systems utilize
ODs in many different ways, irrespective of what inventory of counting-words they may happen to have. What the traditional grammarian recognizes as distinctions of ‘number’ (e.g. ‘singular’, ‘dual’, ‘plural’) is only one of these. Any language that distinguishes syntactically and semantically between ‘John loves Mary’ and ‘Mary loves John’ is exploiting OD relations in another way. Again, such relations underlie the systematic differences between proper names and common nouns, and the nomenclature of days of the week (Sunday, Monday, Tuesday, Wednesday, etc.). Wherever we analyse lexical and grammatical systems, we find at the bottom an OD structure which has to be grasped if the linguistic contrasts in question are to be effective.

Is this the ultimate nexus between words and numbers, between language and arithmetic? If so, the answer to Schmandt-Besserat’s general question ‘Is numeracy a prerequisite for literacy?’ has to be: ‘No. But operational discriminations are, because they are involved in even the most primitive forms of linguistic communication.’ If this argument is sound, then we have at last identified the infrastructure on the basis of which both the literate mind and the numerate mind operate.

OPERATIONAL DISCRIMINATIONS AND REASONING

The whole point of developing the Wittgensteinian language-game in this way is that here we have an imaginary situation in which, by Aristotelian standards, the builder and his assistant are neither literate nor numerate. A fortiori, they are incapable of reasoning. They cannot articulate the proposition that this ‘follows from’ that; they have no words for ‘not’, ‘because’, ‘therefore’, etc. Nevertheless, they communicate successfully. Furthermore, their system provides a form of communication radically different from that implied in Aristotle’s language myth.

There is no room here for supposing that when the builder calls ‘Block!’ the assistant thinks to himself ‘Ah! That means he needs a block.’ Even less ‘Ah! That means that if I don’t go and get one I shall be breaking the rules.’ Ex hypothesi, the assistant cannot think such thoughts, for their articulation in that analytic form presupposes more linguistic resources than Constructionese possesses. The assistant just thinks ‘block’ (where thinking ‘block’ means both recognizing the call in question and initiating the action required to respond).

Is then block in Constructionese a kind of homonym? Is it the name of a certain class of building materials plus an instruction to fetch one, both having the same form? No, since a separate identification of those two words is again beyond the resources of Constructionese and the proficiency its use requires.

The proficiency A and B have is an integrational proficiency, an ability manifested pragmatically by integrating one’s actions systematically with those of another person. The words of Constructionese are integrational
signs, not Aristotelian sumbola. The latter are deemed to fulfil their semiological function whether or not the hearer takes appropriate action in accordance with the speaker’s utterance. The sumbola have already done their job when the hearer has heard and understood what was said. Not so in the case of A and B: they are not using Aristotelian sumbola, but signs of a different kind. In their world, there is no room for ‘understanding a sign’ as an independent psychological state or event—not even as a fleeting ‘Eureka!’ experience that intervenes between B’s hearing the word and taking action.

Given all these caveats, we nevertheless recognize that what A and B are engaged in is a rational activity and that A and B are acting as rational agents. But it is a quite different level of rationality from Aristotle’s. It does not depend on the agents being able to give reasons for what they do. It is a rationality which consists in the ability to partake meaningfully in a joint programme of co-ordinated activity. Aristotelian rationality tacitly presupposes that ability, but fails to acknowledge it as rationality until it can be translated into a language fully equipped with ands, ifs and therefores.

RATIONALITY AND ‘RULES’

Wittgenstein would not have wished to develop the parable of the builder in the way that has been proposed here, i.e. into an account of rationality that stands in contrast to Aristotle’s. For Wittgenstein it is important to retain, come what may, an appeal to ‘rules’. Without it, he cannot muster a coherent account of what a language is. It is the appeal to ‘rules’ that underpins his (notoriously idiosyncratic) conception of grammar.

According to one Wittgenstein scholar, Wittgenstein’s notion of the ‘autonomy’ of grammar has two striking features. First, it implicitly rejects the whole notion of ‘a system or calculus of rules’. Instead, ‘it might be called a motley of rules’ because the rules in question ‘are not uniform in form or application’ (Baker 1986:301). Second,

in virtue of this autonomy explanations of meaning cannot be justified (and hence cannot be faulted). They are free-floating creations like the planets. Nothing holds them in place. There is nothing behind the rules of grammar, there is, as it were, no logical machinery. (Baker 1986:301)

If this is right, for Wittgenstein ‘rules of grammar’ mark the nec plus ultra of linguistic explanation. For Wittgenstein, logic does not ‘explain’ grammar (as many thinkers in the Western tradition had supposed): grammar just is. Logic itself (e.g. as articulated by Aristotle) presupposes grammar. By the time he wrote Philosophical Investigations, Wittgenstein seems to have abandoned his earlier belief in ‘logical form’ (Glock 1996:212–6). But
the ghost of logical form survives in his distinction between ‘depth grammar’ and ‘surface grammar’ (Wittgenstein 2001: §664), a distinction which seems to anticipate that between ‘deep structure’ and ‘surface structure’ popularized by generative grammarians after Wittgenstein’s death.

The earliest Western grammar, that of Dionysius Thrax, lays down no rules at all. Wittgenstein might perhaps have replied that, nevertheless, what Dionysius presents as ‘grammar’ implies rules. The trouble with that reply is that if we go looking for the ‘rules’ that Dionysius might be supposed to be implying or appealing to, we end up not with just one set of rules for ancient Greek, but dozens or hundreds of possible sets of rules, depending on how we interpret the text and analogize from the examples given. This will not do. If ‘grammar’ is to do the job that the later Wittgenstein evidently wants it to do (i.e. to act as the independent bedrock for rational discourse), we have to start at a later point in the Western tradition, when grammar has come to be regarded as intrinsically normative and intrinsically determinate. And that is a literate interpretation par excellence. Only by construing grammar in this overtly prescriptive way is Wittgenstein able to link up its supposed ‘rules’ to those of games. If that link fails, the Wittgensteinian concept of a ‘language-game’ fails, and with it the whole account of language presented in Philosophical Investigations.

It is important here not to confuse rules with regularities. Are A and B, illiterate and innumerate, neither of them capable of giving reasons for their actions, none the less capable of following rules? Simply because their behaviour exhibits regularities which they themselves can neither describe nor explain? Aristotle would be turning in his grave. Declaring A and B to be acting rationally is not a conclusion reached on the basis of confusing rules with regularities. But it does require opting for a different interpretation of rationality from Aristotle’s. The rationality of what A and B are doing consists in the reciprocal integration of their activities by means of signs. Furthermore, these signs are based solely on operational discriminations. Nothing more is required, no higher level of mental activity.

Someone will doubtless object: does not your ‘integrational’ account of the language-game reduce A and B to mere machines? Not necessarily. B might always decide to withdraw his labour, to ‘go on strike’, thus bringing the building programme to a standstill. Machines cannot voluntarily go on strike: they can only break down. In the scenario sketched above, A and B remain at all times in control of the language-game.

According to the integrational account, A’s actions anticipate B’s, which in turn presuppose A’s. That is what makes their signs part of an integrated language-game. What each of the participants does is contextually and systematically relevant to what the other does within the same temporal continuum and the same programme of activities. It has nothing to do with truth. It is a conception of communication which lies beyond the reach of Aristotelian logic altogether. It proposes an account of meaningful human
interaction that is radically different, in theoretical basics, from any other account that has been proposed in the Western tradition.

Some theorists, undeterred by Wittgenstein’s sad example, still go on constructing ‘primitive’ languages and language-games, in an effort to ‘explain’ how more complex languages operate. Invariably they proceed by copying what they take to be simple analogues of ‘real’ linguistic structures, or parts thereof, into the Mickey Mouse models they have set up to throw light on the more profound workings of human communication. What they fail to realize is the complete futility of proceeding in this way. For the mini-models they construct invariably have a semiology which bears no relation to the complex semiology of actual human languages. The error consists in supposing that the semiology of German, English, etc., can be projected back, without distortion, on to those allegedly ‘primitive’ structures that the theorist’s misguided quest for simplification has left standing.

This is an intellectual trap that the integrationist approach to communication avoids from the outset. It is not a model-building enterprise. The ODs identified above are not artifacts of invented models, but necessary features of the semiology basic to all verbal communication systems, of whatever level of complexity. How do we know this? We know this because it is possible to state unambiguously the ODs required for any deliberate human activity, and there are no grounds for supposing that any forms of communication would turn out to be an exception to this. Such discriminations are needed as much by preliterate as by literate communities, and as much by numerate as by innumerate communities (if any surviving examples of the latter are ever found), as well as being involved in the elementary processes of language acquisition through which every human child must pass on the road to his or her full membership of those communities.
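At the risk of doing exactly what the paragraph above warns against, the OD inventory arrived at in this chapter (though emphatically not the semiology of Constructionese itself) can be tabulated ‘from the outside’ in a short bookkeeping sketch; Python is used purely for illustration, and every name in it is illustrative rather than part of the language-game:

    # An outsider's bookkeeping of the ODs discussed above -- not a model
    # of Constructionese's semiology, merely a tally of discriminations.
    CALLS = ("block", "pillar", "slab", "beam")  # four classificatory ODs

    def respond(call):
        # Each call functions at once as classifier and as prompt to act
        # now: B fetches immediately; he does not 'save up' calls.
        if call not in CALLS:
            raise ValueError("not a sign of the game")
        return "fetch one " + call   # 'at-least-one-X', never 'no-X'

    # The temporal OD structure: each call is integrated with the very
    # next fetch, one A-B correspondence after another.
    for call in ("slab", "block", "slab"):
        print(call, "->", respond(call))

The sketch records only what the chapter enumerates: a quadruple classification of calls, the ‘at-least-one’ discrimination, and the strict call-by-call temporal correlation.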
10 The Literate Revolution and its Consequences

WRITING AND THINKING

The circuitous route taken in the preceding chapters now brings me back to the key question of how the advent of writing, and more particularly the entrenchment of writing and reading as habitual practices, eventually effected profound changes in the way (literate) human beings think. The answer I propose is: by setting up new operational discriminations in human behaviour. The literate brain deals with new ODs by adapting its existing neuronal circuitry. The literate mind deals with them by attributing a new dimension to ‘words’. The scribe deals with them by learning to integrate the manual practices of making marks on a surface with the oral practices of speech. Let us focus upon the second of these.

With the advent of habitual literacy, language is no longer regarded as being on a par with the use of non-verbal signs or other activities. It becomes something quite ‘special’. That assumption underlies the ancient Greek conception of logos, which is invoked by Greek philosophers to distinguish humanity from other living species. If any parallel assumption is made in preliterate communities, the Greeks of Plato’s day knew nothing about it, and anthropologists in our own day have yet to report it. (On the contrary, a common belief in preliterate societies seems to be that animals have their own language(s), although their ‘speech’ is incomprehensible to human beings.)

A word of warning is immediately called for. This literate assumption about the unique status of human language is to be distinguished from the ethnocentric version of it that the Greeks liked to promote, i.e. that a mastery of Greek was the indispensable criterion for having logos. (No Greek philosopher of the Classical period had to spell this criterion out. It sufficed to pretend to be oblivious to the existence of foreign languages and choose all your linguistic examples from Greek. The Romans could not afford to be so narrowly ethnocentric in outlook, because they borrowed much of their intellectual agenda from Greece, and the Roman upper classes brought up their children bilingually anyway: Quintilian, Institutio Oratoria: I.i.12–14.)
Once you have thoroughly assimilated the idea that everything spoken can be written down (because this is what your education as a literate person has taught you to do) various corollaries tend to ensue. One is that those who cannot do likewise—i.e. cannot read and write—are ‘backward’ or less intelligent. (This conviction, transferred on to a macrosocial level, comes out as the belief that whole communities who cannot read or write must be inferior to yours. This is the origin of the adoption by 19th-century anthropologists of writing as marking the great divide between ‘civilized’ and ‘barbaric’ societies.)

Even more important is the corollary that words are, by nature, inscribable. Whatever cannot be written down is not a form of words at all. So writing becomes, as it were, the guarantee of verbal authenticity. Hence the unrivalled—even hallowed—status of written texts in law and other forms of verbal transmission. Rosalind Thomas points out that when Heraclitus composed one of the earliest known works in prose, he is said to have deposited it in a temple, and that when the first central archive was established in the city of Athens at the end of the fifth century BC it was housed in a shrine (Thomas R. 1989:31). The major book-based religions such as Islam and Christianity are creations of cultures in which exactly such a reverence for the written word prevails.

These are some of the social consequences of assumptions about writing that the literate mind feels no need to justify. But more pertinent to the present discussion are assumptions that the literate mind makes about language itself, once the accepted correlations between speech and the writing system are institutionalized. The first assumption—which has remained more or less constant in the Western tradition over the centuries—is that speech and writing are in some way equivalent forms of linguistic expression; that one can be substituted for the other without affecting the ‘content’ of the message in any way. The basis for this supposed equivalence is nothing more than the facility that a literate person has for reading aloud what is written down and writing down what is said. It is the complementarity of the relevant ODs that gives rise to the belief in their linguistic equivalence. But in assuming that equivalence, the literate mind overlooks the fact that the complementarity is never exact. Perhaps the clearest demonstration of this is to be found in the endless search for ‘narrower’ transcriptions of speech that occupied many phoneticians in the 19th and 20th centuries.

Phonetics, it has been said, is ‘probably the least interesting branch of linguistics to a philosopher’ (Blackburn 1994:286). Alas! For if philosophers had paid more attention to phonetics, they might have realized sooner that the distinction between ‘types’ and ‘tokens’ that they readily accepted from one of their own number (i.e. Peirce) has a fatal flaw, at least in its primary application to language, where Peirce first introduced it. Matters were not helped by the fact that many philosophers forthwith proceeded to assimilate the distinction between types and tokens to their more familiar, traditional
distinction between classes and members of classes. (This is not the place to embark on a detailed critique of that assimilation, but the difference hinges on the fact that the token is assumed to exhibit the form of the type, whereas treating your pet poodle as a member of the class ‘dog’ does not involve supposing that all dogs look like your poodle.) Linguists, on the other hand, recognized as long ago as the 1930s (Chao 1934) that no extensive corpus of oral material from any linguistic community will have a unique, empirically determinable set of types (phonemes) underlying it, although that illusion is fostered by familiarity with alphabetic writing. The truth is that writing, as one linguist sums it up bluntly, ‘cannot handle actual utterances at all’ (Love 1990:110).

The literate mind, however, is ever willing to make exactly the opposite assumption: that writing can not only ‘handle actual utterances’ but can and does identify them ‘correctly’. It is this that imposes a normative straitjacket on the entire community’s thinking about speech. An interesting piece of evidence that the Greek literate mind was already conflating grammata with stoikheia as early as the fifth century BC is Herodotus’s statement about the Persians in Book I of his Histories, to the effect that, although they did not realize it themselves, the Persians all had names ending in the letter S (san). Since this remark could hardly refer to Persepolitan cuneiform, it presumably applies only to Greek transcriptions of Persian names. But Herodotus does not seem to notice the difference between a sibilant consonant and a corresponding alphabetic letter. A fortiori he does not notice the difference between two quite different interpretations of what a Persian’s ‘name’ is.

Once the confusion between complementarity and equivalence is commonplace, it powerfully reinforces the assumption that writing can ‘handle actual utterances’. Why should it not? For if speech and writing are simply alternative forms of the same linguistic activity, it seems absurd to doubt that what is present in writing is also present—in an oral form—in speech. Even Plato, who was the first philosopher to realize clearly that writing cannot handle discourse, probably did not see that writing cannot handle utterances either. On the contrary, he may well have supposed that utterances were exactly what writing could handle, and that the misplaced general confidence of educated Greeks in the reliability of writing (signalled in Athens by the ‘official’ adoption of the Ionian alphabet in 403 BC?) was due to a failure to recognize the difference between (as we would nowadays put it) speech and speech-acts. But the metalinguistic vocabulary available to Plato (to judge by Phaedrus and Letter 7) is not sufficiently well developed for him to make that point clearly and concisely. His scepticism about writing struggles within the limits imposed by linguistic theorizing that has not yet got as far as recognizing more than two parts of speech and is still conflating grammata with stoikheia.

Plato was not the only well-educated Greek of the Classical period to have misgivings about writing: Isocrates voices similar concerns about the
shortcomings of the written word (Discourses: To Philip 25–6; Letters I, To Dionysius 2–3). There seems to have been a certain ‘hostility towards writing, or rather towards certain uses of writing’ (Harris W.V. 1989:92) among ‘educated Greeks of the fourth century’. But it had more or less evaporated by the time the Stoics elaborate their linguistic doctrine. The Stoics canonize the scriptist assumption that sounds and letters are just facets of the same underlying unit. They distinguish three properties of this ‘phonetic-orthographic unity’, which ‘continued to be distinguished throughout Antiquity, their Latin names being potestas (power), figura (shape), and nomen (name)’ (Robins 1997:30). Consequently the educational system institutionalized in the curriculum of the medieval universities, which would have been impossible without the transmission and study of written texts, was based all the time on a fundamental misconception about writing itself.

How difficult it is to break out of the normative straitjacket that writing imposes, even when one is trying to, is well illustrated by one of the earliest papers attempting to analyse linguistic ‘correctness’ from the point of view of modern linguistics. This was a paper published in 1927 by Leonard Bloomfield under the title ‘Literate and illiterate speech’. It is still of interest as an exemplification of the tangled social, psychological and educational assumptions that surround the issue. Most of them have remained unchanged since Bloomfield’s day.

In this paper Bloomfield accepts that it is commonly supposed in literate communities that the currently accepted written language shows how the language ‘should’ be spoken.

The popular explanation of “correct” and “incorrect” reduces the matter to one of knowledge versus ignorance. There is such a thing as correct English. An ignorant person does not know the correct forms; therefore he cannot help using incorrect ones. In the process of education one learns the correct forms and, by practice and an effort of will (“careful speaking”), acquires the habit of using them. If one associates with ignorant speakers, or relaxes the effort of will (“careless speaking”), one will lapse into the incorrect forms. (Bloomfield 1927:84)

Bloomfield has no hesitation in rejecting this account altogether, together with its fundamental assumption. According to Bloomfield, ‘There is no fixed standard of “correct” English.’ The first point to note is that this is presented not as a theoretical posit but as a ‘fact’; and that what is wrong with the popular view, according to Bloomfield, is that it does not ‘correspond to the facts’. (The linguistic facts, it goes without saying, are those accessible to linguistic science, not to the public at large.) What has ‘really’ happened, according to Bloomfield, is that in literate communities people are confused into condemning those forms which are not sanctioned in the written language.
It is the writing of every word for which a single form is fixed and all others are obviously wrong. It is the spelling of words that ignorant people, or better, unlettered people, do not know. It is writing that may be done carefully or carelessly, with evident results as to correctness. (Bloomfield 1927:85)

But Bloomfield is already overstating his case here. For there are literate societies in which (some) spelling variants are tolerated, and the English of Bloomfield’s day, with its manifest differences between British and American spelling, is an obvious example. The point is worth making, because in condemning the implicit scriptism of the ‘popular’ view Bloomfield is making a tacit scriptist assumption of his own, i.e. that linguistic subcommunities of the global English-speaking community are to be distinguished according to the (rigid) spelling conventions they adopt.

Bloomfield goes on to argue that differences between ‘correct’ and ‘incorrect’ forms are also recognized by speakers of Menomini. The Menomini speakers studied by Bloomfield comprised a community of some 1700 living in Wisconsin. Furthermore, the differences between what is ‘correct’ and what is ‘incorrect’ in Menomini correspond in various respects, according to Bloomfield, to those recognized in English. The point of this comparison, Bloomfield stresses, is that Menomini has no written form and is not taught in schools. So it cannot be the writing system that is responsible in this case for confusing the issue of what is and what is not ‘correct’.

At this stage in Bloomfield’s argument, however, something odd begins to emerge. As evidence for the parallel between English and Menomini, Bloomfield introduces some phonetic transcriptions of Menomini utterances. Now since the Menomini are preliterate innocents, uncorrupted by writing, it is relevant to ask what these transcriptions represent. The only answer available seems to be that they represent what a literate investigator, i.e. Bloomfield (who admits honestly that he has only a ‘slight’ acquaintance with the language), hears a non-literate informant as ‘saying’. But ex hypothesi this cannot be what the informant hears, since the informant is not hearing speech through the grid of categories imposed by a writing system. In other words, Bloomfield is just as committed as anyone to the scriptist assumption that writing can ‘handle actual utterances’. He is in fact confusing his own ODs with those of the informant. That confusion is itself a clear illustration of how literacy has affected Bloomfield’s thinking.

ANALYSING UTTERANCES

The reason why writing cannot ‘handle actual utterances’ is that all known glottic writing systems opt for atomic units, which are then distributed sequentially for ‘representational’ purposes along a hypothesized speech continuum. There are only three such traditional systems in general use.
In ‘logographic’ writing the atomic units correspond roughly to spoken ‘words’. In ‘syllabic’ writing they correspond roughly to spoken ‘syllables’ (a unit already recognized by Greek grammarians). In ‘alphabetic’ writing they correspond to consonantal or consonantal and vocalic ‘segments’ of the speech chain. There are also ‘mixed’ systems which combine various features of the foregoing. But none of these systems captures with any approximation to accuracy the facts of speech as revealed by modern experimental phonetics. It was not until the invention of the sound spectrograph that linguists were in any position to realize exactly to what extent and in what detail writing systems ‘misrepresent’ those facts. Once this was apparent, however, there was no longer any excuse for failing to recognize that writing is a mode of communication sui generis and has no intrinsic connexion with speech at all. This should already have been evident from the long history of musical notation, but linguists turned a professional blind eye to music. Instead they devoted a great deal of vain effort to devising an ‘International Phonetic Alphabet’, which would allegedly overcome the inadequacies of all traditional writing systems. In so doing, they revealed their own inability to grasp that the alphabet itself was part of the problem. Since the investigation of language and languages is the professional brief of the linguist, there could hardly be more eloquent testimony to the way scriptism makes itself invisible to those most addicted to scriptist ways of thinking.

One reason why linguists felt obliged to fall back on a rejuvenated alphabet was doubtless that, in keeping with a centuries-old tradition, they were academically wedded both to the belief that writing ‘depicted’ speech, and to the more ethnocentric assumption that alphabetic writing was the most ‘advanced’ form of writing possible. But behind this lay another reason. It is one thing to realize that writing cannot ‘handle actual utterances’. It is quite another to realize that speech cannot ‘handle actual utterances’ either; at least, not without a certain amount of help from a writing system. As Nigel Love points out, without writing to provide the requisite support, it would be formidably difficult to construct the kind of metalanguage for discussing speech that we nowadays take for granted as being available. The task is aided enormously by having another medium—not that of speech—to which features of an utterance can be related. This gets round the obstacle of having to have names of utterances which are ‘homophonous with the corresponding utterances themselves’ (Love 1990:109). No preliterate community, as far as is known, ever developed a purely oral way of dealing with this problem (e.g. by inventing a suffix that could be added to any word and meant, roughly, ‘word-form’).

Developing a written counterpart to spoken language removes the difficulties attaching to a purely oral practice of metalinguistic discourse. For although type-token ambiguities may arise for writing as for speech, writing provides a firm anchorage for at least one dimension
of type-token distinctions, by providing a medium for displaying types which is different from the medium in which the corresponding tokens are produced. (Love 1990:110)

As noted earlier, it would be nonsensical to suppose that the first letter of the Greek alphabet was the type of which the vocalic sound [a] was a token. But having the same name alpha for both makes it plausible to talk as if the letter unambiguously identified a certain class of vocalic sounds (whereas in fact it does nothing of the kind).

DECONTEXTUALIZING THE WORD

A second fundamental change is closely related to the first. It involves reinterpreting ‘the word’ as a decontextualized unit with a decontextualized meaning. In preliterate societies, words are part of the vocal activities of speakers, and what words mean is part of what the speaker means on some particular occasion. The hearer, who may or may not understand what the speaker meant, plays only a passive, secondary role. In literate societies, by contrast, words are abstractions that take priority over both their vocal and their scriptorial manifestations, and over both speakers and hearers simultaneously. The speaker loses his privileged role of being in absolute control of what his utterances mean, for the words used can now be referred for arbitration to a source independent of both speaker and hearer, i.e. the practices of written discourse, assumed to be controlled by those with a superior knowledge of the language. At the same time, it is in virtue of (i) the divorce thus effected between speakers and their words, and (ii) the assumed equivalence between writing and speaking, that ‘what the speaker said’ can supposedly be transmitted to wider audiences and to future generations. In brief, the word becomes autonomous at a level that cannot be adequately conceptualized in any preliterate community, since there is no ‘alternative’ mode of existence for words.

It is important to grasp the role that the availability of this alternative plays; or, to put it more explicitly, how the autonomy of the word and its meaning depends on the concurrent duality of modes of expression. The actual forms of expression—oral or visual—matter less. One can imagine a science-fiction community in which writing was the only form of linguistic communication. In that community, the word would have a role much closer to the spoken word in terrestrial preliterate communities of the kind we are familiar with, since in a community without speech words would be intrinsically written units. There would be no possibility of ‘uttering them aloud’, hence no possibility of relaying the written message by oral means to those unable to read the written signs. Such a community might nevertheless have its own dictionaries. There would be nothing, in principle, to prevent their compilation, since the ODs
required for listing written words do not depend on the availability of speech. Systematic verbal decontextualization, of which dictionaries represent the ultimate manifestation, does depend on one feature by which writing differs from speech; namely, the greater flexibility of rearrangement afforded by the quasi-permanence of the written sign. If we wish to establish a closer parallel between a preliterate society and our science-fiction community, we shall have to stipulate that, in the latter, writing disappears almost as soon as it is written. There is no permanent marking of any surface. That is what would preclude the development of the dictionary as a social institution.

Thus in trying to characterize the effects of the ‘literate revolution’ it is important not to confuse two things that are, surprisingly, all too easily confused. One is the difference between auditory and non-auditory modalities. The other is the difference between relations within the temporal continuum. This point merits elaboration, which may at first sight appear to be a digression. But it is not.

All human activities are time-bound. Communication in all its forms is governed by what may be called a ‘principle of cotemporality’. We are obliged to suppose that any act of communication is immediately relevant to—and is to be interpreted by reference to—the current situation, unless there is reason to suppose otherwise. The principle of cotemporality is the basic principle of contextualization. It underpins all integrated activities. If you turn up late and miss the bus, you have failed to integrate your own activities with those of a number of people, among whom the most salient in this instance is the bus driver. This is true in literate and preliterate communities alike. But once writing is available a new range of integrated activities is at the disposal of those who are literate. This is because the written sign has a time-track that is independent of that of the writer, whereas before the invention of sound-recording this was not true of speech and the speaker. Wells touched obliquely on this point when he claimed that the advent of writing made a historical consciousness possible. But that is only one facet of the more general expansion in the gamut of time-factored communication. A written document is itself a kind of time-capsule. With the assistance of writing it becomes possible to ‘bridge’ gaps in time and space which would otherwise be impossible without continuity of contact or reliance on memory. If literacy brings any ‘restructuring of consciousness’, this is where it lies, i.e. in consciousness of time. (Any concomitant change in the conceptualization of space, and in particular distance, is secondary and depends on the technology contingently available rather than on the written sign itself.) What is true, nevertheless, is that with the advent of the written sign space becomes relevant to communication in a quite different way, and that in itself is a revolution in ‘thinking habits’.

Few linguists make any mention of this when discussing the relationship between speech and writing. Writing, unlike speech, has its physical basis
in the organization of spatial relations, independently of both sight and sound. This far-from-obvious truth was eventually recognized and brilliantly exploited in the early 19th century by Louis Braille, who first made it possible for the blind to ‘read’. It is a truth that seems to have escaped Saussure, who never discusses it or uses it to explain the connexion between writing and the decontextualization of ‘the word’ that is recognized as the basis of linguistic education in a literate society. Nor, by the same token, does Saussure seem fully to realize that the basis he himself proposes for a modern ‘science’ of linguistics involves a reverse decontextualization (i.e. retrospectively extracting a purely vocal ‘word’ from the complex script-based unit on which education in literate societies is founded). Once it is admitted that in a literate society ‘it is impossible to ignore (faire abstraction de) this way [sc. writing] in which the language (la langue) is constantly represented’ (Saussure 1922:44), it needs a remarkable effort of ‘abstraction’ for members of such a society to ignore writing and regard their own language as nothing more than a system of purely auditory signs. Is it possible? Even Saussure has doubts about that, for he concedes that although linguists might focus their science exclusively on sound recordings of speech, nevertheless they would need writing to discuss and analyse these recordings (Saussure 1922:44)! So straight away the banished written sign comes back into the picture again. There is no recovering the Eden of preliterate thinking about language. The Fall has occurred, and it is irrevocable.

THE GRAMMAR OF LITERACY

Saussure inherited a tradition in which grammar is ab initio a literate concept.

Up to and including the time of Plato and Aristotle, the word grammatikos meant simply one who understood the use of letters, grammata, and could read and write, and techne grammatike was the skill of reading and writing. (Robins 1997:17)

In other words, the concept of grammaticality has no place in the thinking of a preliterate community. Homer’s Greece had poets, but no grammarians. And linguists who talk about the ‘grammar’ of languages spoken in preliterate communities are superimposing their own literate terminology in areas where it has no application. Linguists routinely do this. Bloomfield describes some of the utterances produced by one of his Menomini informants as ‘ungrammatical’. (It is as if economists talked about ‘income tax’ with reference to societies that had no system of paid labour.)
One of the first tasks of European missionaries to preliterate societies was to give the language of these backward communities a ‘grammar’, and thus make it inscribable. This took precedence over ensuring medical supplies or clean water, because it facilitated the learning by Europeans of these previously unknown languages. The immediate beneficiaries were not ‘the natives’ but their would-be teachers. Subsequently, the natives were able to demonstrate their progress from barbarism towards ‘civilization’ by learning the grammar that their teachers had gratuitously imported. The teachers justified this exercise by claiming that it helped the natives to understand what they already knew, but had never known they knew. (One is reminded of Socrates’ supposed demonstration of the slave boy’s unrecognized ‘knowledge’ of geometry in Plato’s Meno.) But it could more simply be described as imposing a literate view of language on populations to whose thinking such a view of language was entirely alien.

Even today many linguists imagine that recommending a grammar of English to English-born English-speaking students is an exercise in making explicit to the students what they already know, but did not know they knew, because it lay ‘below the level of conscious attention’. This psychological exegesis is endemic in the academic programmes of certain schools of linguistic theory and language teaching. Recognizing the exegesis for the sham that it is constitutes an indispensable first step towards a much-needed critique of modern concepts of grammaticality. What these programmes actually achieve (when stripped of their pseudo-scientific metalanguage) can perhaps best be brought out by imagining a Swiftian school of physiologists at the grand academy of Lagado devoted to writing a complete grammar of how to climb stairs. We can, of course, already climb a flight of stairs. But do we know what we already know about this activity? Not until it has been reduced to a set of explicit ‘rules’: e.g.

1. Put either foot forward on the bottom step.
2. Lean the torso in the direction of the stairs.
3. Transfer the body weight from the back foot on to the front foot.
4. Lift the back foot.

And so on. The Lagadian physiologists might maintain that these abstract ‘rules’ come into play every time anyone goes upstairs. The grammarian of stair-climbing has no need to claim that we actually think about the rules every time we apply them. Or even that we would necessarily analyse what we do in this way, if explicitly called upon to give an analysis. After all, hens must know how to lay eggs, even if they would be hard put to it to explain just how.

The grammar of stair-climbing may sound quite ridiculous. And so it is. But it is exactly this Lagadian concept of grammaticality that underpins the metalinguistics not only of discussions of grammar but also of discussions of reasoning. For the latter are no more than extensions of the former. (And ‘linguistics’, someone will point out, is what we nowadays call academic reasoning about language.)
For many logicians the appeal to ‘logical form’ remains indispensable. Logical forms are often represented as ‘underlying’ the superficial verbal expression, rather like the mysterious ‘deep structures’ of generative grammarians (to which, from time to time, some generative grammarians—no less mysteriously—assimilated them). When logical form is allegedly made ‘visible’, it usually emerges as an alternative sentence in a special notation. Thus All swans are white and All ravens are black are said to ‘share a form that can be represented as All S is P’ (Mautner 1997:325). This notion of ‘form-sharing’ between sentences or propositions is hardly intuitively perspicuous, and when we ask what is the underlying logical form of the substitute sentence All S is P, there seems to be no answer. Is this a sentence that actually displays its own logical form? Marvels will never cease. Logicians tie themselves in knots in attempting to avoid the admission that logical form is not an underlying structure but just another notation, extrapolated from traditional orthography and supplied with ‘rules’ by logicians for their own purposes.

According to Basson and O’Connor, who devote several pages to explaining what it is, logical form is ‘a metaphorical extension of the notion of form or structure’ that is ‘familiar from other contexts’ (Basson and O’Connor 1959:10). The ‘familiar’ cases they have in mind include sculpture (the form of a statue) and music (the form of a sonata). Form stands opposed to material. The form of a thing, they tell us, ‘is constituted by the way in which its parts are put together and by the mutual relations between the parts’. This explanation makes it all the more difficult to explain in what sense there is any analogy between the form of a piece of sculpture or a piece of music, which is visibly or audibly apparent in praesentia when we are confronted with the work itself, and the relation which supposedly obtains between the logical form All S is P and the words All swans are white. There is no analogy at all. All S is P is simply the outcome of substituting S for swans, is for are, and P for white. There may well be some (unexplained) principle or convention on the basis of which the substitutions are effected. But none of the substitute items are present in the ‘material’ of the original. When we look at a statue or listen to a piece of music we do not have to carry out any substitutions in order for the ‘form’ to emerge.

It is sometimes said—even more perplexingly—that the logical form

is the form of a proposition in a logically perfect language, determined by the grammatical form of the ideal sentence expressing that proposition. (Corcoran 1995:442)

However, where this ‘logically perfect language’ with its ‘ideal sentences’ is supposed to reside is far from clear. Here we see an attempt to validate ‘logical form’ by burying it in the depths of an idealized grammar. Noam Chomsky once recognized a grammatical ‘level of representation’ for less-than-logically-perfect languages and called it ‘LF’:
The basic elements we consider are sentences; the grammar generates mental representations of their form and meaning. I will call these representations, respectively, “phonetic form” and “LF” (which I will read, “logical form,” though with a cautionary note). (Chomsky 1980:143)

The ‘cautionary note’ turns out to be an opaque disclaimer to the effect that ‘determination of the elements of LF is an empirical matter not to be settled by a priori reasoning or some extraneous concern; for example, codification of inference’ (Chomsky 1980:143). So now you see it, now you don’t. Logical form turns out to have nothing to do with logic after all. A popular glossary of generative terminology published in the late 1970s has the following entry under logical form:

The logical form is the expression of a sentence in a form involving quantifiers and other logical notions in such a way as to reveal its logical structure. The logical form of the sentence: “The police think who the FBI discovered that Bill shot” is: the police think for which person x, the FBI discovered that Bill shot x. (Ambrose-Grillet 1978:61)

What is here given as the ‘logical form’ is just another English ‘sentence’ of equally dubious syntax, but with the letter x replacing the name of the anonymous victim. Anyone who could have made sense either of the above definition or of the accompanying example, but preferably both, would have been well qualified to set up in business as a generative grammarian (at least in the days before that business fell on hard times).

‘Logical form’ and ‘deep structure’ are both creations of a literate mind, deploying the resources of writing in an attempt to validate ‘abstractions’ that a preliterate mind could not even begin to cope with. That may be counted progress of some sort. But whether it is progress towards or away from ‘knowledge’ is another question. Being able to put something down in writing has always been a powerful disincentive to self-questioning about what exactly is being ‘put down’. More precisely, ‘logical form’ and ‘deep structure’ are scriptist inventions conjured up to support a certain view of rationality favoured by Western epistemologists. Richard Rorty has described it aptly as the view that

to be rational, to be fully human, to do what we ought, we need to be able to find agreement with other human beings. To construct an epistemology is to find the maximum amount of common ground with others. (Rorty 1980:316)

Where is this ‘common ground’ to be found? As Rorty points out, it is often sought in language:
Within analytic philosophy, it has often been imagined to lie in language, which was supposed to supply the universal scheme for all possible content. To suggest that there is no such common ground seems to endanger rationality. (Rorty 1980:316–17)

But it only endangers rationality if both ‘rationality’ and ‘language’ are construed in the way officially sanctified and promulgated in the Western literate tradition. The paradox is that those who have made such a song-and-dance about ‘logical form’ and ‘deep structure’ (Russell in philosophy and Chomsky in linguistics) are precisely the theorists who seek to drive an epistemological wedge between language and communication (Green 2007:124–5). The aristocratic Russell could not bear to think that ‘ordinary language’—as used with all its imperfections for day-to-day communicational purposes by an ignorant populace—is the ultimate repository of human reason. Chomsky, for his part, never tires of insisting that communication ‘is only one function of language, and by no means an essential one’ (Chomsky 1975:69). Had they understood the issues better, they would both have been arguing for exactly the opposite view in order to construct their epistemologies. Chomsky and Russell are both theorists who attempt to achieve self-levitation by tugging at their own scriptist bootstraps. Both attempts fail, unsurprisingly. For these bootstraps are firmly secured to ancient assumptions handed down in the Western literate tradition: in the one case those of traditional grammar, in the other those of traditional logic.
11 The Fallout from Literacy

RATIONALITY AND POST-LITERACY

The West has already entered the first phase of the post-literate era prophesied by Wells in 1895. Television has replaced print as the primary source of information about national and international events. For the pocket-calculator generation of schoolchildren, ‘mental arithmetic’ is a skill of the past. British university libraries now throw out two million books a year, to make room for ‘much-needed PC and laptop facilities’ (Times Higher Education Supplement 16.11.07, p.1). Many pages have been devoted during the past quarter of a century to lauding the advantages that literacy brought to the Western world. The time is ripe to attempt a review of the fallout.

To summarize thus far, the ‘literate revolution’ in Western thinking seems to have been first and foremost a revolution in the way people thought about their own linguistic experience; hence about the mental operations that could be regarded as exemplifying rationality, given their assumption that rationality was somehow manifested in, or might even be equated with, certain operations with words. ‘How to do things with words’ could always have been the epigraph for a Western do-it-yourself manual of rationality. But what you can do with your words depends in part on what other people can do with theirs. That is why rational agents inevitably become involved in integrating their activities with the activities of others.

The key to the literate revolution was treating words as having a decontextualized existence of their own, governed by forms of organization not imposed on them arbitrarily ‘from without’. This is what always distinguished grammar from poetry in the Western tradition, where the rules of metre are devised by poets, but the cadences of conversation are not. No one ever spoke or wrote spontaneously in Homeric verse or in sonnet form. In ancient Greece, the recognition of poetry as an ‘art’ (techne) long preceded the recognition of grammar as an ‘art’. Nevertheless, grammar was conceptualized from the very start as sensitive to the same kinds of considerations as poetry, i.e. the audible features of speech (the play of sequential alternation between vowels, consonants, syllables, etc.) and their concatenation. That is why the grammarian’s very first job (as Dionysius
Thrax saw) is to set out a classification of the elements of speech. It is only then that the question arises of describing their concatenations. That cannot be done without having a notation and a terminology in which to do it. Conceivably, a special notation and a special terminology might have been invented just for that purpose. But the alphabet lay ready to hand. One can hardly be surprised that Dionysius (and many subsequent generations of grammarians) regarded it as quite adequate for that elementary expository task. As a result, grammar was inevitably intertwined with—even identified with—the arts of reading and writing. It took many generations for educated Greeks to realize (insofar as they ever did) that grammar might be anything more.

That ancient picture, visible only through the spectacles of literacy, was reproduced in the hierarchy of disciplines that became the text-based trivium of the medieval universities. Grammar was basic. Logic was what could be done on the basis of grammar. Rhetoric presupposed both. It would never have occurred to Abelard, any more than to Luria, that one could proceed to the syllogism without the grasp of grammar implied in literacy.

Once literacy becomes established, it begins to invent its own myths about preliteracy. The notion that in preliterate Greece there existed a living oral tradition capable of preserving for centuries, from one generation of bards to the next, a single, definitive version of poems thousands of lines long is itself a fabrication of the literate mind. To suppose that this could ever have been the normal mode of transmission of lengthy oral compositions (an assumption called in question by the work of Lord 1960 and Parry 1971) is simply to conceive of them as written texts minus writing. The evidence, on the other hand, suggests that in the early centuries of writing, when professional scribes make their first appearance on the scene, it takes a long time even for the exact copying of documents to become established practice (Thomas R. 1989:47–9). The very notion of verbatim replication is the product of writing.

In the modern world, however, the literate revolution has been carried one stage further. Nowadays the literate mind believes that its reasoning can safely be handed over to machines. These machines are themselves alphabetically literate: their alphabet consists of just two letters, 0 and 1. Unlike human beings, they have a built-in grammar from which they cannot deviate: that is what makes their pronouncements so reliable.

What is totally absent from Aristotle’s thinking—and from Greek thinking in general—is any inkling that rationality might be a historical product of social development, and hence be different in different parts of the world or in different strata of society. The Greeks do not see what the longer hindsight of the 20th-century anthropologist reveals, that ‘criteria of logic are not a direct gift of God, but arise out of, and are only intelligible in the context of, ways of living or modes of social life’ (Winch 1958:94). The Greeks were quite oblivious to this, even though Greek logic is
so manifestly the product of one particular stage in Greek culture, and even of one particular educational market within that culture. Aristotle tacitly assumes that there is only one universal kind of rationality, and therefore that those who do not share it and act according to its dictates are—either temporarily or permanently (i.e. by nature)—irrational. That is still the assumption of Aristotle’s followers in the ranks of professional logicians today. It is an assumption based on taking an eminently scriptist view of language.

Given this assumption, there are not many places in which to look for the source of rationality. Karl Popper identifies four ways of interpreting what he calls the ‘rules’ of logic (Popper 1972:207). One is to treat them as ‘natural laws of thought’; that is, as describing how we actually do think, because we cannot think otherwise. The second is to regard them as ‘normative laws’ which tell us how we ought to think. The third is to identify them with ‘the most general laws of nature’ and thus as holding good ‘for any object whatsoever’. The fourth is to maintain that they are ‘laws of certain descriptive languages—of the use of words and especially of sentences’. It would not, I think, be unfair to describe Aristotle as a philosopher who opted for combining the third and the fourth of Popper’s four possibilities.

Aristotle was by temperament a man who sought explanations that made assurance doubly sure, a Greek version of belt and braces. His view of rationality, typically, insists on both the belt and the braces. He propounds a philosophy in which there is a miraculous congruence between Nature and the words available to describe Nature. In this context, the philosopher’s search for truth becomes a search for the points or patterns of concordance between the two. This concordance is the basis of logic. It is what gives logic its role in the semantics of science, a role that it retains today (Harris R. 2005). The concordance is maintained over time by a strategy Aristotle probably would not have approved, i.e. boldly redefining the vocabulary as soon as it becomes a nuisance (i.e. a nuisance to the scientist, not to the hoi polloi). Newton’s ‘absolute space’ and Einstein’s ‘simultaneity’ are paradigm examples. More contentious exploitations are mathematical cosmologists’ postulations of parallel universes. On the basis of new definitions, logic then provides the same guarantees as before.

Definition is the key to this, as George Boole saw in the 19th century. One of the foremost mathematicians of his day, he was described by Russell as the man who ‘discovered’ pure mathematics. In his classic The Laws of Thought (1854), Boole declared that

there exist certain general principles founded in the very nature of language, by which the use of symbols, which are but the elements of scientific language, is determined. (Boole 1854:6)
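What such ‘general principles’ amounted to in practice can be shown in miniature; the rendering below is the standard modern presentation of Boole’s algebra of classes, not a quotation from Boole himself. His governing law is the idempotency of class symbols:

\[ x^2 = x, \quad\text{and hence}\quad x(1 - x) = 0 \]

The second equation Boole read as the algebraic counterpart of the principle of contradiction: nothing can both belong and fail to belong to the class x. The ‘laws of thought’ are thus, from the very start, laws for manipulating written symbols.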
There is no room here for Plato’s brand of conventionalism (as revealed in Cratylus). Conventions become redundant if logic and mathematics are founded in ‘the very nature of language’. At the same time, Boole saw that the ‘laws of thought’ presuppose that certain definitions of terms are already in force. Consequently he himself defined the term sign as

an arbitrary mark, having a fixed interpretation, and susceptible of combination with other signs in subjection to fixed laws dependent upon their mutual interpretation. (Boole 1854:25)

(Contrast this with: ‘a fixed mark, having an arbitrary interpretation …’.) Boole was clear-sighted enough to recognize that the definition he was offering was no more than a definition of written signs (NB ‘arbitrary mark’) and explained this restriction on the ground that ‘in the present treatise’ it is ‘with written signs that we have to do’. Which is as forthright an admission as any that the refinement of logic is an essentially literate pursuit. What he perhaps did not see so clearly, or at least did not admit, is that his conception of definition is one that only a literate community would regard as plausible, and that his ‘laws of thought’, rooted in the grammar of written sign combinations, themselves presuppose literacy.

It is interesting to note that when Boole comes to justifying the use of variables he does no more than refer blandly to the arbitrariness of the written sign. Since the sign is an arbitrary mark, ‘it is permissible to replace all signs of the species described above [sc. words in their everyday orthography] by letters’ (Boole 1854:28). In short, he interprets arbitrary as meaning ‘changeable at will’. This is the interpretation explicitly rejected by Saussure in setting up ‘the arbitrariness of the linguistic sign’ as the ‘first principle’ of modern linguistics (Saussure 1922:100). From the linguist’s point of view, nothing could be more profoundly mistaken than to suppose that signs can be altered on the basis of an arbitrary decision (Saussure 1922:104–5). The reason for Saussure’s insistence on this shows the great gap between his idea of arbitrariness and Boole’s. Linguistic signs, although themselves arbitrary, cannot be altered arbitrarily, because the existence of a language cannot be divorced from the whole community whose communicational purposes it serves, whose collective usage makes it what it is, and whose fashioning of this complex collective instrument is determined by social forces operating over time (Saussure 1922:112–3).

The point is of crucial importance as regards rationality. Boole’s approach shows literate thinking at its most facile: a written sign is just a mark on paper. Any mark will do, because one mark is as good as another. What this reflects is the progressive decontextualization of the sign that Western literacy feeds and feeds upon. It reaches its maximum with the invention of printing, which makes possible in a literate community the mass production of anonymous texts with wide circulation.
Anonymous authors cannot be tracked down. The text itself appears to contain its own message, irrespective of authorship. It is at this point that churches and governments start to become concerned. For they are suddenly confronted with a new and powerful instrument for moulding public opinion and social change. The conservative reaction—government censorship, passing laws to criminalize the printing of publications not licensed by the state, the institution of the Roman Catholic Index of prohibited books, which ran from 1559 until its abolition in 1966—could not in the end prevent the establishment of a new, unprecedented ‘print culture’ in Europe (Eisenstein 1979).

The relevance to rationality of this development of literacy is unmistakable. With the invention of printing came an important parting of the ways. Logicians were slow to realize it, and by the time they had woken up it was too late. They could have chosen to reject Aristotle altogether. Locke, with his scarcely veiled contempt for Aristotle, offered them the opportunity and the theoretical basis for such a revolt. He proposed an empirical study of words and ideas, which he called by the Greek name semeiotike, the ‘doctrine of signs’. The investigation of verbal signs, he believed, could ‘afford us another sort of logic and critic, than what we have hitherto been acquainted with’ (Locke 1706: IV.xxi.4). But neither the term nor the idea was taken up until the pragmatists began to take an interest in the subject in the 19th and 20th centuries. Peirce echoed Locke’s Greek term, but he became too deeply involved in developing logical notation, and failed to follow up Locke’s idea in as radical a spirit as John Dewey.

Dewey thought—quite rightly—that formal logic had lost touch with practical reasoning and the experimental sciences. He called his logic ‘the theory of inquiry’ and described it as ‘instrumental’ or ‘experimental’ logic. Its aim was to capture the principles on which successful empirical inquiry is conducted. For formal logicians, this always sounded like a retreat to psychology (which Dewey denied) or sociology. It is interesting to note Dewey’s reason for declining to develop a formal notation for his subject; namely, that it would be premature in the absence of a clearer understanding of linguistic signs, and that itself is a subject of inquiry. Saussure, had he lived to read Dewey, would have warmed to him immediately. Logic, for Dewey, was ‘a social discipline’, because ‘inquiry is a mode of activity that is socially conditioned and that has cultural consequences’ (Dewey 1938:19). Whether Dewey successfully carried out his own programme is a different question: but his insight into the pragmatics of rationality cannot be dismissed out of hand. He recognized that ‘logical relations between propositions themselves depend on social relations between men’ (Winch 1958:118). Which is another way of re-affirming the integrationist thesis that what you can do with your words depends in part on what other people can do with theirs.

Winch’s formulation is a useful one. It focuses our attention in retrospect on what was happening in Aristotle’s syllogistic with the introduction of variables. The first steps are being taken towards divorcing ‘logical
relations’ from ‘social relations’, i.e. removing the concept of rationality from the everyday activities of human beings dealing as best they can with everyday situations, and relocating it in a hypothetical realm of possibilities. Recognizing actual causes and effects, together with the practical connexions between them, thus becomes of secondary importance. The driving force in this relocation is the social institution of literacy, which sponsors the conception of words as decontextualized bearers of meanings. That makes it feasible to identify ‘propositions’ as unsponsored combinations of words. The sentence All men are mortal does not need a sponsor: it just ‘exists’ in its own right. Rationality is then reintroduced as a matter of recognizing certain patterns of relationship between propositions, as distinct from attending to detectable relationships between actions and their consequences. The plausibility of this shift is grounded in the parasitic nature of logical formulae, which retain—for the literate mind—an acceptable status as quasi-orthographic linguistic forms, maintaining a tenuous parallel with the latter.

At this stage, the training of the would-be logician is primarily a training in the systematic substitution of logical notation for ‘ordinary’ linguistic forms. Once these substitutions have become habitual, the ‘ordinary’ forms can be forgotten and matters can now proceed as if with a higher-order form of writing. Logic has thus, step by step, become a superordinate level of literacy: it requires familiarity with a kind of ‘thought writing’ (Begriffsschrift, as Frege called it), or at least a belief in the possibility of such a form of communication.

The final steps in the relocation come with those extensions of symbolic logic which make it possible to award the accolade of rationality to complex operations far beyond the capacity of any normal human mind to execute. The burden of safeguarding rationality is thus transferred from human beings to machines, which do possess the requisite ODs. The digital computer, as its developers proudly boasted, is the supreme ‘logic machine’. It can tell you ‘what follows from what’ much faster and far more reliably than you can, or any fellow human being. The idea of a logic machine goes back at least as far as Ramón Lull’s Ars Magna in the 13th century, and one built by William Jevons was exhibited at the Royal Society in 1870. What has been happening along the way in this long-term transfer of authority and responsibilities (from Aristotle, to the medieval scholastics, to the authors of Principia Mathematica, to the computer) is a gradual, progressive dehumanization of reason.

It is no coincidence that entries for logic and phrases including the word logical take up two and a half pages in a popular Dictionary of Computers. One of these entries declares roundly: ‘Logical entities are derived from physical entries by means of operating system software’ (Chandor 1977:248). A slight rewording of this formula would capture one of the main theses I have been arguing for: logical entities are derived from linguistic signs by means of writing.
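The substitution can be exhibited in miniature. Taking the unsponsored sentence mentioned above, the would-be logician learns to rewrite it as follows (the notation is the familiar modern predicate calculus, given here purely by way of illustration):

\[ \textit{All men are mortal} \;\longrightarrow\; \forall x\,(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)) \]

Quantifier, variable, brackets and arrow exist only as written signs; no item-for-item counterpart of them can be found in any spoken utterance of the original sentence.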
The ‘instructions’ on the basis of which the computer and its programs operate consist of ‘a series of characters subdivided into groups which represent coded commands to the computer’. The computer, it goes without saying, has to be able to ‘read’ them. If these are just metaphors, they are metaphors addressed to a ‘literate’ audience. But understanding them requires a level of literacy far in advance of Aristotle’s, i.e. ‘computer literacy’. The logic machine imposes its own forms of scriptism. It leaves those of us who are not ‘computer-literate’ in a de-rationalized limbo. What the computer can ‘do with words’ is a lot more than you can. Nowadays we cannot even ourselves judge whether the computer ‘got it right’. We should have to employ another computer to check that. And perhaps a third computer to check the second …? A rationality regress looms.

All this is an inevitable consequence of treating language and logic as if they could be divorced, by intellectual fiat, from social reality. Saussure clearly saw what was wrong with that ‘abstraction’, but the formal logicians of Saussure’s day did not. The rational society that formal logic projects for us is a society in which government ministers have been replaced—if they have not already been—by computers. Society’s presumed goals are fed in as major premises, while statistical guestimates and weightings are fed in as minor premises, and the conclusions come out as legislation. All the machinery is indispensable because the calculations are so complicated that no flesh-and-blood politician could manage them, let alone the electorate. It takes a computer to work out whether all the desiderata are ‘logically compatible’, and, if not, what has to be sacrificed to what in order to produce a ‘rational’ outcome (i.e. the best compromise possible). The rational society is, to be sure, an Orwellian nightmare, and no one will vote for it. The snag is that the voters cannot be sure, ‘rationally’ speaking, whether they are voting for it or not. The poor neuronal computers in their brains are just not up to working that out. They have to fall back on voting for a television image, or—more rarely—a ‘policy’ that sounds as if it might make the world a better place if only it could be executed.

ANOTHER VIEW OF RATIONALITY

Is there no alternative view of rationality? Yes, there is. But it requires us to give up some of the scriptist tenets dearest to the literate mind. Dewey tried to reintegrate logic with society, and the penalty he paid for that was to be passed over in silence by historians of logic. But it seems a small price to pay for the intellectual distinction of promoting such a radical idea. In spite of his disastrous type/token distinction, Peirce too was on the right lines when he insisted that ‘man makes the word’ and that speaking and writing are themselves forms of thought, not imperfect translations of
forms of thought into audible or visible tokens. He was also right in insisting that meanings do not exist independently of interpreters. The first scriptist assumption that needs to be dropped is the assumption that words somehow carry their meanings invisibly around with them. This is an assumption enshrined in the practices of modern lexicographers. It is a form of the superstition that anthropologists call ‘word magic’ when they detect it in preliterate societies. But word magic somehow escapes their attention when it turns up on page after page between the covers of the Oxford English Dictionary.

Part of what is involved in abandoning word magic as a basis for explicating rationality may be illustrated by reference to the philosophical enterprise that Paul Grice embarked upon a few years ago in an influential paper entitled ‘Logic and conversation’. Grice was committed to what one of his admirers calls a ‘view of communication as a reason-governed activity’ (Grandy and Warner 1986:1). He wanted to show that, contrary to the views of many logicians, ordinary conversation was indeed, most of the time, conducted on logical principles. Occasionally, Grice himself goes further than this, declaring that ‘one of my avowed aims is to see talking as a special case or variety of purposive, indeed rational, behavior’ (Grice 1989:28). He confesses that he is

enough of a rationalist to want to find a basis that underlies these facts, undeniable though they may be; I would like to be able to think of the standard type of conversational practice not merely as something that all or most do in fact follow but as something that it is reasonable for us to follow, that we should not abandon. (Grice 1989:29. Italics in the original)

In short, he is not proposing a programme to investigate what logic there may happen to be in examples of ordinary conversation: he is constructing a set of recommendations for those of us who wish to converse logically (as indeed he thinks we should do). The approach is strikingly reminiscent of that adopted by traditional prescriptive grammarians, and Grice calls his recommendations ‘maxims’.

‘Logic and conversation’ begins with Grice setting out two philosophical views of the supposed divergences between logical notation and the corresponding expressions in what he calls ‘natural language’ (words such as not, and, or, if, all, etc.). What Grice labels the ‘formalist’ view is that logical notation is superior to ‘natural’ language, and that the divergences show up ‘imperfections’ of the latter. By contrast with this, the ‘informalist’ view is that language serves many purposes besides the needs of science, and that many valid everyday arguments are expressed in terms that do not match the devices of formal notation: so there are in fact two logics, a logic of the formal logician and a logic of ‘natural’ language. Grice claims that both sides in this dispute make the same mistake, i.e. both believe that the divergences in question do exist. This mistake arises from ‘inadequate
attention to the nature and importance of the conditions governing conversation’ (Grice 1989:24). In short, both formalists and informalists have an impoverished conception of rationality. In passing, it should be noted that both ‘formalists’ and ‘informalists’, as described by Grice, seem to believe that only in writing (i.e. by courtesy of an invented logical notation) is it possible to represent fully and accurately the way logic works. A preliterate culture presumably could never achieve this, being limited by the oral resources of ‘natural languages’. The assumption wears its scriptist credentials on its sleeve.

Grice’s programme is based on recognizing that in conversation speakers normally imply much more than they actually say in so many words. Nevertheless, hearers understand these ‘implicatures’ because they count on speakers following certain maxims. The maxims operative at this level fall into four categories: Quantity, Quality, Relation and Manner. Under Quality fall the maxims ‘Do not say what you believe to be false’ and ‘Do not say that for which you lack adequate evidence’. Under Manner fall such maxims as ‘Avoid obscurity of expression’, ‘Avoid ambiguity’, and so on. The whole battery of maxims gives effect to an overriding principle of conversation, which Grice calls the ‘Cooperative Principle’. This is: ‘Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged’ (Grice 1989:26). Grice’s claim is that, on the basis of the Cooperative Principle and the relevant maxims, hearers are able to work out (rationally) what is implied by what a speaker says. Grice talks of ‘calculating a conversational implicature’. His project, in brief, is to supply another level of logic operative in conversation, which has hitherto escaped the notice of both philosophers and grammarians. This is not a different logic, but an extended logic. It is important for Grice’s argument that calculating an implicature requires the application of reasoning. If what is implied is clear without any reasoning, then the implicature is not ‘conversational’ in Grice’s sense.

The relevance of Grice’s proposal to the present discussion is this. It is based on a belief in word magic, and once that belief is abandoned the whole construction of an extended conversational logic collapses. Grice would doubtless deny there is any word magic involved, because he takes it for granted, as Aristotle does, that what he calls ‘natural languages’ are already in place and function as fixed codes for purposes of conversation. But without that assumption it becomes impossible to explain how hearers ‘calculate implicatures’. For in Grice’s conversational scenarios there is no presupposition that those taking part are already acquainted or have previously agreed on various matters under discussion. Words are the sole means employed to establish their conversational relationships. Nor as a matter of course do Grice’s conversationalists start by defining their terms. So the assumption has to be that somehow the words used carry their meanings around with them. On hearing the words, hearers immediately know what they mean. Otherwise there would be no rational basis for hearers ‘calculating’ the implicatures as they do.
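What such a ‘calculation’ is supposed to look like is worth setting out. The general pattern Grice gives for working out that a speaker has implicated that q can be paraphrased, in compressed form, roughly as follows (Grice 1989:31):

1. He has said that p.
2. There is no reason to suppose that he is not observing the maxims, or at least the Cooperative Principle.
3. He could not be saying that p while observing them unless he thought that q.
4. He knows (and knows that I know that he knows) that I can see that the supposition that he thinks that q is required.
5. He has done nothing to stop me thinking that q; so he intends me to think that q; and so he has implicated that q.

The very first step presupposes that what the speaker ‘said’ is already fixed by the words uttered. Without that presupposition the calculation cannot begin.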
Grice comes close to conceding as much when he slips in qualifications such as ‘given a knowledge of the English language’ and ‘on the assumption that he was speaking standard English’. More explicitly, Grice admits that in calculating a conversational implicature the hearer must know ‘the conventional meaning of the words used’. (He sometimes refers also, but less perspicuously, to their ‘conventional force’.) The ‘conventional meaning’ is even said to be part of the ‘data’ which the hearer has available. Nowhere does Grice seem to recognize that fictions such as ‘the English language’, ‘standard English’ and ‘conventional meaning’ are derived from latter-day versions of Aristotle’s language myth, as developed by 19th-century lexicographers; or if he does, he accepts them as necessary fictions for the purpose of formulating his conversational logic.

Redefining rationality by adding an extra tier on to traditional logic, in some such way as Grice proposes, is a move in quite the opposite direction to the move that is being proposed here to restore the link between ‘logical relations’ and ‘social relations’. You cannot reinforce shaky foundations by building another storey on the roof. In spite of his advertised concern with everyday conversation, Grice’s method is scriptist through and through. He makes no attempt to examine and compare recorded conversations, as a linguist might do. Or to describe the social background of various types of conversation, as a sociologist might. Grice invents his own examples and projects on to them his own rationalization of what is going on in the minds of the hypothetical participants. That is not the way to go. It is a way that leads nowhere.

WORD MAGIC AND TEXT MAGIC

Another extension of word magic is what might be termed ‘text magic’. Here meaning is conceptualized as somehow inhering in the written text. This is an article of faith with many literary theorists. For followers of the late Jacques Derrida, text magic has become holy writ. According to Derrida any writing, in order to be writing, must remain ‘iterable’ come what may; that is to say ‘in the absolute absence of the receiver or of any empirically determinable collectivity of receivers’. In other words, the meaning is permanently ‘encoded’ in the text itself. This, Derrida insists, applies not just to alphabetic writing but to all forms of writing. ‘A writing that was not structurally readable—iterable—beyond the death of the addressee would not be a writing’ (Derrida 1972:375).

There are at least three points about Derrida’s version of text magic that deserve to attract the attention of anthropologists. The first is its blatant resurrection of a mystical brand of Aristotelian essentialism. What Derrida is claiming to identify is the ‘essence’ of writing. In the Derridean community of literates, the first property of writing is, in effect, timelessness. ‘All writing, then, in order to be what it is (pour être ce qu’elle est) [= the Aristotelian thing in virtue of itself], must be able to function (fonctionner) in the radical absence of any empirically determined addressee in general’
(Derrida 1972:375). Which is to say that it is intrinsically complete. This is a salvo fired in the direction of Peirce. An interpreter is surplus to requirements. The text needs neither the presence of the author nor the presence of a reader. It must, as it stands, be comprehensible, hence meaningful: otherwise it could not 'function' as text.

The second point, which leaves Aristotelian scriptism far behind, is the equation of iteration with reading. In other words, it is the activity of reading that 'repeats' the text, not the activity of a scribe making another copy, or a printer bringing out a new edition.

The third point is that Derrida quite explicitly generalizes these features of writing to all linguistic signs. They are valid 'for all orders of "sign" and for all languages in general' (Derrida 1972:377). There could hardly be a clearer example of the addiction of the literate mind to the unabashed projection of its own experience of writing on to an understanding of every form of communication.

There is a famous reply to Derrida by John Searle, who takes Derrida to task for misinterpreting Austinian speech-act theory. In this fascinating dialogue of the deaf, Searle allows Derrida no square inch of philosophical ground from which it might be possible to dislodge him. Searle argues that Derrida has conflated iterability with the permanence of the written text, and denies the applicability of what Derrida says to the case of speech. But one philosophical issue on which Searle declines to do battle with Derrida is the issue of iterability itself. What Searle says about this is revealing:

any linguistic element written or spoken, indeed any rule-governed element in any system of representation at all must be repeatable, otherwise the rules would have no scope of application. To say this is just to say that the logician's type-token distinction must apply generally to all the rule-governed elements of language in order that the rules can be applied to new occurrences of the phenomena specified by the rules. (Searle 1977)

In short, Searle regards iterability as a logical requirement for linguistic communication: it owes nothing to writing per se. The fact that this allegedly 'logical' requirement (the type/token distinction) is an invention of Western typographic man, having its source in a privileged literate view of the written sign, escapes Searle entirely. Here is the American pot calling the French kettle black.

LOGIC AS FALLOUT FROM LITERACY

Anyone who takes Searle's position seriously has to ask the question 'What exactly is a logical requirement?' From Aristotle down to the present day, the foundations of logical thinking—including all its elaborations by modern logicians—are supposedly based on the twin 'laws' or 'principles'
usually referred to nowadays as 'non-contradiction' (or sometimes just 'contradiction') and the 'excluded middle'. These too are part of the fallout from literacy that Searle and many other contemporary philosophers do not even recognize as having anything to do with literacy at all.

All that needs to be said at this point about the excluded middle (or principle of bivalence) is to acknowledge the fact that modern logicians have developed so-called 'many-valued' or 'nonstandard' logics, and to ask how this would ever have been possible in a preliterate society. The very term nonstandard suggests both a deviation from orthodoxy and an oblique reference to the 'standardization' of linguistic forms that writing sooner or later brings in its train. More specifically, we find the overt admission by exponents of 'nonstandard' logics that, for instance, they are appealing to 'a distinction between logics and their associated calculi' (Ackermann 1967:3). How this distinction could ever be articulated for a preliterate audience it is difficult to see. It depends entirely on having inventories and grids of written signs that can be shuffled around in order to display the unlimited possibilities there are for constructing 'truth tables' once we abandon the idea that there are only two truth-values.

The law of non-contradiction is a rather different kettle of fish. There was always something odd about it, because its champions could never agree on how to apply it. And this is a fatal drawback if it is claimed that the law is 'self-evidently' true. For example, in Language, Truth and Logic, A.J. Ayer criticizes Kant's classic account of the difference between 'analytic' and 'synthetic' judgments. This distinction hinges, very precisely, on the law of non-contradiction. Supposedly, we are not allowed to deny the analytic proposition All bodies are extended, under pain of contradicting ourselves (given the meanings of body and extended). On the other hand, for Kant 7 + 5 = 12 is a synthetic proposition. But Ayer maintains that this does not 'follow from' the reasons Kant gives, since Kant has confused a logical criterion with a psychological criterion (Ayer 1946:78). How could this be if the law of non-contradiction is self-evident? Manifestly, its 'correct' application was self-evident to Ayer, but not to Kant (who nevertheless lectured on logic throughout his career and had himself attacked Christian Wolff's interpretation of this same principle).

Matters go from bad to worse when we find Ayer committing what looks like the same error of which he accuses Kant. He writes:

there is a sense in which analytic propositions do give us new knowledge. They call attention to linguistic usages, of which we might otherwise not be conscious, and they reveal unsuspected implications in our assertions and beliefs. But we can also see that there is a sense in which they may be said to add nothing to our knowledge. For they tell us only what we may be said to know already. (Ayer 1946:79–80)
This is as clear an example of foisting psychological criteria on to a logical distinction as it would be possible to give. What we may or may not be 'conscious of' is a psychological matter. So where does that leave the logical status of the two propositions (1) Analytic propositions give us new knowledge and (2) Analytic propositions give us no new knowledge? Is either of these analytic? If so, why was that not self-evident from the start? If not, it seems curious that the law of non-contradiction depends on synthetic knowledge that is far from self-evident.

'Propositions' cannot be identified by contemplating isolated sentences in a decontextualized vacuum (on a blackboard, or a blank sheet of paper, or as near such a vacuum as it is possible to get). There is no law of non-contradiction that is 'logically' independent of all circumstances, and the supposition that there is such a law—even more so, that there must be such a law if we are not all talking nonsense—is a scriptist illusion.
12 Epilogue
Rethinking Rationality

REPRISE

Rationality—or, at least, most of what we often try, overambitiously, to subsume under that generalization—is a product of the sign-making that supports it. That is what makes it possible for societies in which different modes of sign predominate to have different concepts of rationality. These are the differences which have long puzzled modern anthropologists. In fact, it would be altogether less misleading to abandon the singular rationality in favour of the plural rationalities. What stands in the way of doing this is the prejudice prevalent in literate societies that only literate forms of rationality are 'really' rational.

Tylor and his 19th-century colleagues were right to identify writing as a landmark in the history of civilization, although mainly for reasons other than those they themselves adduced. Similarly, Lévy-Bruhl was right to question whether human thinking is everywhere the same, although wrong to label one kind of thinking 'prelogical'. A preliterate society has its own forms of rationality.

The key factor in the difference between preliterate and literate rationalities is that the written word is a far more complex type of sign than the spoken word. The main reason why this has been overlooked is that spoken and written languages look very similar in respect of morphological and syntactic structures, and what philosophers nowadays call 'compositionality'. (That is hardly unexpected, given that the most widespread systems of writing developed on the basis of prior forms of speech.) Nevertheless, however close the structural correspondence between the spoken word and the written word may superficially appear to be, writing is semiologically far more sophisticated, since its advent introduces a triangulation into the previously binary relationship between sounds and whatever they supposedly 'stand for'. Moreover, this intervention has no precedent in earlier forms of language-based communication. It introduces something that was previously absent from the speaker's oral world; namely, the systematic integration of independent spatial configurations.
AN INTEGRATIONAL VIEW OF MEANING

No attempt will be made here to analyse the many psychological studies of reasoning that have been conducted in recent years. Much of this work is surveyed in Keith Stanovich's book Who Is Rational? (1999). It is not that efforts to develop a 'cognitive theory' of reasoning processes lack interest. In this area of studies it has become commonplace to recognize the phenomena of 'cognitive decontextualization' attendant upon reasoning processes. It is often suggested that apparent examples of irrationality are illusory: the human being turns out to be a rational animal after all, as Aristotle said. According to Inhelder and Piaget:

The majority of psychologists are not interested in logic, and this means they have a tendency to accept what they regard as logically necessary as somehow "given", instead of posing a problem. (Inhelder and Piaget 1964:282)

The irony is that in Piaget's work with children, nothing is more evident than the reliance Piaget himself places on the Aristotelian conception of the syllogism, and in particular on class inclusion. This is characteristic of a much more widespread phenomenon. The researchers, whether in the laboratory or in the field, regularly fail to factor in their own literacy as one of the foundational elements determining the thinking behind their psychological tests.

Similarly Christopher Hallpike, one of the few anthropologists to give Lévy-Bruhl a fair hearing, attempts to apply Piagetian developmental psychology to the problem of the primitive mind (The Foundations of Primitive Thought, 1979), but remains unshakeably convinced of the obvious superiority of 'our' forms of reasoning:

It is mere humbug to claim that, within its context, primitive thought is as effective as formal hypothetico-deductive thinking, since it is a fact that the latter type of thought is more powerful, just as a caterpillar tractor is more powerful than a horse; a horse may be able to do all the work we need, and be more beautiful than the tractor into the bargain, but it would be absurd to maintain that it was stronger. (Hallpike 1979:490. Italics in the original.)

Once both word magic and text magic have been rejected, the major requirement for any enterprise that aims to restore a social foundation for human reason is to find an alternative account of meaning. This too is available. But again it involves giving up that cherished idol of the Western literate mind—the Aristotelian language myth, with its languages that are decontextualized fixed codes, and its communication that is a process of code-based thought-transference.
Instead, what can be proposed is a semiology that treats the sign as involving an integration of human activities, and its meaning as a circumstantial product of that integration. Here there are no decontextualized meanings in circulation, waiting in the wings to be ushered on stage. The integrational approach can accommodate all the levels and modes of semiosis that are actually found in day-to-day human communication (oral, written, gestural, tactile, pictorial, etc.). The integrational sign renders the Aristotelian sumbolon obsolete.

Furthermore, an integrational approach recognizes that literacy induces a new attitude towards words as bearers of meanings. The identification of the word as a self-contained unit has to become habitual for the fluent, effortless practice of writing and reading. Exactly how the word is rendered in any particular script (whether logographic, syllabic or phonetic) is of less importance. What matters is the focus on composing (and decomposing) the message in terms of word-size meaning-bearing segments, however these may be written.

This is not by any means to suggest that preliterate speakers are unaware of the word as a unit in speech. Nevertheless, they may not so readily decontextualize such units in a way that facilitates asking specific questions about them. One reason for thinking this is that even when sophisticated literate thinking has reached the stage that we observe in Plato, where general questions such as 'What is justice?', 'What is truth?', etc. are the focus of attention and discussion, there is still evidently some difficulty in deciding whether the inquiry is an inquiry about words, or about the things or concepts that words supposedly 'stand for', or about both. As long as these possibilities are not clearly distinguished, we are still in the earliest phase of literate thinking about language.

SIGNS AND SOCIETY

All macrosocial forms of organization require the integration of activities by individuals and teams of individuals. Such organizations—schools, hospitals, companies, government departments—have their own integrational structure. This imposes a local rationality on the conduct of those working in them. It may not be very efficient. It may have grown up as the accumulation of practices that were once convenient but are now hard to justify. It may be swept aside tomorrow by the arrival of a new boss who reorganizes the whole enterprise from top to bottom. The point is that the current integrational structure—however 'good' or 'bad' it may be—is what imposes (some) limits on the 'meaning' of (some of) the actions of individuals operating within that framework, i.e. on the signs that it is important to understand if you are working in that organization.

The integrational structure of such organizations takes literacy for granted. It is assumed, for instance, that writing (on the appropriate document) 'The
accounts department will need a copy of this' will be understood. On the other hand, it may also be taken for granted on delivery of the document, without saying so explicitly, that the accounts department will need a copy. The delivery of the document is itself a non-verbal sign to that effect.

The category of integrational signs includes signs of both kinds, verbal and non-verbal. Both are required by the way the organization functions. There is no explicit itemization of these signs (although there may be official 'rules' of conduct that in various ways reflect some of them). Nor could there be a definitive list. For the integrational sign is a function of the ongoing activities involved, and these vary according to the way in which each day's new demands are being met by that organization and its members (including those with years of experience as well as new recruits who joined yesterday, and taking into account that a secretary who 'usually' does a certain job may today be off sick, so that different actions or instructions may be needed). Nor is there any pre-ordained divide between 'action-oriented' and 'word-oriented' meanings. Any procedure may have to be modified in the light of circumstances obtaining when the message is delivered.

Nothing, then, is gained by insisting on a distinction between meanings that are made 'explicit' (in virtue of the words used) and those that are left implicit in the actions themselves. For both the words and the actions are signs susceptible to indefinitely many misinterpretations, both by intelligent and by unintelligent interpreters. Meaning is always in situ, and never secure. To suppose that the meaning is 'fixed', whether by words or by actions or by both in conjunction, is tantamount to believing that signs come with a maker's guarantee. The integrational sign, which bears no such guarantee, whether social or biological, is thus to be distinguished not only from the sumbolon but also from at least the following.

1. The sign conceived of as a physical object of some kind. (Thus the motorist's warning triangle is not a sign when lying in the boot of the car.)
2. The sign conceived of as the product of a public 'rule' giving meaning to physical objects (as in the Highway Code, where we are given explanations of how to interpret certain visual configurations that appear on public notices erected by local authorities).
3. The sign conceived of as a psychological unit linking a form and a meaning, and carried around in their heads by speakers and hearers (Saussure's signe).
4. The sign conceived of in Peircean terms, where the 'token' is a sign of the 'type'. (From an integrationist perspective, the sign is neither a type nor a token. Words, as listed and defined in dictionaries, are not first-order linguistic entities but metalinguistic constructs.)

Whatever is assigned an integrational function with respect to particular sequences of activities fulfils the role of a sign. What it means can only be stated by reference to the activities in question. Meanings vary with contexts. There are public signs that function outside the organization of particular institutions (schools, hospitals, government departments, etc.).
The meaning of a well-known work of art such as Michelangelo's Moses cannot be given once and for all simply by stating that it 'represents' a particular Old Testament figure (leaving aside the horrendous problems in explicating such a notion of representation). Nor is it constituted by anyone's mental image of that statue—even the sculptor's—or by public images and replications that have the statue as their 'subject'. Nor by anything that has been written about it, such as Freud's famous essay. But all these activities—and countless others—that take Michelangelo's statue as their focal point are activities contributing to its meanings. (The plural is necessary here, because the work of art will have different meanings for different people—including many who have never themselves seen it. And its meaning may change for one person over a period of time.) Furthermore, these integrated activities will themselves often involve the production of further signs, many of them not directly related to Michelangelo and his sculpture in any way.

Signs as social facts belong to intricate open-ended networks of communication, and it is at this level of sign-making, in establishing the circumstantially relevant connexions between signs, that the individual is called upon in the first place to manifest rationality, i.e. to assign meaning to words and deeds, both his own and those of others. The individual's contribution to communication must make sense at this elementary level before it can hope to accomplish anything more ambitious.

A very astute philosopher once recommended that in order to understand the work of an original thinker we should ask 'Just what was the conceptual fix that he was in?' (Ryle 1954:125). What has been argued throughout this book is that Aristotle's fixed-code view of languages was an attempt to deal with the fix he was in. Aristotle comes close to describing this fix himself when he remarks in Metaphysics 1062a10ff:

Those, then, who are to join in argument with one another must to some extent understand one another; for if this does not happen how can they join in argument with one another? Therefore every word must be intelligible and signify something, and not many things but only one; and if it signifies more than one thing, it must be made plain to which of these the word is being applied.

It would have been no use insisting that in every debate the participants must first sit down and list every word and every sentence they proposed to use, in order to make sure there was no semantic disagreement between them. The alternative was to suppose that basic agreement is ensured in advance by the existence of a public language that both parties tacitly agree to accept unless challenged on particular details.

In short, Aristotle needed (in order to combat the sophists) an epistemological basis for his syllogistic. But by adopting 'the Greek language' as that basis, he committed the initial mistake—repeated subsequently by many
less gifted logicians—of locating rationality at the wrong level. His second mistake was the attempt to link logic umbilically to truth. The two mistakes are quite separate. It is the first mistake that is attributable to his scriptist approach to 'correct' language. The second falls into the category of what sports commentators call 'unforced errors', but is explicable when we take into account the position he was trying to establish for philosophy in Athens in the fourth century BC.

His logic does not allow for the fact that human rationality cannot be reduced to a grid of relations between affirmative and negative sentences (as in the 'truth tables' of modern logicians). It does not recognize, for instance, that rationality may also be manifested in asking an intelligent question or laughing at a joke or tying a knot, and in countless other ways that have nothing to do with 'preserving truth'.

By focussing on the formalization of deductive inference, Aristotle in effect slips in as the foundation of 'reason' his own linguistic analysis of class inclusion. As a result, the syllogism represents an extrapolation from one possible—and very restrictive—way of representing the semantic relationships between words like man and animal, or green and colour. That move more or less guarantees in advance—for anyone who takes the Organon as the gospel of Western rationality—that only people who look at language from Aristotle's narrow perspective, and focus on the same semantic relations between words, will end up being counted as fully 'rational' creatures; and that is exactly what happens when Western scholars eventually begin to take an interest in the belief-systems of 'primitive' cultures, and Western psychologists begin detecting the existence of 'disembedding thinking styles'.

Those who developed formal logic into an independent, self-contained discipline—which, for Aristotle, it never was—lost sight of Aristotle's motivation. Aristotle's syllogistic was part of an attempt to show the superiority of the philosopher over the sophist. It was important to Aristotle that the philosopher should not be admired simply for his skills in debate, or even for his wisdom, his ceaseless questioning and spirit of inquiry. Philosophy could have made little progress beyond Socrates if those had been its principal objectives, for Socrates had already demonstrated in propria persona how they could be achieved, given the requisite intellectual ability. Syllogistic was important to Aristotle because it concerned matters that were not beyond the reach of anyone—at least, anyone who spoke and wrote Greek. It makes no difference how quick or slow you are to arrive at the conclusion from the premises provided you can get there. In that sense—but in that sense only—logic was latent in what had already been learnt by anyone who had received an elementary education. Syllogistic was an attempt to systematize this latent potential insofar as it bore upon the search for truth. Socrates and Plato may have taken this potential for granted, but they had never tried to make it explicit in the way Aristotle did.

If we situate the texts of the Organon in this context, both the ambitions and the weaknesses of Aristotelian syllogistic can be in large measure
accounted for. Aristotle never explained exactly how the conclusion Socrates is mortal 'follows from' the premises All men are mortal and Socrates is a man. Doubtless he took it to be intuitively obvious. But is it obvious? Two possibilities about 'following from' come up for consideration.

One is that it all follows from the words themselves. This is the psychocentric explanation of rationality. So to confess failure to 'see it' seems tantamount to confessing some kind of linguistic ignorance. Can anyone who understands the sentences in which the three propositions are couched fail to grasp how it follows?

The alternative is that it all follows from the way the world is. This is the reocentric explanation of rationality. So failure to 'see it' amounts to failure to understand something about the mortality of human beings and/or something about Socrates' status as a member of the human race. He has to be mortal because human beings just are mortal.

Both reocentric and psychocentric accounts can be combined in mutual support, as Aristotle ensured by his stipulated assumptions about the way words relate to the world, including his (disingenuous?) account of truth as correspondence between statement and reality.

Allowing Aristotle all this, when we probe the syllogism there are still unanswered questions that begin to emerge. One is: do we need All men are mortal at all? Is it not surplus to requirements? If we accept Socrates is a man, does not Socrates is mortal follow already? Or, more exactly, does not Socrates is mortal simply state one part of what is already assumed in Socrates is a man? (Other parts will include Socrates was born, Socrates had parents, etc.)

The reason why Aristotle brings into play the major premise All men are mortal has to do with the ambitious role of syllogistic as part of the justification of philosophy, rather than its relevance to the particular conclusion Socrates is mortal. All men are mortal becomes important if the claim is that the syllogism, as a philosophical tool, exhibits universal connexions between its propositional components. Otherwise, the link between So-and-so is a man and So-and-so is mortal might be limited to one part of the world, or one linguistic community. Perhaps there are or have been languages with no quantifier corresponding to all. (As far as Aristotle knew, there might have been dozens of such languages spoken in Africa or beyond the Pillars of Hercules.) But in such languages saying that a named person is mortal might still follow from saying that that individual is a man. (Having words for 'man' and 'mortal' does not prima facie seem to depend on having a word for 'all'. But that, in effect, is just what Aristotle's syllogism implies.)

Modern logicians have raised the question of why Aristotle bothers with hypothetical syllogisms at all (Lear 1980:34–53). One suggestion is that this was the kind of syllogism most commonly appealed to in Plato's Academy, so it could not be ignored.

For either debate or inquiry to proceed some degree of agreement between speakers is necessary. The principles of dialectic are statements
to which all parties are willing to agree. Unlike Aristotelian methodology which recognizes particular principles specific to each science, in Platonic debate the speakers move back until they reach some principle to which they can all agree. In Aristotle's hypothetical syllogism, the agreement would be to accept Q, if it can be shown that P. (Lear 1980:39)

But this gets us only as far as locating 'following from' in the agreement between parties to the debate. It tells us what follows (by common agreement), but not how it follows. (In fact, the parties to the agreement need not agree on how it follows. So the notion of 'following from' remains obscure.)

A related aspect of the problem of 'following from' is brought to light when we consider the so-called law or principle of 'non-contradiction', which modern logicians have copied from Aristotle. (For a detailed discussion of Aristotle's presentation of this principle, see Whitaker 1996:183–203.) It is often said to be, along with the law or principle of the excluded middle, one of the twin foundations of human rationality. (Certainly, its infringement—or alleged infringement—was often adduced as evidence that primitive peoples could not think rationally.) The law is usually interpreted as banning the simultaneous affirmation and negation of the same proposition (i.e. one must not assert 'p & not-p'). Usually, again, it is taken for granted that any infringement of this law is obviously 'wrong', and if you cannot see that without further argument then you must be weak-minded or in need of counselling of some kind.

However, when we examine what Aristotle says on the subject a further dimension emerges. For Aristotle treats it specifically as the foundation of philosophy, as distinct from other subjects. This is in accord with his claim that philosophy is, or should be, the basis on which all other inquiries are built. To justify this supreme hierarchical position for the philosopher, as leader in the human search for wisdom, Aristotle is obliged to make what is clearly not an empirical but a metaphysical claim; to wit, that there is such a thing as the study of 'being-in-itself', and that this is the proper domain of philosophy. (It is precisely at this point that he gets into trouble with definition and the statement of 'essence'.)

When the strategy is seen in this light, the role of the law of non-contradiction is not, as some of Aristotle's more recent followers have wished to claim, to provide the psychological underpinning for the whole of human rationality, but to provide the philosopher with 'the most certain of all principles' in the study of 'being qua being'. That is a rather different matter.

[…] he whose subject is being qua being must be able to state the most certain principles of all things. This is the philosopher, and the most certain principle of all is that regarding which it is impossible to be mistaken […]. (Metaphysics 1005b10ff)
This immediately raises the question of the sense in which 'it is impossible to be mistaken'. Descartes' search for an answer led him to Cogito ergo sum. But this is not the path Aristotle follows. The passage in Metaphysics continues:

such a principle must be both the best known (for all men may be mistaken about things which they do not know), and non-hypothetical.

It is relevant here to note the structure of the argument. We are first of all being told what conditions must be satisfied in the search for candidates for 'the most certain principle'. These requirements are directly related to the philosopher's claim to be studying 'being qua being': they are not being offered as logical requirements for correct reasoning (which would be manifestly circular), and even less as requirements for all thinking.

For a principle which everyone must have who knows anything about being, is not a hypothesis; and that which everyone must know who knows anything, he must already have when he comes to a special study. Evidently then such a principle is the most certain of all.

As any Buddhist would point out, we are here dealing with someone who has been brainwashed (or brainwashed himself) into believing that there must be such a principle that acts as the foundation of human knowledge. The integrationist claim is that this brainwashing is the product of literacy.

At this point Aristotle has still not told us what this principle is, but the ground has been laid for recognizing it when it is revealed (as Aristotle is about to do). What is clear is that the requirements depend on the philosopher's concept of 'knowledge'. And knowledge in this case is not just knowledge of the common or garden variety, but the special knowledge of 'being qua being'. It cannot be empirically discovered, for it must be grasped by the inquirer before coming to the special study of anything at all. What is this principle, then? It is,

that the same attribute cannot at the same time belong and not belong to the same subject in the same respect; […]. This, then, is the most certain of all principles, since it answers to the definition given above.

So what has happened is that Aristotle has proposed his own definition and then brought forward a candidate principle to satisfy it. He has made no attempt to show that there are no other candidates available that would suit the philosopher's requirements just as well. Nor has he explained how, in the study of 'being qua being', one can be sure that this principle is being observed. At this point his logic lapses again into obscurity.

For example, it remains unclear whether Aristotle's principle applies to a proposition like Socrates is fortunate. Someone might well maintain
that Socrates is both fortunate and unfortunate. Aristotle has already stipulated that the attribute must belong to the same subject in the same respect. So he has already covered himself if it is maintained by the sophist that, let us say, Socrates is fortunate in respect of having a wife but unfortunate in respect of being charged with impiety. But suppose it is maintained that the very same respect is what renders Socrates both fortunate and unfortunate. It is common nowadays to say that those who suffer from a certain disability (e.g. being blind or dyslexic) are both fortunate and unfortunate. Their disability explains in what respect they are unfortunate (not being able to do things that others can do). But this very disability, it is often claimed, brings compensatory advantages not enjoyed by those who do not suffer from it.

Aristotle might attempt to deal with such cases in three possible ways. One—the least convincing—would be to say that fortunate is homonymous, and that what the sophist has overlooked is the distinction between two senses. So saying that the same person is fortunate and unfortunate is not a contradiction. Another would be to say that good fortune and misfortune are in reality the very same thing (like Kipling's 'success' and 'failure', or as in Bacon's idols of the market-place), so in this type of case it is linguistic usage that leads human judgment astray. If so, then again there is no 'real' contradiction. A third possibility, which is a variant of the second, is to declare that counterexamples of this type fall outside the strict parameters of truth and falsity, i.e. that it is not a matter of fact whether someone is fortunate (or unfortunate).

The point of raising such examples is not to decide which is the correct 'solution' of the problem, but to show that how to apply the law of non-contradiction is not always immediately clear. And if it is not, that already throws some doubt—even by Aristotle's standards—upon 'the most certain of all principles' and what makes one proposition 'follow from' another or others. To put it another way, negation (on which the law of non-contradiction is based) is a metalinguistic concept no more immune from indeterminacy than any other. Or so an integrationist would maintain. Aristotle's conception of philosophy is one that needs what might be called 'rigid' negation; but negation is a linguistic device that in practice is about as rigid as a lump of plasticine. (Anyone who thinks otherwise should study the way parents use the words no and don't when correcting the behaviour of their offspring.)

It was pointed out by Ernest Gellner more than thirty years ago that even those anthropologists who undertook to defend the rationality of 'primitive' thinking against Lévy-Bruhl did so by invoking a kind of casuistry (my term, not Gellner's) that failed to recognize—or blandly refused to admit—that there may be forms of reasoning that are systematically—not randomly—based on violating the principle of non-contradiction, i.e. through the attribution of conflicting 'definitions' to the same word (Gellner 1970). Aristotle's homonyms yet again.
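Before leaving the principle, it may help to set the two versions side by side in the same informal notation used above for 'p & not-p'. (The regimentation is offered purely as an illustration; it is a modern convenience, not anything Aristotle himself wrote.) The bare propositional law bans asserting

p & not-p

while Aristotle's qualified principle, with its provisos made explicit, bans asserting something of the form

F(a, r, t) & not-F(a, r, t)

where F is the attribute, a the subject, r the respect and t the time. The fortunate/unfortunate cases above are claims that, for one and the same a, r and t, both conjuncts hold; that is why they probe the principle itself, rather than merely trading on the homonymy that Aristotle's first escape-route invokes.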
INTEGRATIONAL SEMIOLOGY

Those who are happy with Aristotelian semiology, or who cannot imagine any serious alternative, will doubtless continue to cleave to it, come what may. For those who are not happy, there is integrational semiology, which, by adopting a different conception of meaning, makes it possible to propose an alternative approach to human reason. Under the auspices of integrational semiology, rationality hinges on the creative process of sign-making itself, a capacity that human beings exercise in their communications with others every day of their lives. It is at this more fundamental level that rationality links up with social relations. It underpins the basic mechanisms of social organization for all communities, whether literate or preliterate, numerate or innumerate.

This is not an argument in support of detecting rationality in all mental processes, as e.g. does Susanne Langer when she claims that 'rationality is the essence of mind' and that it is 'embodied in every mental act'. Even less is it an argument that rationality 'pervades the peripheral activities of the human nervous system' (Langer 1957:99). (I am not even sure what I should expect to detect in the peripheral activities of my own nervous system if that were the case.)

Nor is it an argument in support of the position taken by Jonathan Bennett in his book Rationality (1964). For Bennett, the possession of a language—or a sufficiently language-like system of communication—is a necessary (although not a sufficient) condition of rationality for any living creature. So bees are not rational, because their communication system is too limited. They cannot give reasons for what they do.

Third, it is an argument that needs to be distinguished from Winch's (Winch 1970). Winch's way of restoring the dependence of logical relations on social relations is to claim that any society with a language has eo ipso a concept of rationality. For Winch, the concept of rationality is not like, for instance, the concept of politeness, since if a society had no concept of politeness (and hence no word for 'polite') it could still have a language. Whereas rationality, according to Winch, is 'a concept necessary to the existence of any language':

to say of a society that it has a language is also to say that it has a concept of rationality. There need not perhaps be any word functioning in its language as 'rational' does in ours, but at least there must be features of its members' use of language analogous to those features of our use of language which are connected with our use of the word 'rational'. Where there is language it must make a difference what is said and this is only possible where the saying of one thing rules out, on pain of failure to communicate, the saying of something else. (Winch 1970:99. Italics in the original.)
Here Winch, unlike Bennett, makes having a language a sufficient condition of rationality. But Winch's argument as it stands is incoherent. A language in which it is possible to say one thing does not automatically 'rule out' the possibility of saying something else. There is no 'rule of English' which makes it impossible for speakers to send conflicting messages to one another: if there were, the world would be a different place. Furthermore, it does not require a mastery of English (or any other language) to realize the impossibility of your being here and half a mile away at the same time, or of walking to the door without moving your legs. What 'rules out' these concomitances is not the structure of English but the structure of the physical world and the human body. If rationality consists in recognizing that such possibilities are 'ruled out', then it does not take either language or society to provide the basis for reason.

Winch dismisses in a footnote the question which an integrationist semiology thrusts into the forefront of inquiry into rationality: 'I shall not discuss here what justifies us in saying this in the first place', i.e. what justifies us in saying of a society 'that it has a language' (Winch 1970:99fn1). But until that justification is forthcoming, Winch's theory of rationality—like Aristotle's syllogistic—cannot get off the ground.

SIGN-MAKING

Peirce's notion of making the word—or, more generally, of making any sign—must be explicated by reference to a human maker and a human context. The constant relevance of context hardly needs to be stressed.

Suppose a caller attempts to initiate an episode of communication by pressing your front door bell. That person has ipso facto made a sign—made it, created it: a moment ago there was no sign, and now a sign has been brought into play. It opens up a gamut of integrational possibilities, a communicational process of which no one can foresee the ultimate outcome. If you hear the bell ring and go to open the door, you collaborate in this process by responding to the sound in that way. In so doing, you make the sound a sign. But your sign is not the caller's sign. It is your interpretation, not the caller's, that made the sound you hear a sign: your caller cannot respond to it as you do. Nor was it you who made the first sign by pressing the bell. So now the unfolding communicational process involves two signs. (There are two, because the form and meaning of the signs is different for each of you. The caller may not even be able to hear the sound you hear. Nor may you know whether he heard it or not.)

Your response as sign-maker is a rational response, given the meaning you attribute to the sign, i.e. that there is someone at the door. It is rational because and insofar as it makes sense of the communication situation. Your caller's sign-making was rational too, in view of the meaning he gives to his
sign, i.e. as a summons to open the door. So in both cases the sign-making was rational, given the activities thereby integrated. Unregenerate behaviourists will doubtless object that both of you could have acted as you did without any reasoning at all; that is, as a result of previous 'conditioning'. They may be right, up to a point. Where they would be wrong, beyond that point, is in supposing that rationality is something to be contrasted with habitual behaviour. That is not a supposition integrationist semiology would endorse.

What made it rational for you to go to the door begins with the meaning you yourself attributed to the sound you heard. Had you been fearing an attempt on your life, it might have been rational to go and hide in a cupboard instead. Rationality, like meaning, is context-sensitive. That is why in so many cases there is no clear answer to the question of whether one course of action would be more rational than another.

Sign-making in general (which includes not only one's own production of signs but one's attribution of a sign-value to what others produce, or to anything that becomes the focus of attention) is the source of all more complex forms of rationality. Recognition of this source immediately introduces a semiological hierarchy into the study of human behaviour. Recognizing smoke as a sign of fire is (semiologically) one notch above failing to treat smoke as a sign of fire. One may speak here of a hierarchy, because no anthropologist has ever reported a community in which that relationship was reversed. (There are, it seems, no societies which value ignoring the relationship between smoke and fire above recognizing it, or think that smoke is actually the cause of fire.) In a community that failed to recognize smoke as a sign of fire, we would be likely to encounter behaviour in the presence of smoke that would count as 'irrational' in societies where that sign-relationship obtained. Mutatis mutandis, this applies right across the semiological spectrum. At any given level, rationality is always called to account at the bar of whatever semiological values are in place in that society.

But even if we accept this, can signs be more than the proximate source of rationality in any particular case? For there apparently remains the further question of what made it rational to attribute that meaning to the sign at all. The 'ultimate' source—if anyone is obstinate enough to pursue the question that far—will turn out to be a web of beliefs and assumptions derived from previous experience, plus a personal assessment of the current circumstances and consequences. The integration of those components is what confers the status of a sign on some particular object or event in the situation. In the very act of sign-making you are integrating the past, the present and the (anticipated) future.

The past is a web so complex that almost certainly it would be beyond both your memory and your powers of analysis to present it in full, let alone demonstrate that that was what provided the 'rational' justification for what you did. The future, unless you are clairvoyant, is a closed book
(if the scriptist metaphor be allowed). So all you are likely to be able to say if asked why you acted as you did is that you went to the door because you thought there was a caller, door bells not usually being known to ring unless pressed. (That reply will suffice to silence most questioners, but it is by no means the whole story. It amounts to no more than a short statement of your expectations, plus your conviction that they were rational expectations. Another regress looms.)

Neither you nor your caller can guarantee that any expectations will be fulfilled, but that is another matter altogether and does not affect the rationality of what you have done so far. (Perhaps the caller will give up and go away before you have time to go to the door and find out what it is all about. Perhaps he realized he had come to the wrong address. You will never know. He may never know that you slipped and sprained your ankle on the way to the door, and that is why no one answered.)

The web of beliefs and assumptions that appears to provide the 'ultimate' source of your sign-making in this instance will include familiarity with certain patterns of social behaviour, such as common procedures for finding out whether the occupants of residential premises are at home (e.g. by ringing the bell). These are certainly social practices. They were not ordained by God or laid down in the laws of the Medes and Persians. In some parts of the world there are quite different practices. In some parts of the world there are no such 'residential premises'. Nor are there any door bells. These considerations might be cited if you are pressed further to explain your sign-making in this context. But again the reasons you might give on the spur of the moment are not what 'ultimately' grounds the rationality of your actions. (It is very unlikely, for example, that your explanation of why you acted as you did will include producing circuit diagrams showing how a door bell works, or allude to the fact that you have not received any death threats lately.)

So why not abandon the futile search for 'ultimate' reasons? The rationality of your actions resides not in the reasons that it may occur to you to give (which is not to say that your reasons count for nothing). Nor does it reside in the (much more lengthy and complicated) reasons that might be produced by some hypothetically omniscient observer with access to all the information that you have ever acquired in the past. The rationality of your actions in the here-and-now resides in the local coherence of your sign-making, i.e. its role in the current integration of otherwise unintegrated activities (pressing buttons, opening doors, etc.). Sign-making is always and inevitably in medias res. Sufficient unto the day is the rationality thereof.

And what you did rationally on that day would have been no less rational even had you been quite unable to articulate any specific reasons at all. The actions we carry out 'without stopping to think' are not to be condemned as irrational on that count. Sometimes, doubtless, it would have been better if we had stopped to think. But stopping to think can also be counterproductive, as suggested by the folk wisdom of the proverb 'He who hesitates is lost'.
All this carries over to communication with written signs. Derrida was at least right to insist that reading involves doing something with a text. He was wrong to construe what is done with it as 'iteration'. On the contrary, what you do when you read a written text is make your own assignments of meaning to the forms it contains. You, the reader, make signs of the forms you see on the page; and unless you do this the text is meaningless. You create your own text.

None of this will sound very convincing to those who are reluctant to try thinking of rationality itself as hinging on the very process of sign-making. That will seem too much like claiming, in effect, that we all make our own logic as we go along, depending on the circumstances. They will be right to hesitate. That is exactly what the claim amounts to. And if explanations have to come to an end somewhere, that is as good a place as any—and far better than some—for the explanation of rationality to come to an end.

Some anthropologists distinguish between a 'weak' and a 'strong' rationality. Weak rationality is defined as the kind of rationality attributed to an action if there is a goal to which it is directed. Strong rationality is attributed to a person acting rationally on the basis of rationally held beliefs (Jarvie and Agassi 1970:173). Someone may ask: 'Is integrationist rationality either of these?' No, it is neither. Integrationist rationality is actually 'weaker' than 'weak' rationality, in that it does not require that sign-making be 'goal-directed', at least if that weasel term is taken to imply that the actor is deliberately aiming at some specific goal. Much human sign-making occurs quite spontaneously, without any prior consideration of objectives. (Your look of surprise may be either voluntary or involuntary. But other people will doubtless interpret it as a sign, although the meaning they attribute to it may vary according to whether they thought it one or the other. Whether you intended it or not, and sidestepping the conceptual morass of identifying 'intentions', your behaviour has already become integrated into an episode of communication. That is the social reality.)

Gilbert Ryle drew an important distinction between formal and informal logic (Ryle 1954:111–129). Formal logic, according to Ryle, was begun by Aristotle. (So whatever is involved in formal logic does not antedate literacy.) If Ryle is right, philosophers must have been doing informal logic long before formal logic appeared on the Greek scene: they had been engaged in reasoned debate about such topics as justice, virtue, the origin of the universe, and so on. So far, so good. For Ryle, the informal logic of the philosopher stands to the logic of the formal logician as the work of a businessman to the work of his accountant. Ryle's scriptist metaphor is revealing. An accountant, in order to exercise his profession, needs both literacy and numeracy. The accountant is only concerned with the balance sheets, not with the goods and services provided. The latter are the concern of the businessman. But without the flourishing enterprise of the businessman, the accountant would be unemployed. There cannot be a society in which
accountancy is the sole occupation. Ryle's point seems incontrovertible if we accept his analogy. We might even extend it and say that there cannot be a society in which teaching people to read and write is the sole occupation.

If we follow Ryle, it seems that both formal and informal logic depend on a rationality that is not wholly contained in either and is differently exploited in both. In other words, it takes a rational mind to see what is logical about formal logic, i.e. what is common both to formal logic of the kind Aristotle began and to the informal logic that preceded it. The syllogism itself does not explain what is logical about the syllogism.

But now there is a problem. We are left with an unanswered question about what makes it possible to appreciate what it is that formal and informal logic share. To this question the integrationist approach supplies an answer. Both formal and informal logic depend on being able to deal with linguistic signs, specifically in the integrated procedures of question and answer that are involved in debate. What Aristotle did was to show how so much in argument can be made to depend on so few linguistic signs (particularly signs for 'all', 'some' and 'not'). That would be a message incomprehensible to a mind unable to cope with assigning meanings to linguistic signs in the first place.

Philosophers have debated whether actions, as distinct from propositions, can enter into logical relations or be conclusions of arguments. Aristotle, Kant and Wittgenstein seem to be ranged on one side of the argument, against Hume and Austin on the other (Edgley 1969:28–31). While it is true that vulgar mindspeak certainly warrants speaking of a person as acting rationally or irrationally, as if it were the conduct that was being judged, rather than the thinking lying behind it, this is a controversy for which an integrationist can muster little enthusiasm. The reason will be obvious.

Consider the case of someone who is so severely handicapped as to have no access to the usual forms of linguistic communication, an illiterate individual who is deaf, cannot speak, and has never learnt one of the official 'sign languages'. Such a person might nevertheless be sufficiently well acquainted with the local neighbourhood to make a perfectly rational decision about the best way of getting from one place to another, although ex hypothesi unable to 'give reasons' for choosing that route. Familiarity with local landmarks would enable such a person to construct, by trial and error, what—pace Wittgenstein—amounts to a private inventory of topographical signs, known to no one else, that is adequate for the integration of various programmes of activity in proceeding from here to there, and to enable such a person to opt between relevant choices. 'Here' and 'there' would themselves—for such a person—be 'pre-linguistic' or 'non-linguistic' ODs, realized in terms of the integration of those activities. But none the less rational for that.

At this point we must expect protest marches down Whitehall by serried ranks of cognitive psychologists carrying placards bearing the scriptist mantra 'MENTAL REPRESENTATIONS'. How, the protesters will
demand, can languageless persons find their way around without mental 'maps' of the neighbourhood? Is not such a map itself a text, expressed in a language or proto-language, albeit a language without words? A 'language of thought'? One hopes that when the protest march reaches the Houses of Parliament, one of the MPs will have the wit to make the point that all the demo demonstrates is the protesters' own inability to think about thinking except as a form of writing in the brain.

ENVOI

In presenting my argument I have made liberal use of such expressions as language myth, word magic, scriptism, and so forth. These—someone will complain—are all loaded terms. So they are. Or, more exactly, they are polemical terms, and they should be understood in the context of the polemic they are being employed to articulate.

To some readers it may appear that far too much attention has been paid to Aristotle in the latter part of this book, and not enough to his successors. The reason why Aristotle appears so frequently in the spotlight is not just that, for centuries in the European intellectual world, he was simply, as Thomas of Erfurt calls him in the 14th century, Philosophus ('the Philosopher'), and left an imprint on discussions of language and reasoning that turned out to be indelible. Another reason is that Aristotle stands as a paradigm case of a thinker whose deep and long-lasting influence is primarily due to the support it received from a literate tradition of textual study and commentary, to which Thomas of Erfurt himself belonged. Had the texts been lost, the philosophers of the Middle Ages would have had to start again from scratch, and with a quite different background from that of sophistic and political debate which Aristotle had. A third reason is that Aristotle is a remarkable case of a man apparently blind to manifestations of rationality that have no linguistic component. Although adept at informal logic, his primary interest is in logic that can be 'formalized', where formalization is a reduction to written formulae of various kinds. The result is a narrowing of the concept of rationality itself, because other manifestations of reason cannot so easily be treated in this scriptist way.

Had Aristotle not existed, it would have been necessary to invent him for purposes of the present discussion, because he represents the perfect antithesis to the view of rationality that has been proposed here. Perhaps those who have studied the works of Aristotle in greater depth will say that I have invented him. No matter. The invented 'Aristotle' will do as far as I am concerned.

The contrast between the 'Aristotelian' view and the integrationist view hinges on locating the exact point at which human rationality is seen as linking up with language as social praxis. 'Aristotle' sees no problem in
ENVOI

In presenting my argument I have made liberal use of such expressions as language myth, word magic, scriptism, and so forth. These, someone will complain, are all loaded terms. So they are. Or, more exactly, they are polemical terms, and they should be understood in the context of the polemic they are being employed to articulate.

To some readers it may appear that far too much attention has been paid to Aristotle in the latter part of this book, and not enough to his successors. The reason why Aristotle appears so frequently in the spotlight is not just that, for centuries in the European intellectual world, he was simply, as Thomas of Erfurt calls him in the 14th century, Philosophus ('the Philosopher'), and left an imprint on discussions of language and reasoning that turned out to be indelible. Another reason is that Aristotle stands as a paradigm case of a thinker whose deep and long-lasting influence is primarily due to the support it received from a literate tradition of textual study and commentary, to which Thomas of Erfurt himself belonged. Had the texts been lost, the philosophers of the Middle Ages would have had to start again from scratch, and with a quite different background from that of sophistic and political debate which Aristotle had. A third reason is that Aristotle is a remarkable case of a man apparently blind to manifestations of rationality that have no linguistic component. Although adept at informal logic, his primary interest is in logic that can be 'formalized', where formalization is a reduction to written formulae of various kinds. The result is a narrowing of the concept of rationality itself, because other manifestations of reason cannot so easily be treated in this scriptist way.

Had Aristotle not existed, it would have been necessary to invent him for the purposes of the present discussion, because he represents the perfect antithesis to the view of rationality that has been proposed here. Perhaps those who have studied the works of Aristotle in greater depth will say that I have invented him. No matter. The invented 'Aristotle' will do as far as I am concerned.

The contrast between the 'Aristotelian' view and the integrationist view hinges on locating the exact point at which human rationality is seen as linking up with language as social praxis. 'Aristotle' sees no problem in taking a knowledge of Greek for granted, beginning his analysis of reason at a level where 'propositions' can be identified simply by citing Greek sentences, and proceeding to systematize them. Western formal logic has never deviated from this basic strategy; and when Greek and other languages proved to be awkward or inadequate, logicians simply replaced them by inventing written languages, or parts of languages, as required. But the 'Aristotelian' order of priority remained unaffected.

The integrationist view of rationality proposed here rejects that order of priority. It takes neither Greek nor any other language for granted. On the contrary, it sees rationality as based in the first instance in the ways that human beings attribute meanings to signs of any kind in the pursuit of integrated activities. This is a level of rationality that 'Aristotle' never addresses. He merely assumes that a suitable language is available prior to engaging in rational thought, whether formally or informally. In other words, the way he proceeds is tantamount to beginning a mathematical exposition of calculation at a point where it is assumed that everybody is already familiar with the abacus. How the abacus appeared on the scene in the first place is never explained.

For those of us who are literate (however minimally), there is no way of 'reverting' to preliterate modes of thought. Nor would most of us wish to, even if that were possible. But that does not mean we have to swallow hook, line and sinker all the flattering misconceptions of rationality that our literacy tempts us to indulge, and that the received history of our own culture projects as progress. What needs to be resisted, at the present stage reached in humane studies in Western countries, is the notion that reason has its home in the natural sciences, and can always be broken down into a number of steps, each of which can be expressed as a verbal statement or written down in some kind of notation. There are indeed reasoning processes where that can be done, and the whole enterprise exhibited as a series of such statements or inscriptions.
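What such an exhibition looks like can be sketched in a few lines (the propositional notation is a modern convenience, chosen purely for illustration):

% Requires amsmath. Each line is a discrete inscription, and each
% move is licensed by a rule that can itself be written down.
\begin{align*}
1.\quad & p \rightarrow q && \text{premise}\\
2.\quad & q \rightarrow r && \text{premise}\\
3.\quad & p && \text{premise}\\
4.\quad & q && \text{from 1 and 3, by modus ponens}\\
5.\quad & r && \text{from 2 and 4, by modus ponens}
\end{align*}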
But to insist on reducing reasoning in all its forms to such procedures, and on denying 'rationality' to whatever cannot be so reduced, is to take a very narrow view of the human mind and its day-to-day operations. It is a reduction that has no warrant in human experience, whether it be experience of dealing with the natural environment or of dealing with the other human beings who co-habit within it.

I do not think we are forever 'trapped' in that way of thinking about reason. The fault is ours if we treat either language or literacy as a prison (as Nietzsche once put it) from which there is no (logical) possibility of escape. Anyone who believed that we are inmates of such a prison, serving a life sentence, would never have written this book.
References

Ackermann, R. (1967), Introduction to Many-Valued Logics, London, Routledge & Kegan Paul.
Aczel, A.D. (1997), Fermat's Last Theorem, London, Penguin.
Allan, D.J. (1952), The Philosophy of Aristotle, Oxford, Oxford University Press.
Ambrose-Grillet, J. (1978), Glossary of Transformational Grammar, Rowley Mass., Newbury House.
Annas, J. (1976), Aristotle's Metaphysics Books M and N, Oxford, Clarendon.
Aristotle, Complete Works: The Revised Oxford Translation, ed. J. Barnes, Princeton, Princeton University Press, 1984.
Arnauld, A. and Nicole, P. (1683), La logique ou l'art de penser, 5me éd., Paris, Desprez. Repr. Paris, Flammarion, 1970.
Ayer, A.J. (1946), Language, Truth and Logic, 2nd edn, London, Gollancz.
Bacon, F. (1605), The Advancement of Learning, ed. G.W. Kitchin, London, Dent, 1915.
Baker, G. (1986), 'Alternative mind-styles'. In Grandy, R.E. and Warner, R. (eds), Philosophical Grounds of Rationality, Oxford, Clarendon, pp.277–314.
Barker, E. (1946), The Politics of Aristotle, Oxford, Clarendon.
Basson, A.H. and O'Connor, D.J. (1959), Introduction to Symbolic Logic, 3rd edn, London, University Tutorial Press.
Bell, E.T. (1946), The Magic of Numbers, New York, McGraw-Hill. Repr. New York, Dover, 1991.
Bennett, J. (1964), Rationality, London, Routledge & Kegan Paul.
Bennett, M.R. and Hacker, P.M.S. (2003), Philosophical Foundations of Neuroscience, Oxford, Blackwell.
Berkeley, G. (1732), An Essay towards a New Theory of Vision, 4th edn. Repr. in Ayers, M.R., George Berkeley, Philosophical Works, rev. edn, London, Dent, 1983.
Blackburn, S. (1994), The Oxford Dictionary of Philosophy, Oxford, Oxford University Press.
Bloom, A. (1987), The Closing of the American Mind, New York, Simon & Schuster.
Bloomfield, L. (1927), 'Literate and illiterate speech', American Speech 2, 10: 432–9. Repr. in Hockett, C.F. (ed.), A Leonard Bloomfield Anthology, Chicago, University of Chicago Press, 1987, pp.84–93. Page references are to the reprint.
Bloomfield, L. (1935), Language, rev. edn, London, Allen & Unwin.
Boas, F. (1911), 'Introduction' to Handbook of American Indian Languages, Washington, Government Printing Office.
Boas, F. (1927), Primitive Art, Oslo, Aschehoug. Repr. New York, Dover, 1955.
Boas, F. (1938), The Mind of Primitive Man, rev. edn, Macmillan. Repr. New York, Collier, 1963, with an introduction by M.J. Herskovits. Quotations are from and page references to this reprint.
Boole, G. (1854), An Investigation of the Laws of Thought, London, Macmillan. Repr. New York, Dover, 1958. Page references are to the reprint.
Carroll, J.B. (ed.) (1956), Language, Thought, and Reality. Selected Writings of Benjamin Lee Whorf, Cambridge Mass., MIT Press.
Carr-West, J. (2008), 'Brain power', RSA Journal, Summer 2008: 14–19.
Cassirer, E. (1944), An Essay on Man. An Introduction to a Philosophy of Human Culture, New Haven, Yale University Press.
Chandor, A. (1977), The Penguin Dictionary of Computers, 2nd edn, Harmondsworth, Penguin.
Chao, Y.R. (1934), 'The non-uniqueness of phonemic solutions of phonetic systems', Bulletin of the Institute of History and Philology, Academia Sinica, Vol. IV, Part 4, pp.363–97.
Chomsky, A.N. (1975), Reflections on Language, London, Fontana.
Chomsky, A.N. (1980), Rules and Representations, Oxford, Blackwell.
Closs, M.P. (ed.) (1986), Native American Mathematics, Austin, University of Texas Press.
Comte, A. (1844), Discours sur l'esprit positif, Paris, Carilian-Goeury & Dalmont. Page references are to the reprint ed. P. Arbousse-Bastide, Paris, Union Générale d'Éditions, 1963.
Corcoran, J. (1995), 'Logical form'. In Audi, R. (ed.), The Cambridge Dictionary of Philosophy, Cambridge, Cambridge University Press, pp.442–3.
Cornford, F.M. (1941), The Republic of Plato, Oxford, Clarendon.
Coulmas, F. (1989), The Writing Systems of the World, Oxford, Blackwell.
Coulmas, F. (1996), The Blackwell Encyclopedia of Writing Systems, Oxford, Blackwell.
Curr, E.M. (1886–7), The Australian Race, 4 vols, Melbourne.
Daniels, P.T. (1996), 'Grammatology'. In Daniels, P.T. and Bright, W. (eds), The World's Writing Systems, New York, Oxford University Press, pp.1–2.
Darwin, C. (1874), The Descent of Man, 2nd edn, London, Murray.
Derrida, J. (1972), Marges: de la philosophie, Paris, Minuit.
Descartes, R. (1644), Principia Philosophiae, Amsterdam, Elzevir. Translated excerpts in The Philosophical Writings of Descartes, trans. J. Cottingham, R. Stoothoff and D. Murdoch, Cambridge, Cambridge University Press, 1984–5, vol.1, pp.177–291.
Dewey, J. (1938), Logic. The Theory of Inquiry, New York, Holt.
Donald, M. (1991), Origins of the Modern Mind, Cambridge Mass., Harvard University Press.
Douglas, M. (1980), Evans-Pritchard, London, Fontana.
Durkheim, E. (1895), Les règles de la méthode sociologique. Page references are to and quotations from the English translation by S.A. Solovay and J.H. Mueller, The Rules of Sociological Method, New York, Macmillan, 1938.
Durkheim, E. and Mauss, M. (1903), 'De quelques formes primitives de classification: contribution à l'étude des représentations collectives', Année Sociologique 6: 1–72. Trans. R. Needham, Primitive Classification, Chicago, University of Chicago Press, 1963. Quotations are from and page references to this translation.
Edgley, R. (1969), Reason in Theory and Practice, London, Hutchinson.
Eisenstein, E.L. (1979), The Printing Press as an Agent of Change, Cambridge, Cambridge University Press.
Evans-Pritchard, E.E. (1965), Theories of Primitive Religion, Oxford, Clarendon.
Evans-Pritchard, E.E. (1981), A History of Anthropological Thought, London, Faber & Faber.
Finnegan, R. (1989), 'Communication and technology', Language & Communication 9: 107–27.
Firth, R. (1975), Human Types, rev. edn, London, Abacus.
Frazer, J.G. (1922), The Golden Bough, London, Macmillan. Repr. Ware, Wordsworth, 1993. (Frazer's own abridgement of his earlier twelve-volume work.)
Freud, S. (1913), Totem and Taboo, trans. A.A. Brill. Repr. in Brill, A.A. (ed.), The Basic Writings of Sigmund Freud, New York, Random House, 1995.
Gardiner, P. (1967), 'Irrationalism'. In Edwards, P. (ed.), The Encyclopedia of Philosophy, New York, Macmillan, Vol. 3, pp.213–219.
Gelb, I.J. (1963), A Study of Writing, 2nd edn, Chicago, University of Chicago Press.
Gellner, E. (1970), 'Concepts and society'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.18–49.
Glock, H.-J. (1996), A Wittgenstein Dictionary, Oxford, Blackwell.
Goody, J.R. (1977), The Domestication of the Savage Mind, Cambridge, Cambridge University Press.
Graham, G. (1998), Philosophy of Mind: An Introduction, 2nd edn, Malden Mass., Blackwell.
Grandy, R.E. and Warner, R. (eds) (1986), Philosophical Grounds of Rationality, Oxford, Clarendon.
Green, K. (2007), Bertrand Russell, Language and Linguistic Theory, London, Continuum.
Greenfield, S. (2000), Brain Story, London, BBC.
Greenfield, S. (2008), The Quest for Identity in the 21st Century, London, Hodder & Stoughton.
Grice, H.P. (1989), Studies in the Way of Words, Cambridge Mass., Harvard University Press.
Hallpike, C.R. (1979), The Foundations of Primitive Thought, Oxford, Clarendon.
Hampshire, S. (1971), 'Critical review of The Concept of Mind'. In Wood, O.P. and Pitcher, G. (eds), Ryle, London, Macmillan, pp.17–51.
Harris, R. (2000), 'Reflections on a real character'. In Asher, R.E. and Harris, R. (eds), Linguisticoliterary, Delhi, Pilgrims, pp.225–235.
Harris, R. (2005), The Semantics of Science, London, Continuum.
Harris, W.V. (1989), Ancient Literacy, Cambridge Mass., Harvard University Press.
Hartshorne, C. and Weiss, P. (eds) (1931–5), Collected Papers of Charles Sanders Peirce, vols 1–6, Cambridge Mass., Harvard University Press.
Havelock, E.A. (1963), Preface to Plato, Cambridge Mass., Harvard University Press.
Havelock, E.A. (1982), The Literate Revolution in Greece and its Cultural Consequences, Princeton, Princeton University Press.
Havelock, E.A. (1989), 'Orality and literacy, an overview', Language & Communication 9: 87–98.
Heath, T. (1921), A History of Greek Mathematics. Volume 1. From Thales to Euclid, Oxford, Clarendon. Repr. New York, Dover, 1981.
Herodotus, The Histories. Trans. A. de Sélincourt, rev. A.R. Burn, London, Penguin, 1972.
Hockett, C.F. (1958), A Course in Modern Linguistics, New York, Macmillan.
Hoopes, J. (ed.) (1991), Peirce on Signs. Writings on Semiotic by Charles Sanders Peirce, Chapel Hill, University of North Carolina Press.
Humboldt, W. von (1836), On Language. The Diversity of Human Language-Structure and its Influence on the Mental Development of Mankind, trans. P. Heath, Cambridge, Cambridge University Press, 1988. Page references are to this translation.
Hutton, C.M. (1990), Abstraction and Instance. The Type-Token Relation in Linguistic Theory, Oxford, Pergamon.
Inhelder, B. and Piaget, J. (1964), The Early Growth of Logic in the Child, trans. E.A. Lunzer and D. Papert, London, Routledge & Kegan Paul.
Isocrates, Works, 3 vols, trans. G. Norlin and L.R. van Hook (Loeb Classical Library), Cambridge Mass., Harvard University Press, 1928–45.
Jarvie, I.C. and Agassi, J. (1970), 'The problem of the rationality of magic'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.172–93.
Jespersen, O. (1924), The Philosophy of Grammar, London, Allen & Unwin.
Kennedy, G. (1963), The Art of Persuasion in Greece, Princeton, Princeton University Press.
Kenny, A. (1995), Frege, London, Penguin.
Kenyon, F. (1941), The Myth of the Mind, London, Watts.
Kirk, G.S., Raven, J.E. and Schofield, M. (1983), The Presocratic Philosophers, 2nd edn, Cambridge, Cambridge University Press.
Kneale, W. and Kneale, M. (1984), The Development of Logic, rev. edn, Oxford, Clarendon.
Kramer, S.N. (1959), History Begins at Sumer, New York, Doubleday.
Kramer, S.N. (1963), The Sumerians, Chicago, University of Chicago Press.
Lallot, J. (1989), La grammaire de Denys le Thrace, Paris, CNRS.
Lancelot, C. and Arnauld, A. (1660), Grammaire générale et raisonnée, Paris, Petit. Facsimile repr. Menston, Scolar, 1967.
Langer, S.K. (1957), Philosophy in a New Key, 3rd edn, Cambridge Mass., Harvard University Press.
Lear, J. (1980), Aristotle and Logical Theory, Cambridge, Cambridge University Press.
Lévi-Strauss, C. (1958), Anthropologie structurale, Paris, Plon.
Lévi-Strauss, C. (1962), La pensée sauvage, Paris, Plon.
Lévy-Bruhl, L. (1910), Les fonctions mentales dans les sociétés inférieures, Paris, Alcan. Trans. L.A. Clare, How Natives Think, Princeton, Princeton University Press, 1985. Quotations are from and page references to this translation.
Lévy-Bruhl, L. (1922), La mentalité primitive, Paris, Presses Universitaires de France. Quotations are from and page references to the 15th edn, Paris, 1947.
Lienhardt, G. (1966), Social Anthropology, 2nd edn, London, Oxford University Press.
Linell, P. (2005), The Written Language Bias in Linguistics. Its nature, origins and transformations, London, Routledge.
Littleton, C.S. (1985), 'Lucien Lévy-Bruhl and the concept of cognitive relativity'. Introduction to L. Lévy-Bruhl, How Natives Think, trans. L.A. Clare, Princeton, Princeton University Press.
Locke, J. (1706), An Essay Concerning Human Understanding, 6th edn, ed. A.C. Fraser, 1894. Repr. New York, Dover, 1959.
Logan, R.K. (1986), The Alphabet Effect, New York, Morrow.
Lord, A.B. (1960), The Singer of Tales, Cambridge Mass., Harvard University Press.
Love, N. (1990), 'The locus of languages in a redefined linguistics'. In Davis, H.G. and Taylor, T.J. (eds), Redefining Linguistics, London, Routledge, pp.53–117.
Lukes, S. (1970), 'Some problems about rationality'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.194–213.
Luria, A.R. (1976), Cognitive Development. Its Social and Cultural Foundations, trans. M. Lopez-Morillas and L. Solotaroff, Cambridge Mass., Harvard University Press.
Luria, A.R. (1979), The Making of Mind. A Personal Account of Soviet Psychology, ed. M. and S. Cole, Cambridge Mass., Harvard University Press.
McLuhan, M. (1962), The Gutenberg Galaxy. The Making of Typographic Man, Toronto, University of Toronto Press.
McLuhan, M. (1964), Understanding Media: the Extensions of Man, New York, McGraw-Hill.
Macintyre, A. (1970), 'Is understanding religion compatible with believing?'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.62–77.
Mair, L. (1964), Primitive Government, rev. edn, Harmondsworth, Penguin.
Malefijt, A. de W. (1974), Images of Man. A History of Anthropological Thought, New York, Knopf.
Mallery, G. (1893), Picture-Writing of the American Indians, Washington, Government Printing Office. Repr. New York, Dover, 2 vols, 1972. Quotations are from and page references to the reprint.
Mautner, T. (ed.) (1997), The Penguin Dictionary of Philosophy, London, Penguin.
Mill, J.S. (1872), System of Logic, 8th edn, London, Longman & Green. Repr. London, Longman, 1970.
Mueller, I. (1978), 'An introduction to Stoic logic'. In Rist, J.R. (ed.), The Stoics, Berkeley, University of California Press, pp.1–26.
Müller, F.M. (1856), Comparative Mythology. Repr. in Chips from a German Workshop, Vol. 2, 2nd edn, London, Longmans, Green, 1868. Quotations are from and page references to the reprint.
Müller, F.M. (1861), Lectures on the Science of Language, London, Longman, Green, Longman, Roberts & Green.
Needham, R. (1963), 'Introduction' to E. Durkheim and M. Mauss, Primitive Classification, Chicago, Chicago University Press.
Olson, D.R. (1994), The World on Paper. The conceptual and cognitive implications of writing and reading, Cambridge, Cambridge University Press.
Ong, W.J. (1982), Orality and Literacy. The Technologizing of the Word, London, Methuen.
Parry, M. (1971), The Making of Homeric Verse, ed. A. Parry, Oxford, Clarendon.
Peirce, C.S. (1868), 'Some consequences of four incapacities'. In Hartshorne and Weiss (1931–5): 5.264–317.
Peirce, C.S. (1873), 'On the nature of signs'. In Hoopes (1991), pp.141–3.
Peirce, C.S. (1897), 'Ground, object and interpretant'. In Hartshorne and Weiss (1931–5): 2.227–9.
Peirce, C.S. (1902), 'Leading principle'. In Hartshorne and Weiss (1931–5): 2.588–9.
Peirce, C.S. (1906), 'Prolegomena to an apology for pragmatism'. In Hartshorne and Weiss (1931–5): 4.530–72.
Peters, F.E. (1967), Greek Philosophical Terms, New York, New York University Press.
Plato, Complete Works, ed. J.M. Cooper, Indianapolis, Hackett, 1997.
Popper, K.R. (1972), Conjectures and Refutations, 4th edn, London, Routledge & Kegan Paul.
Quintilian, Institutio Oratoria, ed. and trans. H.E. Butler, London, Heinemann (Loeb Classical Library), 1920.
Reid, T. (1764), Inquiry and Essays, ed. R.E. Beanblossom and K. Lehrer, Indianapolis, Hackett, 1983.
Renfrew, C. (1994), 'Towards a cognitive archeology'. In Renfrew, C. and Zubrow, E.B.W. (eds), The Ancient Mind. Elements of Cognitive Archeology, Cambridge, Cambridge University Press, pp.3–12.
Renfrew, C. (2007), Prehistory. The Making of the Human Mind, London, Weidenfeld & Nicolson.
Robins, R.H. (1997), A Short History of Linguistics, 4th edn, London, Longman.
Robinson, R. (1954), Definition, Oxford, Clarendon.
Romaine, S. (1994), Language in Society. An Introduction to Sociolinguistics, Oxford, Oxford University Press.
Rorty, R. (1980), Philosophy and the Mirror of Nature, rev. edn, Princeton, Princeton University Press.
Russell, B.A.W. (1919), Introduction to Mathematical Philosophy, London, Allen & Unwin.
Russell, B.A.W. (1946), History of Western Philosophy, London, Allen & Unwin.
Russell, B.A.W. (1950), 'Is mathematics purely linguistic?'. Repr. in The Collected Papers of Bertrand Russell, Vol. 11, ed. J.G. Slater, London, Routledge, 1997, pp.352–64.
Ryle, G. (1954), Dilemmas, Cambridge, Cambridge University Press.
Ryle, G. (1966), Plato's Progress, Cambridge, Cambridge University Press.
Sacks, O. (1986), The Man Who Mistook His Wife for a Hat, London, Picador.
Saussure, F. de (1922), Cours de linguistique générale, 2nd edn, Paris, Payot. Trans. R. Harris, F. de Saussure, Course in General Linguistics, London, Duckworth, 1983.
Schmandt-Besserat, D. (1992), Before Writing. Vol.1. From Counting to Cuneiform, Austin, University of Texas Press.
Scribner, S. and Cole, M. (1981), The Psychology of Literacy, Cambridge Mass., Harvard University Press.
Searle, J.R. (1977), 'Re-iterating the differences: a reply to Derrida', Glyph 1.
Searle, J.R. (1992), The Rediscovery of the Mind, Cambridge Mass., Massachusetts Institute of Technology Press.
Shanker, S.G. (1987), Wittgenstein and the Turning-Point in the Philosophy of Mathematics, Albany, State University of New York Press.
Stanovich, K.E. (1999), Who Is Rational?, Mahwah N.J., Erlbaum.
Street, B.V. (1984), Literacy in Theory and Practice, Cambridge, Cambridge University Press.
Sutton, J. (2004), 'Representation, levels, and context in integrational linguistics and distributed cognition', Language Sciences 26: 503–524.
Tammet, D. (2006), Born on a Blue Day, London, Hodder & Stoughton.
Taylor, T.J. (1997), Theorizing Language. Analysis, normativity, rhetoric, history, Oxford, Pergamon.
Thomas, I. (1939), 'Arithmetical notation and the chief arithmetical operations', Greek Mathematical Works I. From Thales to Euclid, London, Heinemann, pp.41–9.
Thomas, R. (1989), Oral Tradition and Written Record in Classical Athens, Cambridge, Cambridge University Press.
Thomas, R. (1992), Literacy and Orality in Ancient Greece, Cambridge, Cambridge University Press.
Tylor, E.B. (1871), Primitive Culture, 2 vols, London, Murray. Page references are to the 6th edn, London, 1920.
Tylor, E.B. (1881), Anthropology, London. Repr. London, Watts, 2 vols, 1931. Page references are to the reprint.
Van der Leeuw, G. (1928), La structure de la mentalité primitive, Strasbourg, Imprimerie Alsacienne.
Vico, G. (1744), Scienza Nuova, 3rd edn, trans. T.G. Bergin and M.H. Fisch, Ithaca, Cornell University Press, 1984. Quotations are from and page references to this translation.
Vygotsky, L.S. (1962), Thought and Language, trans. E. Hanfmann and G. Vakar, Cambridge Mass., MIT Press.
Warnock, G.J. (1969), English Philosophy since 1900, 2nd edn, London, Oxford University Press.
Watson, J.B. (1924), Behaviorism, People's Institute Publishing Co. Quotations are from and page references to the reprint by Norton, New York, 1970.
Wells, H.G. (1946), A Short History of the World, rev. edn, Harmondsworth, Penguin.
Whately, R. (1840), Elements of Logic, 7th edn, London, Fellowes.
Whitaker, C.W.A. (1996), Aristotle's De Interpretatione. Contradiction and Dialectic, Oxford, Clarendon.
Whitehead, A.N. (1917), The Organisation of Thought, London, Williams & Norgate.
Whitehead, A.N. and Russell, B.A.W. (1910), Principia Mathematica, Cambridge, Cambridge University Press.
Whitney, W.D. (1880), The Life and Growth of Language, New York, Appleton.
Whorf, B.L. (1936), 'A linguistic consideration of thinking in primitive communities'. In Carroll 1956: 65–86.
Whorf, B.L. (1939), 'The relation of habitual thought and behavior to language'. In Carroll 1956: 134–159.
Whorf, B.L. (1940), 'Science and linguistics'. In Carroll 1956: 207–19.
Whorf, B.L. (1942), 'Language, mind, and reality'. In Carroll 1956: 246–70.
Wiedemann, T.E.J. (1996), 'Barbarian'. In Hornblower, S. and Spawforth, A. (eds), The Oxford Classical Dictionary, 3rd edn, Oxford, Oxford University Press, p.233.
Wilson, B.R. (1970), 'A sociologist's introduction'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.vii–xviii.
Winch, P. (1958), The Idea of a Social Science, London, Routledge & Kegan Paul. Repr. of the 2nd edn (1990) in Routledge Classics, Abingdon, 2008. Page references are to this reprint.
Winch, P. (1970), 'Understanding a primitive society'. In Wilson, B.R. (ed.), Rationality, Oxford, Blackwell, pp.78–111.
Wittgenstein, L. (1974), Philosophical Grammar, ed. R. Rhees, trans. A. Kenny, Oxford, Blackwell.
Wittgenstein, L. (1975), Philosophical Remarks, ed. R. Rhees, trans. R. Hargreaves and R. White, Oxford, Blackwell.
Wittgenstein, L. (1979), Remarks on Frazer's Golden Bough, trans. A.C. Miles, Doncaster, Brynmill.
Wittgenstein, L. (2001), Philosophical Investigations, trans. G.E.M. Anscombe, 3rd edn, Oxford, Blackwell.
Wolf, M. (2008), Proust and the Squid. The Story and Science of the Reading Brain, Cambridge, Icon.
Woolley, C.L. (1963), 'The beginnings of civilization'. In Hawkes, J. and Woolley, C.L., Prehistory and the Beginnings of Civilization Vol.1 (in History of Mankind. Cultural and Scientific Development), UNESCO, London, Allen & Unwin, pp.357–854.
Zaslavsky, C. (1973), Africa Counts, Westport, Lawrence Hill.
Index

A
Abelard, P., 148
Abrahams, R., 76
Adam, 45
Agassi, J., 28, 174
Allan, D.J., 80
Ambrose-Grillet, J., 145
Annas, J., 116
Aquinas, St T., 14
arbitrariness, 97, 98, 150
Aristotle, xv, 8, 10, 15, 21, 28, 29, 34, 42, 44, 60, 69, 79–110, 116, 124, 130–132, 142, 148, 149, 151–153, 155–157, 161, 162, 164–171, 174, 175
Arnauld, A., 46, 47, 108
Austin, J.L., 157, 175
Ayer, A.J., 158

B
Bacon, F., 12, 169
Baker, G., 131
Bain, A., 108, 109
Basson, A.H., 79, 144
behaviourism, 2, 3, 57, 58, 172
Bell, E.T., 111
Bennett, J., 170, 171
Bennett, M.R., xii, 4
Berkeley, G., 12, 13
Blackburn, S., 135
Bloom, A., 5, 6, 7
Bloomfield, L., xiv, 3, 57, 58, 137, 138, 142
Boas, F., 18, 30, 31, 55, 56, 57
Boole, G., 149, 150
Bopp, F., 13, 14
Boyle's law, 114
Braille, L., 142

C
Carr-West, J., xi
Cassirer, E., 32–35
Champollion, J.F., 121
Chandor, A., 152
Chomsky, A.N., 144–146
cognitive archeology, 43
cognitive decontextualization, 161, 165
Cole, M., 41
comparative philology, 44, 47
Comte, A., 23
Corcoran, J., 144
Cornford, F.M., 69, 70
cotemporality, 141
Coulmas, F., 12, 13, 75, 121

D
Daniels, P.T., 64
Darwin, C., 22, 23
definitions, 96, 98–100, 102, 110, 114, 145, 149, 150, 155, 167, 169
Derrida, J., 156, 157, 174
Descartes, R., 1, 2, 5, 14, 168
Dewey, J., 151
dictionaries, 100, 140, 141, 154, 163
Dionysius Thrax, 123, 132, 147, 148
Donald, M., 64
Douglas, M., 17
Durkheim, E., 6, 7, 49–53, 71

E
Einstein, A., 149
Eisenstein, E.L., 151
essences, 96, 99, 100, 102, 105, 156, 167
Evans-Pritchard, E.E., 18–20, 27, 37

F
Fermat's last theorem, 112
Finnegan, R., 75, 76
Firth, R., 19, 20, 36, 37
Frazer, J.G., 25–29
Frege, G., 113, 114, 152
Freud, S., 29, 164

G
Galileo, 114
Galton, F., 18
Gardiner, P., 14
Gelb, I.J., 121
Gellner, E., 169
Goody, J., 18, 71–73, 77, 100
Graham, G., 15
grammar, 44, 58, 59, 79, 81, 90, 96, 100, 101, 123, 124, 130–132, 139, 142, 143, 145–148, 154
Green, K., 119, 146
Greenfield, S., xi–xiii, 77
Grice, H.P., 154–156
Grote, G., 24

H
Hacker, P.M.S., xii, 4
Hallpike, C.R., 18, 161
Hampshire, S., 4
Harris, W.V., 77, 80, 137
Havelock, E.A., 69–71, 77, 80
Heath, T., 88
Hecataeus of Miletus, 21
Heraclitus, 135
Herodotus, xv, 20, 21, 24, 25, 28, 136
Hockett, C.F., 72
Homer, 70, 142, 147
homonymy, 97–99, 124, 130, 169
Humboldt, W. von, 48
Hume, D., 74, 175
Hutton, C.M., 11

I
implicatures, 155
Inhelder, B., 161
integration, 53, 117, 121, 130, 132, 134, 141, 147, 151, 160–164, 168–177
International Phonetic Alphabet, 139
Isocrates, 136, 137

J
Jarvie, I.C., 28, 174
Jespersen, O., 116, 119
Jevons, W., 152
Johnson, S., 63

K
Kant, I., 74, 86, 158, 175
Kenny, A., 114
Kenyon, F., 1
Kipling, R., 169
Kneale, M., 80, 87, 89, 90, 93
Kneale, W., 80, 87, 89, 90, 93
Kramer, S.N., 100

L
Lancelot, C., 46, 47
Langer, S., 170
language-games, 125–133
language myth, 79–95, 99, 130, 156, 161, 176
langue, 103, 142
law of excluded middle, 158, 167
law of identity, 93
law of non-contradiction, 158, 159, 167–169
Lear, J., 166, 167
Lévi-Strauss, C., 18, 43, 74, 75
Lévy-Bruhl, L., 30–37, 49, 53–57, 60, 160, 161, 169
Lienhardt, G., 17, 27
Linell, P., 12
linguistic relativity, 55–60
linguistic typology, 48, 49
Littleton, C.S., 30, 31
Locke, J., xiv, 105, 106, 151
Logan, R., 76
logical constants, 95
logical form, 91, 92, 94, 123, 124, 131, 132, 144–146
Love, N.L., 136, 139, 140
Lukes, S., 28, 29
Lull, R., 152
Luria, A., 37–42, 68, 148

M
McLuhan, M., 73, 74
MacIntyre, A., 31, 32
Mair, L., 19, 20
Malefijt, A. de W., 21, 29
Mallery, G., 63
Marx, K., 117
Marxism, 39
mathematical notation, 88, 124
Mauss, M., 49–53
Mautner, T., 144
memory, 121, 122, 141, 172
Michelangelo, 164
Mill, J.S., 108
Moore, G.E., 3
Mueller, I., 91, 92
Müller, F.M., 18, 23, 24, 48–50, 53
musical notation, 139

N
names, 44–46, 52, 71, 90–98, 102, 105, 106, 108–111, 115, 126
Newton, I., 149
Nicole, P., 46, 108
Nietzsche, F., 14, 177

O
O'Connor, D.J., 79, 144
Olson, D.R., 42
Ong, W.J., 64–71, 75, 87, 90, 122
operational discriminations, 125–133, 134, 135, 138, 140, 152, 175

P
parole, 103
parts of speech, 44, 48, 101, 126, 136
Pascal, B., 5, 14
Peirce, C.S., 8–11, 17, 135, 151, 153, 157, 163, 171
Piaget, J., 35, 161
Plato, 2, 14, 23, 69, 70, 81–83, 86–88, 95, 110, 113, 121, 124, 134, 136, 142, 143, 150, 162, 165, 166
Popper, K., 149
Port-Royal, 46, 107
prelogicality, 30–43, 50, 54, 160
propositions, 103–105, 107–109, 120, 144, 152, 159, 169, 177
psychocentrism, 45, 46, 49, 52, 53, 92, 106, 107, 166
Pythagoras, 110–113, 117, 120, 124

R
Reid, T., 3
Renfrew, C., 4, 43, 64
reocentrism, 45, 53, 60, 99, 105, 106, 166
Robins, R.H., 101, 137, 142
Robinson, R., 28, 99
Rochester, Earl of, xiv
Romaine, S., 42
Rorty, R., 145, 146
Rousseau, J-J., 62, 76
Russell, B.A.W., xiii, 93, 110, 118–120, 123, 146, 149
Ryle, G., 70, 164, 174, 175

S
Sacks, O., 112
Sapir, E., 55
Saussure, F. de, xiv, 12–14, 58, 59, 74, 123, 142, 150, 151, 153, 163
Schlegel, A.W. von, 48
Schlegel, F. von, 48
Schmandt-Besserat, D., 111, 118–121, 128, 130
Schopenhauer, A., 14
Scribner, S., 41
scriptism, 11–13, 28, 41, 60, 90, 95, 96, 102, 104, 105, 124, 137–139, 145, 146, 149, 153–157, 159, 165, 173, 174, 176
Searle, J.R., 4, 157, 158
Socrates, xii, 23–25, 38, 70, 82, 83, 85, 86, 102, 143, 165, 166, 169
Spinoza, B., 14
Stanovich, K., 161
Street, B.V., 77
structuralism, 58, 59, 74, 107
suppositio materialis, 89–91
Sutton, J., 4
syllogisms, 10, 34, 37, 38, 41, 42, 68, 71, 82–88, 91–93, 95, 96, 98, 99, 101, 105, 106, 108, 110, 148, 151, 161, 164–167, 171, 175
synonymy, 97

T
Taylor, T.J., 11, 12
technological determinism, 75, 76
telementation, 84, 161
Thomas, R., 14, 77, 135, 148
Thomas of Erfurt, 176
truth, 99, 103–105, 107, 108, 113, 119, 123, 124, 132, 133, 149, 158, 165, 166, 169
Tylor, E.B., 61–65, 160
type-token distinction, 10, 96, 127, 135, 136, 139, 140, 153, 157, 163

U
use-mention distinction, 89, 103

V
Van der Leeuw, G., 18
variables, 87–96, 104, 124, 150, 151
Vico, G., 50, 62
Vienna Circle, 123
Vygotsky, L., 35, 36, 37

W
Warnock, G., xii
Watson, J.B., 2
Wells, H.G., 62, 141, 147
Whately, R., 108
Whitehead, A.N., 15, 110
Whitney, W.D., 47
Whorf, B.L., 55–60
Wiedemann, T.E.J., 21
Wilkins, J., 13
William of Ockham, 91
Wilson, B.R., 20
Winch, P., 148, 151, 170, 171
Wittgenstein, L., xiii, 27, 119, 123–133, 175
Wolf, M., xi, xii, xiii, 77
Wolff, C., 158
Woolley, C.L., 64

Z
Zaslavsky, C., 111