Resources for the Knowledge-Based Economy

KNOWLEDGE IN ORGANIZATIONS
Laurence Prusak

KNOWLEDGE MANAGEMENT AND ORGANIZATIONAL DESIGN
Paul S. Myers

KNOWLEDGE MANAGEMENT TOOLS
Rudy L. Ruggles, III

THE STRATEGIC MANAGEMENT OF INTELLECTUAL CAPITAL
David A. Klein

Knowledge Management Tools

Rudy L. Ruggles III
Editor

Butterworth-Heinemann
Boston Oxford Johannesburg Melbourne New Delhi Singapore
Copyright © 1997 by Butterworth-Heinemann

A member of the Reed Elsevier group

All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Recognizing the importance of preserving what has been written, Butterworth-Heinemann prints its books on acid-free paper whenever possible.

Library of Congress Cataloging-in-Publication Data

Knowledge management tools / Rudy L. Ruggles III, editor.
p. cm.-(Resources for the knowledge-based economy)
A collection of articles from journals and books published between 1964-1995.
Includes index.
ISBN 0-7506-9849-7 (pbk.)
1. Information resources management. 2. Information science. I. Ruggles, Rudy L., 1966- . II. Series.
T58.64.K66 1997
001-dc21 96-36542 CIP

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

The publisher offers special discounts on bulk orders of this book. For information, please contact:
Manager of Special Sales
Butterworth-Heinemann
313 Washington Street
Newton, MA 02158-1626
Tel: 617-928-2500
Fax: 617-928-2620

For information on all Business publications available, contact our World Wide Web home page at: http://www.bh.com/bb

10 9 8 7 6 5 4 3
Transferred to digital printing 2006
Table of Contents

Acknowledgments vii

1. Tools for Knowledge Management: An Introduction 1

PART ONE: Knowledge and Technology 9

2. Information Processing in Computer and Man (1964) 11
   Herbert Simon

3. Why Computers May Never Think Like People 31
   Hubert and Stuart Dreyfus

4. How Many Bulldozers for an Ant Colony? 51
   Daniel Crevier

PART TWO: Knowledge Generation 77

5. Information Systems and the Stimulation of Creativity 79
   David Bawden

6. The Light of Discovery 102
   George Johnson

PART THREE: Knowledge Codification 119

7. Information Retrieval and Cognitive Authority 121
   Patrick Wilson

8. Humans, Machines, and the Structure of Knowledge 145
   Harry M. Collins

PART FOUR: Knowledge Transfer 165

9. Collaborative Tools: A First Look 167
   Michael Schrage

10. Knowledge Synthesis and Computer-Based Communication Systems: Changing Behaviors and Concepts 187
    Kathleen Vian and Robert Johansen

PART FIVE: Implementation 209

11. Implementing and Integrating New Technical Processes and Tools 211
    Dorothy Leonard-Barton

12. Learning from Notes: Organizational Issues in Groupware Implementation 231
    Wanda J. Orlikowski

13. Cosmos vs. Chaos: Sense and Nonsense in Electronic Contexts 247
    Karl Weick

14. Future: A Knowledge-Based System for Threat Assessment 261
    P. J. de Jongh, K. J. Carden, and N. A. Rogers

PART SIX: What Next? 273

15. Webs of Cognition 275
    Daniel McNeill and Paul Freiberger

16. Into the Future 287
    Stan Franklin
Acknowledgments

I would like to thank Larry Prusak for blazing the trail in the study of knowledge management, for his interest and guidance in my research, and for inviting me to edit this volume. All at Ernst & Young’s Center for Business Innovation have been very supportive of this project, and I would particularly like to thank Suzanne Connolly for her assistance and hard work in getting permission for us to reprint these selections. Finally, I thank all of the authors and their copyright holders for allowing us to incorporate their ideas into this book, enabling us to share these ideas with others in the context of knowledge management.
Tools for Knowledge Management: An Introduction

Rudy L. Ruggles III
Man is a tool-using animal. . . . Without tools he is nothing, with tools he is all.
-Thomas Carlyle (Essayist & historian, 1795-1881)
KNOWLEDGE MANAGEMENT

For thousands of years, humans have been discussing the meaning of knowledge, what it is to know something, and how people can generate and share new knowledge. It is interesting to consider, therefore, that despite the pervasiveness of epistemological discussions throughout history, it is only in the past few years that the world of business has begun to recognize the importance of knowledge as a resource. Individual and organizational knowledge has been invisible on balance sheets, overlooked in reward and incentive systems, and allowed to flow out of companies en masse, unrecognized and uncaptured.

Recently, however, knowledge has come into its own in the business world. Some companies, like Skandia AFS for example, have begun to produce supplementary annual reports reflecting their intellectual capital. Balanced scorecard performance measurement plans capture the value of intangibles and financials simultaneously to provide a more complete picture of organizational health. Some organizations, Ernst & Young LLP among them, have instituted Chief Knowledge Officers and whole knowledge management processes and infrastructures. Firms have realized that, while managing data and information is important, true competitive advantage lies in leveraging the unique, powerful knowledge of the organization.

Knowledge management covers three main knowledge activities: generation, codification, and transfer. Other books in this series of readers cover these areas in more detail, but a brief description of each is in order here. Although there are many definitions of knowledge in the world, with several varieties represented even within this book, the working definition I use is that knowledge is a fluid mix of contextual information, values, experiences, and rules. It comes in many forms, including process knowledge (how-to), catalog knowledge (what is), and experiential knowledge (what was). All of these types are similarly generated, codified, and transferred.

Knowledge generation includes all activities which bring to light knowledge which is “new,” whether to the individual, to the group, or to the world. It includes activities such as creation, acquisition, synthesis, fusion, and adaptation. Knowledge codification is the capture and representation of knowledge so that it can be re-used either by an individual or by an organization. Knowledge transfer involves the movement of knowledge from one location to another and its subsequent absorption. Generation, codification, and transfer all occur constantly, so management itself does not create these actions. The power of knowledge management is in allowing organizations to explicitly enable and enhance the productivity of these activities and to leverage their value for the group as well as for the individual.

This compendium is an integral part of this series about knowledge. Combined, the books of this series reflect the many facets of knowledge management, expressed through the works of individuals from a wide variety of fields and disciplines. This volume has a very specific place in this collection: it reflects the wide variety of thoughts and perspectives on the use and usefulness of tools in managing knowledge in organizations. It is my intention, as the editor of this volume, for readers of this book to be able to make better decisions about how to gain greater results in managing their organization’s knowledge by effectively and appropriately leveraging the potential power of tools.
TOOLS

Tools are defined, for the purposes of this work, as technologies which support the performance of activities or actions. As reflected by the quotation which begins this chapter, man is a tool-user from way back. According to anthropologist Jane Lancaster, “An estimation of two million years of tool use prior to handaxe cultures and Homo erectus is undoubtedly conservative.”¹ In fact, human tool use has often been cited as one of the primary drivers of human evolution. Theorists and scientists from Engels to Darwin maintain that one of the main reasons that humans walk upright is because their hands were specialized over time primarily for the job of using tools, with ambulation relegated to the legs alone. It was mankind’s ability to create and use tools which leveraged the ability of the human mind, the only competitive advantage humans had over animals in the fight for survival. In turn, many scholars also link tool use to the evolution of human cognitive and behavioral capacities as well.² If utilizing tools to enable more efficient manual activities has had these effects on humans so far, one wonders what the evolutionary impact of tools enabling intellectual activities might be as this human-tool co-evolution moves into the knowledge age.

In fact, humans have moved tools so far beyond the initial sticks and rocks that they are able to embed a certain amount of intelligence, and some would argue knowledge, into the tools themselves. With this rise in intelligence comes a new relationship with such tools. “Interactive” has become a prime descriptor of the new generation of technology, indicating a move from humans using tools to humans conversing with tools.

It is important to realize, however, that this human-tool conversation does not itself create more efficiency, greater effectiveness, or better innovations, the primary objectives of most tools. This is particularly true in the world of computers. Many studies of workplace productivity show no real increase in efficiency or effectiveness due to the use of computers. Often, people spend a great deal of time and energy fitting their computers to their jobs and their jobs to their computers. The fascination with advancing technologies occasionally overshadows the human element in the interaction. This is what happens in the human-tool co-evolution when the humans evolve much more slowly than the tools. As expressed by Donna Haraway, author of A Cyborg Manifesto, “Our machines are increasingly lively, and we are increasingly inert.” Our challenge is to keep pushing the human capacity in parallel with the technological capabilities, so that neither will hold the other back for any length of time. This is the realm of knowledge tools.

¹ Lancaster, Jane B., “On the Evolution of Tool-Using Behavior,” American Anthropologist, 70, 1968, pp. 56-66.
² Kurland, Jeffrey A. and Stephen J. Beckerman, “Optimal Foraging and Hominid Evolution: Labor and Reciprocity,” American Anthropologist, 87, 1985, pp. 73-92.
KNOWLEDGE TOOLS

Knowledge management tools are technologies, broadly defined, which enhance and enable knowledge generation, codification, and transfer. As with any tools, they are designed to ease the burden of work and to allow resources to be applied efficiently to the tasks for which they are most suited. It is important to note that not all knowledge tools are computer-based, as paper and pen can certainly be utilized to generate, codify, and transfer knowledge. For the purposes of this work, however, the tools covered are primarily the technological ones due to their quick evolution, dynamic capabilities, and organizational impacts. They are also the most expensive tools, and are worthy of the closest scrutiny.

True knowledge management tools are not data or information management tools with a 1990s title. They do different things. Data management tools allow organizations to generate, access, store, and analyze data, usually in the form of facts and figures, which can be considered “raw material.” Examples include data warehouses, data search engines, data modeling, and visualization tools. Information management tools enable the manipulation of information (i.e., data which informs in and of itself). Examples of these tools include automated information search and retrieval agents (or ’bots), basic decision support technologies, many executive information systems, and document management technology. All may be useful for the jobs they do, but such tools do not capture the complexity of context and the richness of knowledge. While knowledge management tools may indeed also handle data and information, the other types are not robust enough to truly facilitate knowledge management. Think about what it is to know a thing, versus simply having information about that thing. It is the difference between reading a description of the Mona Lisa and seeing the painting itself. Knowledge tools can help us see the paintings.

In the world of knowledge management, the role of the tool in the work is an even more difficult concept than it would initially appear. The crux of decades of discussion as to whether or not technologies can be used to help manage knowledge was captured as recently as the March 25, 1996 Time magazine cover article: Can machines think? If one answers yes, or possibly yes, the tools themselves can (may) generate, codify, and transfer knowledge. If not, the role of the tool is purely enabler, with the onus on the humans to conduct knowledge activities. The debate over the answer to this question, usually the domain of the field of artificial intelligence (AI), is continued in this volume to the extent that it applies to knowledge management.

This book also reflects my belief that, no matter what an individual’s answer to this question, focusing on the technology as the key to managing knowledge organizationally or individually neglects the economic, political, and social issues which are the keys to effective knowledge management. Too often people look to technology to solve the hard questions, when in fact the tools are the easiest part. A Stradivarius violin sounds just as terrible as a dimestore fiddle in the hands of a novice. The key is putting the right tools in the hands of people who know how to use them. Anyone who has heard Itzhak Perlman or the London Symphony Orchestra knows the results can be quite incredible. Business is no different.
THE SELECTIONS

I have assembled the enclosed articles and chapters from a wide variety of perspectives, including the fields of sociology, economics, computer sciences, and cognitive psychology, to create a discussion about knowledge tools. The first sections present issues and implications in various aspects of using tools in knowledge generation, codification, and transfer. Following these are descriptions of implementation and application considerations, and then a glance at what tools may look like in the future. These selected readings present a wide range of pertinent issues, creating a fertile field for continued discussion and deep and broad exploration.
Knowledge and Technology

In the first section, three articles reflect the debate over whether technology can indeed duplicate, or at least replicate, human thought. The first selection comes from Nobel Laureate Herbert Simon, who has long defended the possibility, in fact the actuality, that computers process information in ways which parallel human thought. Although the article is over 30 years old, its well-thought-through reasoning and straightforward approach present an interesting argument for the ability of the machine to duplicate human cognition, even when considering the significantly more primitive computing tools of the 1960s. Simon lays the groundwork for many of the arguments in this collection which stand in support of the ability of technological tools to capture, and in some cases replicate and even generate, knowledge.

Hubert and Stuart Dreyfus’s article, “Why Computers May Never Think Like People,” serves as the appropriate counterpoint to the notion that technology can cogitate. Without the ability to make intelligent decisions, without being able to incorporate “know-how” along with “know-what,” computers can be no more than conduits of human intelligence, devoid of context, distinctions, or true judgment. While technology may enable humans to manage their knowledge better, the Dreyfuses maintain that knowledge can never reside within the machine. In their view, the answer to Time’s question is no, and the knowledge activities will always rest squarely with the humans.

To provide a third perspective in this conversation, I have included a chapter from a book by Daniel Crevier, a chronicler of artificial intelligence’s history of successes and failures, discussing why machines have such a difficult time dealing with knowledge, despite their advancing processing power. By detailing the scale and scope of the human brain’s processing power, he paints a vivid picture of the gap which exists between today’s most sophisticated and powerful computers and the cerebral cortex. However, he balances the views of the section’s previous writers by allowing that this huge gap is being spanned, if slowly. It is this progress which makes the rest of the articles in this volume even more interesting.
Knowledge Generation

The section on generation tools begins with an article by David Bawden on whether information systems can actually contribute to knowledge generation. While the direct case used concerns research and development, the concepts apply to all whose job includes generating new ideas. This article squarely addresses when information systems, and more generally information environments, can contribute to knowledge generation. The focus is on technology as enabler. This is a good example of people leading the knowledge work, utilizing the tools at hand to support their own internal generation processes.

This is contrasted with the chapter from George Johnson’s book, Machinery of the Mind, which describes interesting cases where the tools, the computers, actually create new objects and ideas themselves. Stories such as these open the discussion as to what sorts of knowledge work can, or should, be automated. This, to me, leads to a deeper discussion important in understanding the human-tool relationship: Will people trust a machine’s creation/synthesis/perspective? Instead of NIH (not invented here) syndrome, will we hear more about NIBH (not invented by human) syndrome? Many organizations are already trusting neural networks and genetic algorithms to solve optimization problems in operations, financial services, and environmental scanning, but how far will this trust extend? As tool sophistication increases, it is likely that decisions about their use will focus less on whether technological tools should be used at all, and more on whether they will be used to augment (enable) or to automate (do) the work of people. The power of advanced technological capability can be alluring, but unless the result contributes value accepted by the whole system, the tool is useless.
Knowledge Codification

One of the most hotly contested areas of knowledge management involves codification, i.e., putting knowledge in various forms that can be leveraged and transferred. The field of artificial intelligence has long involved so-called “knowledge elicitation” and “knowledge engineering” in the development of expert systems. The section on codification begins with a chapter from Patrick Wilson’s book Second-Hand Knowledge describing the interesting question of cognitive authority, i.e., how much trust there is in the validity of the knowledge. Also, is all knowledge captured for later sorting or is it pre-screened? In either case, who determines knowledge quality? Wilson looks at these questions through the eyes of library science, a field which has long dealt with these exact concerns. The activities of the librarian (e.g., establishing information sorting, cataloging, and retrieval mechanisms) are now required far beyond the bounds of the library, coming to rest on the shoulders of most every member of the information economy. Workers today, faced with the internet, Lotus Notes databases, organizational knowledge maps, etc., realize that they have much to learn from library science’s experience with codified knowledge.

In the second chapter in this section, an article by Harry Collins explains how utilizing the idea of behavior-specific action in capturing knowledge can help us get closer to striking a balance between symbol-type (explicit) knowledge and so-called “encultured” (tacit) knowledge. While Wilson talks about how to organize what you know you have codified, Collins describes how to codify what you have a hard time even recognizing. Through greater understanding of the differences Collins points out, the place of technology in capturing and handling all types of knowledge becomes clearer.
Knowledge Transfer

Knowledge transfer has received a tremendous amount of publicity recently with advances in groupware and networking tools, designed to enable the flow of knowledge among groups and individuals. The goal of such tools is ultimately shared memory and understanding. In fact, this is difficult to achieve because knowledge is “sticky,” alive, and rich. It is “sticky” because it is very tightly bound to the context which gives it meaning; without context it is information. Knowledge can be thought of as being alive in that it must be constantly attended to as it is ever-changing and growing. It also dies, goes out of date, becomes irrelevant and must be discarded, but who is its rightful steward? Lastly, it is rich in its multi-dimensionality, containing a tremendous amount of content, context, and experience. All three of these factors make it very difficult to distribute knowledge.

Tools can help. Advanced knowledge tools allow some context to be captured along with content, and they can create global work “spaces.” The sort of virtual work environments these capabilities allow have real advantages, but also major drawbacks. Michael Schrage, in a chapter drawn from his book Shared Minds, examines the idea of collaborative tools from the pen-and-paper level to the dynamic “groupware” level, examining the underlying strengths and weaknesses of tools built to share knowledge. Although he discusses how advanced technology can enable and enhance collaborative environments, he reminds us of a lesson that should run throughout this discussion of tools: The real purpose of design here is not to build knowledge tools but to build knowledge.

Vian and Johansen’s paper reflects on how electronic communications can enhance knowledge transfer and synthesis. I have included this piece to illustrate, consistent with the message above, that more important than the technology itself are the new ways people work together via electronic media. By describing behavioral patterns, Vian and Johansen concentrate on the process of knowledge synthesis through transfer. This selection addresses the strong influence technological tools have on the way people interact with each other and how they perform their work. As they point out, ignoring either side of the human-tool (or socio-technical) co-evolution will likely cause failure, if not of the work itself, certainly of the use of the technology.
Implementation

Having outlined the discussions around each of the various aspects of tool-supported knowledge management, the book goes on to deal with the implementation of such tools. This is the area where the most frequent problems with knowledge tools arise. Many times, organizations react to the idea of knowledge management by buying the tools, installing them, and then expecting overnight results. This “tool-centric” approach can waste a great deal of time and money. Groupware packages have seen the lion’s share of such treatment and, usually through no fault of the technology itself, have often been branded as failures. The four selections in this section were chosen to illustrate many of the pitfalls involved in implementing knowledge tools in organizations, offering guidance wherever possible.

A chapter from Dorothy Leonard-Barton’s book, The Wellsprings of Knowledge, starts off this section by presenting a framework for managing knowledge tool implementation as an innovation project, and not just an execution of plans. Wanda Orlikowski describes the lessons learned by one organization as it implemented Lotus Notes, the most popular of the groupware packages. Karl Weick’s piece deepens the discussion with a look at how people try to retain meaning in electronic interactions when the “sense-making” tools they use in life differ from those needed to make sense in electronic contexts. The Future system story closes this section as an interesting study in the development of a knowledge-based system, its roll-out, and the results it produced. Taken together, these four pieces reflect aspects of implementation ranging from strategic to personal, all of which must be considered if tools are to successfully support knowledge management.
What Next?

With the advent of more advanced technology, knowledge tools will become increasingly sophisticated. Two entertaining selections were chosen to represent some ideas for what knowledge tools might be able to do in the future. McNeill and Freiberger start this section with a chapter about how computers are being tasked with sorting through the “fuzziness” of human thought. One of the key attributes of knowledge is its context-sensitivity, and action in the face of changing contexts requires judgment, an extremely “fuzzy” capability. Tools which can handle such context-sensitivity, perhaps based on the brain’s own neural mechanisms, may not be too far in the future. The last chapter, from Stan Franklin’s book Artificial Minds, explores the future of the mechanisms of the mind from the perspectives of physicists, roboticists, and biologists. These groups all contribute interesting notions of how the workings of the human mind might be duplicated and utilized. If knowledge is thought of as residing in the mind of a “knower,” producing artificial knowers will have interesting implications for knowledge management.
SUMMARY

Tools and technologies are not the answer to the 5000-year-old questions surrounding knowledge. They can certainly facilitate the implementation of knowledge processes-the generation, transfer, and codification of knowledge-and in some cases they may be able to automate some kinds of knowledge work in these areas. Still, they must be taken in context and implemented as a part of the overall effort to leverage organizational knowledge through integration with the business strategy, the culture, the current processes, and the existing technologies. This book represents a dialog about how tools can facilitate the knowledge processes of an organization. These selections should help make discussible the many issues involved in choosing and using knowledge tools. Once aware of the pros and cons, strengths and weaknesses of knowledge tools, each person should be ready to begin to make informed decisions about how to incorporate such tools into his or her organization. Not an easy task, but worth the hard work if done knowledgeably.
PART ONE
Knowledge and Technology
Information Processing in Computer and Man (1964)

Herbert Simon

Reprinted with the permission of the copyright holder from American Scientist, vol. 52, no. 3, September 1964. Copyright 1964 by The Society of the Sigma Xi and reprinted by permission of the copyright owner.
Organizing a computer to perform complex tasks depends very much more upon the characteristics of the task environment than upon the “hardware”-the specific physical means for realizing the processing in the computer. Thus, all past and present digital computers perform basically the same kinds of symbol manipulations.

In programming a computer it is substantially irrelevant what physical processes and devices-electromagnetic, electronic, or what not-accomplish the manipulations. A program written in one of the symbolic programming languages, like ALGOL or FORTRAN, will produce the same symbolic output on a machine that uses electron tubes for processing and storing symbols, one that incorporates magnetic drums, one with a magnetic core memory, or one with completely transistorized circuitry. The program, the organization of symbol-manipulating processes, is what determines the transformation of input into output. In fact, provided with only the program output, and without information about the processing speed, one cannot determine what kinds of physical devices accomplished the transformations: whether the program was executed by a solid-state computer, an electron-tube device, an electrical relay machine, or a room full of statistical clerks! Only the organization of the processes is determinate. Out of this observation arises the possibility of an independent science of information processing.

By the same token, since the thinking human being is also an information processor, it should be possible to study his processes and their organization independently of the details of the biological mechanisms-the “hardware”-that implement them. The output of the processes, the behaviour of Homo cogitans, should reveal how the information processing is organized, without necessarily providing much information about the protoplasmic structures or biochemical processes that implement it. From this observation follows the possibility of constructing and testing psychological theories to explain human thinking in terms of the organization of information processes; and of accomplishing this without waiting until the neurophysiological foundations at the next lower level of explanation have been constructed.

Finally, there is a growing body of evidence that the elementary information processes used by the human brain in thinking are highly similar to a sub-set of the elementary information processes that are incorporated in the instruction codes of present-day computers. As a consequence it has been found possible to test information-processing theories of human thinking by formulating these theories as computer programs-organizations of the elementary information processes-and examining the outputs of computers so programmed. The procedure assumes no similarity between computer and brain at the “hardware” level, only similarity in their capacities for executing and organizing elementary information processes. From this hypothesis has grown up a fruitful collaboration between research in “artificial intelligence,” aimed at enlarging the capabilities of computers, and research in human cognitive psychology.

These, then, are the three propositions on which this discussion rests:

1. A science of information processing can be constructed that is substantially independent of the specific properties of particular information-processing mechanisms.
2. Human thinking can be explained in information-processing terms without waiting for a theory of the underlying neurological mechanisms.
3. Information-processing theories of human thinking can be formulated in computer programming languages, and can be tested by simulating the predicted behaviour with computers.
LEVELS OF EXPLANATION

No apology is needed for carrying explanation only to an intermediate level, leaving further reduction to the future progress of science. The other sciences provide numerous precedents, perhaps the most relevant being nineteenth-century chemistry. The atomic theory and the theory of chemical combination were invented and developed rapidly and fruitfully during the first three-quarters of the nineteenth century-from Dalton, through Kekulé, to Mendeleev-without any direct physical evidence for or description of atoms, molecules, or valences. To quote Pauling (1960):

Most of the general principles of molecular structure and the nature of the chemical bond were formulated long ago by chemists by induction from the great body of chemical facts. . . . The study of the structure of molecules was originally carried on by chemists using methods of investigation that were essentially chemical in nature, relating to the chemical composition of substances, the existence of isomers, the nature of the chemical reactions in which a substance takes part, and so on. From the consideration of facts of this kind Frankland, Kekulé, Couper, and Butlerov were led a century ago to formulate the theory of valence and to write the first structural formulas for molecules, van’t Hoff and le Bel were led to bring classical organic stereochemistry into its final form by their brilliant postulate of the tetrahedral orientation of the four valence bonds of the carbon atom, and Werner was led to his development of the theory of the stereochemistry of complex inorganic substances. (pp. 3-4)
The history this passage outlines is worth pondering, because the last generation of psychologists has engaged in so much methodological dispute about the nature, utility, and even propriety, of theory. The vocal, methodologically self-conscious, behaviourist wing of experimental psychology has expressed its scepticism of “unobserved entities” and “intermediate constructs.”¹ Sometimes it has seemed to object to filling the thinking head with anything whatsoever. Psychologists who rejected the empty-head viewpoint, but who were sensitive to the demand for operational constructs, tended to counter the behaviourist objections by couching their theories in physiological language.²

The example of atomic theory in chemistry shows that neither horn of this dilemma need be seized. On the one hand, hypothetical entities, postulated because they were powerful and fruitful for organizing experimental evidence, proved exceedingly valuable in that science, and did not produce objectionable metaphysics. Indeed, they were ultimately legitimized in the present century by “direct” physical evidence. On the other hand, the hypothetical entities of atomic theory initially had no physical properties (other than weight) that could explain why they behaved as they did. While an electrical theory of atomic attraction predated valence theory, the former hypothesis actually impeded the development of the latter and had to be discredited before the experimental facts could fall into place. The valence of the mid-century chemist was a “chemical affinity” without any underlying physical mechanism. So it remained for more than half a century until the electron-shell theory was developed by Lewis and others to explain it.

Paralleling this example from chemistry, information-processing theories of human thinking employ unobserved entities-symbols-and unobserved processes-elementary information processes. The theories provide explanations of behaviour that are mechanistic without being physiological. That they are mechanistic-that they postulate only processes capable of being effected by mechanism-is guaranteed by simulating the behaviour predicted on ordinary digital computers. . . . Simulation provides a basis for testing the predictions of the theories, but does not imply that the protoplasm in the brain resembles the electronic components of the computer.
A SPECIFIC INFORMATION-PROCESSING THEORY: PROBLEM SOLVING IN CHESS

Information-processing theories have been constructed for several kinds of behaviour, and undertake to explain behaviour in varying degrees of detail. As a first example, we consider a theory that deals with a rather narrow and special range of human problem-solving skill, attempting to explain the macroscopic organization of thought in a particular task environment.

Good chess players often detect strategies-called in chess “combinations”-that impose a loss of a piece or a checkmate on the opponent over a series of moves, no matter what the latter does in reply. In actual game positions where a checkmating possibility exists, a strong player may spend a quarter of an hour or more discovering it, and verifying the correctness of his strategy. In doing so, he may have to look ahead four or five moves, or even more.³ If the combination is deep, weaker players may not be able to discover it at all, even after protracted search. How do good players solve such problems? How do they find combinations?

A theory now exists that answers these questions in some detail. First, I shall describe what it asserts about the processes going on in the mind of the chess player as he studies the position before him, and what it predicts about his progress in discovering an effective strategy. Then we can see to what extent it accounts for the observed facts. The actual theory is a computer program couched in a list-processing language, called Information Processing Language V (IPL-V). Our account of the theory will be an English-language translation of the main features of the program.⁴

The statement of the theory has five main parts. The first two of these specify the way in which the chess player stores in memory his representation of the chess position, and his representation of the moves he is considering, respectively. The remaining parts of the theory specify the processes he has available for extracting information from these representations and using that information: processes for discovering relations among the pieces and squares of the chess position, for synthesizing chess moves for consideration, and for organizing his search among alternative move sequences. We shall describe briefly each of these five parts of the theory.

The theory asserts, first of all, that the human chess player has means for storing internally, in his brain, encoded representations of the stimuli presented to him. In the case of a highly schematized stimulus like a chess position, the internal symbolic structure representing it can be visualized as similar to the printed diagram used to represent it in a chess book. The internal representation employs symbols that name the squares and the pieces, and symbolizes the relations among squares, among pieces, and between squares and pieces. For example, the internal representation symbolizes rather explicitly that a piece on the King’s square is a Knight’s-move away, in a SSW direction, from a piece on the third rank of the Queen’s file. Similarly, if the King’s Knight is on the King’s Bishop’s Third square (KB3), the representation associates the symbol designating the Knight with the symbol designating the KB3 square, and the symbol designating the square with that designating the Knight. On the other hand, the representation does not symbolize directly that two pieces stand on the same diagonal. Relations like this must be discovered or inferred from the representation by the processes to be discussed below.
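To make the representational claim concrete, the fragment below sketches the two-way piece-square associations the theory postulates. It is a present-day Python illustration of the idea, not Simon’s IPL-V program; the class and method names are mine.

```python
# A minimal sketch of the postulated position representation: the piece
# is associated with its square, and the square with its piece, while a
# relation such as "on the same diagonal" is left to be inferred.

class Position:
    def __init__(self, placements):
        # placements: square name -> piece name, e.g. {"KB3": "King's Knight"}
        self.piece_on = dict(placements)                         # square -> piece
        self.square_of = {p: s for s, p in placements.items()}   # piece -> square

    def piece_at(self, square):
        """Direct lookup: which man, if any, stands on the named square."""
        return self.piece_on.get(square)

    def square_for(self, piece):
        """Direct lookup in the other direction: where the named man stands."""
        return self.square_of.get(piece)

pos = Position({"K1": "King", "KB3": "King's Knight"})
print(pos.piece_at("KB3"))      # -> King's Knight
print(pos.square_for("King"))   # -> K1
# Nothing here records that two men share a diagonal; that relation
# must be computed from the square names by a separate process.
```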
Asserting that a position is symbolized internally in this way does not mean that the internal representations are verbal (any more than the diagrams in a chess book are verbal). It would be more appropriate, in fact, to describe the representations as a “visual image,” provided that this phrase is not taken to imply that the chess player has any conscious explicit image of the entire board in his “mind’s eye.”

The chess player also has means for representing in memory the moves he is considering. He has symbol-manipulating processes that enable him, from his representations of a position and of a move, to use the latter to modify the former-the symbolic structure that describes the position-into a new structure that represents what the position would be after the move. The same processes enable him to “unmake” a move-to symbolize the position as it was before the move was considered. Thus, if the move that transfers the King’s Knight from his original square (KN1) to the King’s Bishop’s Third square (KB3) is stored in memory, the processes in question can alter the representation of the board by changing the name of the square associated with the Knight from KN1 to KB3, and conversely for unmaking the move.

The chess player has processes that enable him to discover new relations in a position, to symbolize these, and to store the information in memory. For example, in a position he is studying (whether the actual one on the board, or one he has produced by considering moves), he can discover whether his King is in check-attacked by an enemy man; or whether a specified piece can move to a specified square; or whether a specified man is defended. The processes for detecting such relations are usually called perceptual processes. They are characterized by the fact that they are relatively direct: they obtain the desired information from the representation with a relatively small amount of manipulation.

The chess player has processes, making use of the perceptual processes, that permit him to generate or synthesize for his consideration moves with specified properties-for example, to generate all moves that will check the enemy King. To generate moves having desired characteristics may require a considerable amount of processing. If this were not so, if any kind of move could be discovered effortlessly, the entire checkmating program would consist of the single elementary process: DISCOVER CHECKMATING MOVES. An example of these more complex, indirect processes is a procedure that would discover certain forking moves (moves that attack two pieces simultaneously) somewhat as follows. Find the square of the opposing Queen. Find all squares that lie a Knight’s-move from this square. Determine for each of these squares whether it is defended (whether an opposing piece can move to it). If not, test all squares a Knight’s-move away from it to see if any of them has a piece that is undefended or that is more valuable than a Knight.
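That fork-finding procedure translates almost step for step into code. The sketch below is a loose, runnable illustration rather than the original program: the coordinate board model, the piece encoding, and the deliberately crude defended() test are stand-in assumptions of mine.

```python
# Sketch of the described procedure for finding Knight forks against
# the opposing Queen. Files and ranks run 0-7; "bQ" is the black Queen.

VALUES = {"N": 3, "B": 3, "R": 5, "Q": 9, "K": 100}
KNIGHT_STEPS = [(1, 2), (2, 1), (2, -1), (1, -2),
                (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(sq):
    f, r = sq
    return [(f + df, r + dr) for df, dr in KNIGHT_STEPS
            if 0 <= f + df < 8 and 0 <= r + dr < 8]

def defended(board, sq, side):
    # Crude stand-in for "an opposing piece can move to it": here only
    # defence by a Knight of the given side is noticed.
    return any(board.get(s, "").startswith(side + "N") for s in knight_moves(sq))

def knight_fork_squares(board, enemy):
    """Squares where a Knight would attack the enemy Queen plus another
    man that is undefended or more valuable than a Knight."""
    queen_sq = next(s for s, p in board.items() if p == enemy + "Q")
    forks = []
    for sq in knight_moves(queen_sq):            # a Knight's-move from the Queen
        if sq in board or defended(board, sq, enemy):
            continue                             # landing square must be free and safe
        for t in knight_moves(sq):               # the fork's second prong
            p = board.get(t, "")
            if p.startswith(enemy) and t != queen_sq and (
                    not defended(board, t, enemy) or VALUES[p[1]] > VALUES["N"]):
                forks.append(sq)
                break
    return forks

board = {(0, 7): "bQ", (4, 7): "bR"}
print(knight_fork_squares(board, "b"))   # -> [(2, 6)], forking Queen and Rook
```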
Finally, the chess player has processes for organizing a search for mating combinations through the “tree” of possible move sequences. This search makes use of the processes already enumerated, and proceeds as follows.

The player generates all the checking moves available to him in the given position, and for each checking move, generates the legal replies open to his opponent. If there are no checking moves, he concludes that no checkmating combination can be discovered in the position, and stops his search. If, for one of the checking moves, he discovers there are no legal replies, he concludes that the checking move in question is a checkmate. If, for one of the checking moves, he discovers that the opponent has more than four replies, he concludes that this checking move is unpromising, and does not explore it further.

Next, the player considers all the checking moves (a) that he has not yet explored and (b) that he has not yet evaluated as “CHECKMATE” or “NO MATE.” He selects the move that is most promising-by criteria to be mentioned presently-and pushes his analysis of that move one move deeper. That is, he considers each of its replies in turn, generates the checking moves available after those replies, and the replies to those checking moves. He applies the criteria of the previous paragraph to attach “CHECKMATE” or “NO MATE” labels to the moves where he can. He also “propagates” these labels to antecedent moves. For example, a reply is labelled CHECKMATE if at least one of its derivative checking moves is CHECKMATE; it is labelled NO MATE if all the consequent checking moves are so labelled. A checking move is labelled CHECKMATE if all of the replies are so labelled; it is labelled NO MATE if at least one reply is so labelled.

The most promising checking move for further exploration is selected by these criteria: that checking move to which there are the fewest replies receives first priority.⁵ If two or more checking moves are tied on this criterion, a double check (check with two pieces) is given priority over a single check. If there is still a tie, a check that does not permit a recapture by the opponent is given priority over one that does. Any remaining ties are resolved by selecting the check generated most recently.
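In outline, this search organization can be rendered as the following sketch. It is mine, not a transcription of the IPL-V program: it collapses the best-first bookkeeping into a depth-first recursion, keeps only the primary fewest-replies criterion (dropping the double-check, recapture, and recency tie-breakers), and stubs the move generator with a hand-built toy tree.

```python
# Sketch of the mating search: explore checks in fewest-replies order,
# prune checks with more than four replies, and propagate labels
# (a check mates only if EVERY reply still leads to a mate).

def mate_search(position, checking_moves, replies, depth=5):
    checks = checking_moves(position)
    if not checks or depth == 0:
        return "NO MATE"                 # no checks here: abandon this line
    for check in sorted(checks, key=lambda c: len(replies(c))):
        rs = replies(check)              # positions after each legal reply
        if not rs:
            return "CHECKMATE"           # opponent has no legal reply
        if len(rs) > 4:
            continue                     # too many replies: unpromising
        if all(mate_search(r, checking_moves, replies, depth - 1) == "CHECKMATE"
               for r in rs):
            return "CHECKMATE"           # every reply runs into a mate
    return "NO MATE"

# Toy tree: Qh5+ forces the one reply ...Kg8, after which Qh7 is mate.
CHECKS = {"start": ["Qh5+"], "after ...Kg8": ["Qh7#"]}
REPLIES = {"Qh5+": ["after ...Kg8"], "Qh7#": []}

print(mate_search("start",
                  lambda pos: CHECKS.get(pos, []),
                  lambda move: REPLIES.get(move, [])))   # -> CHECKMATE
```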
A number of details have been omitted from this description, but it indicates the theory’s general structure and the kinds of processes incorporated. The theory predicts, for any chess position that is presented to it, whether a chess player will discover a mating combination in that position, what moves he will consider and explore in his search for the combination, and which combination (if there are several alternatives, as there often are) he will discover. These predictions can be compared directly with published analyses of historical chess positions or tape recordings of the thinking-aloud behaviour of human chess players to whom the same position is presented.

Now it is unlikely that, if a chess position were presented to a large number of players, all of them would explore it in exactly the same way. Certainly strong players would behave differently from weak players. Hence, the information-processing theory, if it is a correct theory at all, must be a theory only for players of a certain strength. On the other hand, we would not regard its explanation of chess playing as very satisfactory if we had to construct an entirely new theory for each player we studied.

Matters are not so bad, however. First, the interpersonal variations in search for chess moves in middle-game positions appear to be quite small for players at a common level of strength, as we shall see in a moment. Second, some of the differences that are unrelated to playing strength appear to correspond to quite simple variants of the program, altering, for example, the criteria that are used to select the most promising checking move for exploration. Other differences, on the other hand, have major effects on the efficacy of the search, and some of these, also, can be represented quite simply by variants of the program organization. Thus, the basic structure of the program, and the assumptions it incorporates about human information-processing capacities, provide a general explanation for the behaviour, while particular variants of this basic program allow specific predictions to be made of the behavioural consequences of individual differences in program organization and content.

The kinds of information the theory provides, and the ways in which it has been tested, can be illustrated by a pair of examples. Adrian de Groot (1964) has gathered and analysed a substantial number of thinking-aloud protocols, some of them from grand masters. He uniformly finds that, even in complicated positions, a player seldom generates a “tree” of more than 50 or 75 positions before he chooses his move. Moreover, the size of the tree does not depend on the player’s strength. The thinking-aloud technique probably underestimates the size of the search tree somewhat, for a player may fail to mention some variations he has seen, but the whole tree is probably not an order of magnitude greater than that reported.

In 40 positions from a standard published work on mating combinations where the information-processing theory predicted that a player would find mating strategies, the median size of its search tree ranged from 13 positions for two-move mates, to 53 for five-move mates. A six-move mate was found with a tree of 95 positions, and an eight-move mate with a tree of 108. (The last two mates, as well as a number of the others, were from historically celebrated games between grand masters, and are among the most “brilliant” on record.) Hence, we can conclude that the predictions of the theory on amount of search are quite consistent with de Groot’s empirical findings on the behaviour of highly skilled human chess players.

The second example tests a much more detailed feature of the theory. In the eight-move mate mentioned above, it had been known that by following a different strategy the mate could have been achieved in seven moves. Both the human grand master (Edward Lasker in the game of Lasker-Thomas, 1912) and the program found the eight-move mate. Examination of the exploration shows that the shorter sequence could only have been discovered by exploring a branch of the tree that permitted the defender two replies before exploring a branch that permitted a single reply. The historical evidence here confirms the postulate of the theory that players use the “fewest replies” heuristic to guide their search. (The evidence was discovered after the theory was constructed.) A second piece of evidence of the same sort has been found in a game between experts reported in Chess Life (December 1963). The winner discovered a seven-move mate, but overlooked the fact that he could have mated in three moves. The annotator of the game, a master, also overlooked the shorter sequence. Again, it could only have been found by exploring a check with two replies before exploring one with a single reply.
The “fewest replies” heuristic is not a superficial aspect of the players’ search, nor is its relevance limited to the game of chess. Most problem-solving tasks-for example, discovering proofs of mathematical theorems-require a search through a branching “tree” of possibilities. Since the tree branches geometrically, solving a problem of any difficulty would call for a search of completely unmanageable scope (astronomically large numbers arise frequently in estimating the magnitude of such searches), if there were not at hand powerful heuristics, or rules of thumb, for selecting the promising branches for exploration. Such heuristics permit the discovery of proofs for theorems (and mating combinations) with the limited explorations reported here.

The “fewest replies” heuristic is powerful because it combines two functions: it points search in those directions that are most restrictive for the opponent, giving him the least opportunity to solve his problems; at the same time, it limits the growth of the search tree, by keeping its rate of branching as low as possible. The “fewest replies” heuristic is the basis for the idea of retaining the initiative in military strategy, and in competitive activities generally, and is also a central heuristic in decision making in the face of uncertainty. Hence its appearance in the chess-playing theory, and in the behaviour of the human players, is not fortuitous.
PARSIMONIOUS AND GARRULOUS THEORIES

Granting its success in predicting both some general and some very specific aspects of human behaviour in chess playing, like the examples just described, the theory might be confronted with several kinds of questions and objections. It somehow fails to conform to our usual notions of generality and parsimony in theory.

First, it is highly specific-the checkmating theory purports to provide an explanation only of how good chess players behave when they are confronted with a position on the board that calls for a vigorous mating attack. If we were to try to explain the whole range of human behaviour, over all the enormous variety of tasks that particular human beings perform, we should have to compound the explanations from thousands of specific theories like the checkmate program. The final product would be an enormous compendium of “recipes” for human behaviour at specific levels of skill in specific task environments.⁶

Second, the individual theories comprising this compendium would hardly be parsimonious, judged by ordinary standards. We used about a thousand words above to provide an approximate description of the checkmate program. The actual program-the formal theory-consists of about three thousand computer instructions in a list-processing language, equivalent in information content to about the same number of English words. (It should be mentioned that the program includes a complete statement of the rules of chess, so that only a small part of the total is given over to the description of the player’s selection rules and their organization.)
Before we recoil from this unwieldy compendium as too unpleasant and unaesthetic to contemplate, let us see how it compares in bulk with theories in the other sciences. With the simplicity of Newtonian mechanics (why is this always the first example to which we turn?), there is, of course, no comparison. If classical mechanics is the model, then a theory should consist of three sentences, or a couple of differential equations. But chemistry, and particularly organic chemistry, presents a different picture. It is perhaps not completely misleading to compare the question “How does a chess player find a checkmating combination?” with a question like “How do photoreceptors in the human eye operate?” or “How is the carbohydrate and oxygen intake of a rabbit transformed into energy usable in muscular contraction?”

The theory of plant metabolism provides a striking example of an explanation of phenomena in terms of a substantial number of complex mechanisms. Calvin and Bassham (1962), in their book on The Photosynthesis of Carbon Compounds, introduce a figure entitled “carbon reduction pathways in photosynthesis” with the statement: “We believe the principal pathways for photosynthesis of simple organic compounds from CO₂ to be those shown in Figure 2.” (pp. 8-11, italics ours.) The figure referred to represents more than 40 distinct chemical reactions and a corresponding number of compounds. This diagram, of course, is far from representing the whole theory. Not only does it omit much of the detail, but it contains none of the quantitative considerations for predicting reaction rates, energy balances, and so on. The verbal description accompanying the figure, which also has little to say about the quantitative aspects, or the energetics, is over two pages in length-almost as long as our description of the chess-playing program.

Here we have a clearcut example of a theory of fundamental importance that has none of the parsimony we commonly associate with scientific theorizing. The answer to the question of how photosynthesis proceeds is decidedly long-winded-as is the answer to the question of how chess players find mating combinations. We are often satisfied with such long-winded answers because we believe that the phenomena are intrinsically complex, and that no brief theory will explain them in detail. We must adjust our expectations about the character of information-processing theories of human thinking to a similar level. Such theories, to the extent that they account for the details of the phenomena, will be highly specific and highly complex. We might call them “garrulous theories” in contrast with our more common models of parsimonious theories.
ELEMENTARY INFORMATION PROCESSES

We should like to carry the analogy with chemistry a step further. Part of our knowledge in chemistry-and a very important part for the experimental chemist-consists of vast catalogues of substances and reactions, not dissimilar in bulk to the compendium of information processes we are proposing. But, as we come to understand these substances and their reactions more fully, a second level of theory emerges that explains them (at least their general features) in a more parsimonious way. The substances, at this more basic level, become geometrical arrangements of particles from a small set of more fundamental substances-atoms and sub-molecules-held together by a variety of known forces whose effects can be estimated qualitatively and, in simple cases, quantitatively.

If we examine an information-processing theory like the checkmating program more closely, we find that it, too, is organized from a limited number of building blocks-a set of elementary information processes-and some composite processes that are compounded from the more elementary ones in a few characteristic ways. Let us try to describe these building blocks in general terms. First, we shall characterize the way in which symbols and structures of symbols are represented internally and held in memory. Then, we shall mention some of the principal elementary processes that alter these symbol structures.
Symbols, Lists and Descriptions

The smallest units of manipulable information in memory are symbol tokens,9 or symbol occurrences. It is postulated that tokens can be compared, and that comparison determines that the tokens are occurrences of the same symbol (symbol type), or that they are different. Symbol tokens are arranged in larger structures, called lists. A list is an ordered set, a sequence, of tokens. Hence, with every token on a list, except the last, there is associated a unique next token. Associated with the list as a whole is a symbol, its name. Thus, a list may be a sequence of symbols that are themselves names of lists-a list of lists. A familiar example of a list of symbols that all of us carry in memory is the alphabet. (Its name is "alphabet.") Another is the list of days of the week, in order-Monday is next to Sunday, and so on. Associations also exist between symbol types. An association is a two-termed relation, involving three symbols, one of which names the relation, the other two its arguments. "The colour of the apple is red" specifies an association between "apple" and "red" with the relation "colour." A symbol's associations describe that symbol.
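These three structures are simple enough to render concretely. The following minimal sketch expresses symbols, named lists and associations in modern Python; the dictionary-based memory and all the names in it are our own illustration, not the design of the original list-processing languages.

```python
# A minimal sketch of symbols, named lists, and associations.
# The representation (Python dicts) is an illustrative stand-in for the
# list-processing memory described in the text, not the original IPL design.

# Named lists: the name of each list is itself a symbol.
LISTS = {
    "alphabet": ["A", "B", "C", "D", "E", "F", "G"],
    "weekdays": ["Sunday", "Monday", "Tuesday", "Wednesday",
                 "Thursday", "Friday", "Saturday"],
}

# The basic comparison: two tokens are occurrences of the same symbol type
# exactly when they compare equal.
def same_symbol(token_a, token_b):
    return token_a == token_b

# An association is a two-termed relation involving three symbols:
# (relation, first argument) -> second argument.
ASSOCIATIONS = {}

def associate(relation, subject, value):
    """Store an association such as 'the colour of the apple is red'."""
    ASSOCIATIONS[(relation, subject)] = value

associate("colour", "apple", "red")
```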
Some Elementary Processes

A symbol, a list and an association are abstract objects. Their properties are defined by the elementary information processes that operate on them. One important class of such processes are the discrimination processes. The basic discrimination process, which compares symbols to determine whether or not they are identical, has already been mentioned. Pairs of compound structures-lists and sets of associations-are discriminated from each other by matching processes that apply the basic tests for symbol identity to symbols in corresponding positions in the two structures. For example, two chess positions can be discriminated by a matching process that compares the pieces standing on corresponding squares in the two positions. The outcome of the match might be a statement that "the two positions are identical except that the White King is on his Knight's square in the first but on his Rook's square in the second." Other classes of elementary information processes are those capable of creating or copying symbols, lists and associations. These processes are involved, for example, in fixating or memorizing symbolic materials presented to the sense organs-learning a tune. Somewhat similar information processes are capable of modifying existing symbolic structures by inserting a symbol into a list, by changing a term of an association (from "its colour is red" to "its colour is green"), or by deleting a symbol from a list. Still another class of elementary information processes finds information that is in structures stored in memory. We can think of such a process, schematically, as follows: to answer the question, "What letter follows 'g' in the alphabet?," a process must find the list in memory named "alphabet." Then, another process must search down that list until (using the match for identity of symbols) it finds a "g." Finally, a third process must find the symbol next to "g" in the list. Similarly, to answer the question, "What colour is the apple?" there must be a process capable of finding the second term of an association, given the first term and the name of the relation. Thus, there must be processes for finding named objects, for finding symbols on a list, for finding the next symbol on a list, and for finding the value of an attribute of an object. This list of elementary information processes is modest, yet provides an adequate collection of building blocks to implement the chess-playing theory as well as the other information-processing theories of thinking that have been constructed to date, including a general problem-solving theory, a theory of rote verbal learning, and several theories of concept formation and pattern recognition, among others.10
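The four finding processes just named can be sketched directly on top of the structures shown earlier; again, the function names are our own, chosen only to mirror the text.

```python
# Elementary "finding" processes, continuing the sketch above.

def find_named(name):
    """Find the object (here, a list) stored in memory under a given name."""
    return LISTS[name]

def find_on_list(lst, symbol):
    """Search down a list, applying the identity test, until the symbol is found."""
    for position, token in enumerate(lst):
        if same_symbol(token, symbol):
            return position
    return None

def find_next(lst, symbol):
    """Find the symbol next to a given one on a list."""
    position = find_on_list(lst, symbol)
    if position is None or position + 1 >= len(lst):
        return None
    return lst[position + 1]

def find_value(relation, subject):
    """Find the second term of an association, given the relation and first term."""
    return ASSOCIATIONS.get((relation, subject))

print(find_next(find_named("alphabet"), "C"))   # "What letter follows 'C'?" -> D
print(find_value("colour", "apple"))            # "What colour is the apple?" -> red
```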
Elementary Processes in the Chess Theory

A few examples will show how the mechanisms employed in the chess-playing theory can be realized by symbols, lists, associations and elementary information processes. The player's representation of the chess board is assumed to be a collection of associations: with each square is associated the symbol representing the man on that square, and symbols representing the adjoining squares in the several directions. Moves are similarly represented as symbols with which are associated the names of the squares from which and to which the move was made, the name of the piece moved, the name of the piece captured, if any, and so on. Similarly, the processes for manipulating these representations are compounded from the elementary processes already described. To make a move, for example, is to modify the internal representation of the board by deleting the association of the man to be moved with the square on which he previously stood, and creating the new association of that man with the square to which he moved; and, in case of a capture, by deleting also the association of the captured man with the square on which he stood. Another example: testing whether the King is in check involves finding the square associated with the King, finding adjoining squares along ranks, files and diagonals, and testing these squares for the presence of enemy men who are able to attack in the appropriate direction. (The latter is determined by associating with each man his type, and associating with each type of man the directions in which such men can legally be moved.) We see that, although the chess-playing theory contains several thousand program instructions, these are comprised of only a small number of elementary processes (far fewer than the number of elements in the periodic table). The elementary processes combine in a few simple ways into compound processes and operate on structures (lists and descriptions) that are constructed, combinatorially, from a single kind of building block-the symbol. There are two levels of theory: an "atomic" level, common to all the information-processing theories, of symbols, lists, associations and elementary processes, and a "macro-molecular" level, peculiar to each type of specialized human performance, of representations in the form of list structures and webs of associations, and of compound processes for manipulating these representations.
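The board representation and the move-making process described here fit in a few lines. The square names and piece symbols below are illustrative assumptions, and legality checking is omitted; the point is only that a move is nothing but the deletion and creation of associations.

```python
# A sketch of the board as a web of associations: square -> man, man -> square.

board = {"e1": "white-king", "d1": "white-queen", "e8": "black-king"}
square_of = {man: sq for sq, man in board.items()}

def make_move(man, to_square):
    """Making a move = deleting the old man/square association and creating
    the new one; a capture also deletes the captured man's association."""
    captured = board.get(to_square)
    if captured is not None:
        del square_of[captured]      # remove the captured man from the board
    del board[square_of[man]]        # delete the old association
    board[to_square] = man           # create the new association
    square_of[man] = to_square

make_move("white-queen", "d8")
print(square_of["white-queen"])      # -> d8
```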
Processes in Serial Pattern Recognition

A second example of how programs compounded from the elementary processes explain behaviour is provided by an information-processing theory of serial pattern recognition. Consider a sequence like:

ABMCDMEFM . . .

An experimental subject in the laboratory, asked to extrapolate the series, will, after a little thought, continue:

GHM, etc.
To see how he achieves this result, we examine the original sequence. First, it makes use of letters of the Roman alphabet. We can assume that the subject holds this alphabet in memory stored as a list, so that the elementary list process for finding the NEXT item on a list can find B, given A, or find S, given R, and so on. Now we note that any letter in the sequence, after the first three, is related to previous letters by relations NEXT and SAME. Specifically, if we organize the series into periods of three letters each: ABM CDM EFM
we see that:
1. The first letter in each period is NEXT in the alphabet to the second letter in the previous period.
2. The second letter in each period is NEXT in the alphabet to the first letter in that period.
3. The third letter in each period is the SAME as the corresponding letter in the previous period.

The relations of SAME and NEXT also suffice for a series like:

AAA CCC EEE
or for a number series like: 1 7 2 8 3 9 4 0
In the last case, the "alphabet" to which the relation of NEXT is applied is the list of digits, 0 to 9, and NEXT is applied circularly, i.e. after 9 comes 0 and then 1 again. Several closely related information-processing theories of human pattern recognition have been constructed using elementary processes for finding and generating the NEXT item in a list (see Feldman, Tonge and Kanter 1963; Laughery and Gregg 1962; and Simon and Kotovsky 1963). These theories have succeeded in explaining some of the main features of human behaviour in a number of standard laboratory tasks, including so-called binary choice tasks, and series-completion and symbol-analogy tasks from intelligence tests. The nature of the series-completion task has already been illustrated. In the binary choice experiment, the subject is confronted, one by one, with a sequence of tokens-each a "+" or "V," say. As each one is presented to him, he is asked what the next one will be. The actual sequence is, by construction, random. The evidence shows that, even when the subjects are told this, they rarely treat it as random. Instead, they behave as though they were trying to detect a serial pattern in the sequence and extrapolate it. They behave essentially like subjects faced by the series-completion task, and basically similar information-processing theories using the same elementary processes can explain both behaviours.
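The NEXT/SAME machinery is simple enough to sketch. The extrapolator below hard-codes the three-rule pattern description stated above for periods of three letters; the full theories cited also discover the description from the series, which this fragment does not attempt. The circular NEXT needed for the digit series is the same function applied to the list of digits.

```python
# A sketch of extrapolation by the NEXT/SAME rules stated in the text.

ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def nxt(alphabet, token):
    """NEXT, applied circularly: after the last item comes the first again."""
    return alphabet[(alphabet.index(token) + 1) % len(alphabet)]

def extrapolate(series, alphabet, periods=1):
    """Extend a series whose three-letter periods follow rules 1-3 above."""
    out = list(series)
    for _ in range(periods):
        previous = out[-3:]                 # the last complete period
        first = nxt(alphabet, previous[1])  # rule 1: NEXT to previous second letter
        second = nxt(alphabet, first)       # rule 2: NEXT to this period's first letter
        third = previous[2]                 # rule 3: SAME as previous third letter
        out.extend([first, second, third])
    return "".join(out)

print(extrapolate("ABMCDMEFM", ALPHABET))   # -> ABMCDMEFMGHM
```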
A BROADER VIEW OF THINKING PROCESSES

A closer look at the principal examples now extant of information-processing theories suggests that another level of theory is rapidly emerging, intermediate between the "atomic" level common to all the theories and the "macro-molecular" level idiosyncratic to each. It is clear that there is no prospect of eliminating all idiosyncratic elements from the individual theories. A theory to explain chess-playing performances must postulate memory structures and processes that are completely irrelevant to proving theorems in geometry, and vice versa.
On the other hand, it is entirely possible that human performances in different task environments may call on common components at more aggregative levels than the elementary processes. This, in fact, appears to be the case. The first information-processing theory that isolated some of these common components was called the General Problem Solver (Newell and Simon 1964).
Means-End Analysis

The General Problem Solver is a program organized to keep separate (1) problem-solving processes that, according to the theory, are possessed and used by most human beings of average intelligence when they are confronted with any relatively unfamiliar task environment, from (2) specific information about each particular task environment. The core of the General Problem Solver is an organization of processes for means-end analysis. The problem is defined by specifying a given situation (A), and a desired situation (B). A discrimination process incorporated in the system of means-end analysis compares A with B, and detects one or more differences (D) between them, if there are any. With each difference, there is associated in memory a set of operators (O), or processes, that are possibly relevant to removing differences of that kind. The means-end analysis program proceeds to try to remove the difference by applying, in turn, the relevant operators. Using a scheme of means-end analysis, a proof of a trigonometric identity like cos θ tan θ = sin θ might proceed like this: The right-hand side contains only the sine function; the left-hand side, other trigonometric functions as well. The operator that replaces tan θ by sin θ/cos θ will eliminate one of these. Applying it we get cos θ (sin θ/cos θ) = sin θ. The left-hand side still contains an extraneous function, cosine. The algebraic cancellation operator, applied to the two cosines, might remove this difference. We apply the operator, obtaining the identity sin θ = sin θ.
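The skeleton of means-end analysis can be exhibited apart from any particular task. The toy domain below (transforming one string into another), its difference tests and its operator table are our own illustration; GPS itself worked with far richer difference orderings and operator tables.

```python
# A schematic sketch of means-end analysis over a toy string-transformation task.

def differences(state, goal):
    """Discrimination: detect the kinds of difference between state and goal."""
    diffs = []
    if len(state) != len(goal):
        diffs.append("wrong-length")
    if any(a != b for a, b in zip(state, goal)):
        diffs.append("wrong-symbols")
    return diffs

# With each kind of difference is associated a set of possibly relevant operators.
OPERATORS = {
    "wrong-length": [lambda s, g: s[:len(g)],                         # truncate
                     lambda s, g: s + g[len(s):]],                    # extend
    "wrong-symbols": [lambda s, g: "".join(b for a, b in zip(s, g))], # re-copy span
}

def means_end(state, goal, depth=5):
    """Remove detected differences, one at a time, by applying in turn the
    operators associated with each difference."""
    if depth == 0 or not differences(state, goal):
        return state
    for difference in differences(state, goal):
        for operator in OPERATORS[difference]:
            new_state = operator(state, goal)
            if new_state != state:
                return means_end(new_state, goal, depth - 1)
    return state

print(means_end("ABCXY", "ABCDE"))   # -> ABCDE, one difference removed at a time
```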
Planning Process

Another class of general processes discovered in human problem-solving performances and incorporated in the General Problem Solver are planning processes. The essential idea in planning is that the representation of the problem situation is simplified by deleting some of the detail. A solution is now sought for the new, simplified, problem, and if one is found, it is used as a plan to guide the solution of the original problem, with the detail reinserted. Consider a simple problem in logic. Given: (1) "A," (2) "not A or B," (3) "if not C then not B"; to prove "C." To plan the proof, note that the first premise contains A, the second A and B, the third, B and C, and the conclusion, C. The plan might be to obtain B by combining A with (AB), then to obtain C by combining B with (BC). The plan will in fact work, but requires (2) to be transformed into "A implies B" and (3) into "B implies C," which transformations follow from the definitions of "or" and "if . . . then."
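The abstraction step in this example can be written out directly: each premise is reduced to the set of letters it mentions, and a plan is a chain of premises linking the given letter to the goal. The representation below is our own toy rendering of the example, not the actual GPS planning mechanism.

```python
# A sketch of planning by deleting detail, on the logic example above.

PREMISES = {1: {"A"}, 2: {"A", "B"}, 3: {"B", "C"}}   # premises, abstracted to letters

def plan(given, goal):
    """Chain premises: repeatedly pick one that shares a letter with what we
    already have and contributes a new letter, until the goal letter appears."""
    have, chain = {given}, []
    while goal not in have:
        step = next((i for i, letters in PREMISES.items()
                     if have & letters and not letters <= have), None)
        if step is None:
            return None          # no plan exists in the abstract problem
        chain.append(step)
        have |= PREMISES[step]
    return chain

print(plan("A", "C"))   # -> [2, 3]: obtain B from premise (2), then C from (3)
```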
Problem-Solving Organization

The processes for attempting sub-goals in the problem-solving theories and the exploration processes in the chess-playing theory must be guided and controlled by executive processes that determine what goal will be attempted next. Common principles for the organization of the executive processes have begun to appear in several of the theories. The general idea has already been outlined above for the chess-playing program. In this program the executive routine cycles between an exploration (search) phase and an evaluation (scan) phase. During the exploration phase, the available problem-solving processes are used to investigate sub-goals. The information obtained through this investigation is stored in such a way as to be accessible to the executive. During the evaluation phase, the executive uses this information to determine which of the existing sub-goals is the most promising and should be explored next. An executive program organized in this way may be called a search-scan scheme, for it searches an expanding tree of possibilities, which provides a common pool of information for scanning by its evaluative processes.11 The effectiveness of a problem-solving program appears to depend rather sensitively on the alternation of the search and scan phases. If search takes place in long sequences, interrupted only infrequently to scan for possible alternative directions of exploration, the problem solver suffers from stereotypy. Having initiated search in one direction, it tends to persist in that direction as long as the sub-routines conducting the search determine, locally, that the possibilities for exploration have not been exhausted. These determinations are made in a very decentralized way, and without benefit of the more global information that has been generated. On the other hand, if search is too frequently interrupted to consider alternative goals to the one being pursued currently, the exploration takes on an uncoordinated appearance, wandering indecisively among a wide range of possibilities. In both theorem-proving and chess-playing programs, extremes of decentralized and centralized control of search have shown themselves ineffective in comparison with a balanced search-scan organization.
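A search-scan executive amounts to keeping a common pool of sub-goals, each tagged with an evaluation, and alternating between expanding the currently most promising one (search) and re-consulting the pool (scan). The sketch below is a generic best-first search; the toy problem and the promise function are illustrative assumptions, not the chess program's actual evaluators.

```python
# A sketch of a search-scan executive as a best-first search over a growing
# tree of sub-goals. heapq keeps the "common pool" ordered by promise.

import heapq

def search_scan(start, expand, promise, is_goal, limit=1000):
    pool = [(-promise(start), start)]        # the common pool of information
    while pool and limit > 0:
        _, node = heapq.heappop(pool)        # scan: pick the most promising sub-goal
        if is_goal(node):
            return node
        for child in expand(node):           # search: explore that sub-goal
            heapq.heappush(pool, (-promise(child), child))
        limit -= 1
    return None

# Toy problem: reach 20 from 1, where each step doubles or adds 3.
result = search_scan(
    start=1,
    expand=lambda n: [n * 2, n + 3],
    promise=lambda n: -abs(20 - n),          # closer to 20 looks more promising
    is_goal=lambda n: n == 20,
)
print(result)   # -> 20
```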
Discrimination Trees

Common organizational principles are also emerging for the rote memory processes involved in almost all human performance. As a person tries to prove a theorem, say, certain expressions that he encounters along the way gradually become familiar to him and his ability to discriminate among them gradually improves. An information-processing theory (EPAM) was constructed several years ago to account for this and similar human behaviour in verbal learning experiments (e.g. learning nonsense syllables by the serial anticipation or paired associate methods) (see Feigenbaum 1964). This theory is able to explain, for instance, how familiarity and similarity of materials affect rates of learning. The essential processes in EPAM include processes for discriminating among compound objects by sorting them in a "discrimination tree"; and familiarization processes for associating pairs or short sequences of objects. Discrimination processes operate by applying sequences of tests to the stimulus objects, and sorting them on the basis of the test results-a sort of "twenty questions" procedure. The result of discrimination is to find a memory location where information is stored about objects that are similar to the one sorted. Familiarization processes create new compound objects out of previously familiar elements. Thus, during the last decade, the letter sequence "IPL" has become a familiar word (to computer programmers!) meaning "information processing language." The individual letters have been associated in this word. Similarly, the English alphabet, used by the serial pattern-recognizing processes, is a familiar object compounded from the letters arranged in a particular sequence. All sorts of additional information can be associated with an object, once familiarized (for example, the fact that IPLs organize symbols in lists can be associated with "IPL"). Because discrimination trees play a central role in EPAM, the program may also be viewed as a theory of pattern detection, and EPAM-like trees have been incorporated in certain information-processing theories of concept formation. It also now seems likely that the discrimination tree is an essential element in problem-solving theories like GPS, playing an important role in the gradual modification of the subject's behaviour as he familiarizes himself with the problem material.
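A growing discrimination net of this kind can be sketched compactly. Below, interior nodes test one letter position of a nonsense syllable and leaves store the familiarized item; whenever two items are confused at a leaf, a new test is grown to separate them. This is a deliberate simplification of EPAM, whose tests, images and familiarization processes were considerably richer.

```python
# A sketch of an EPAM-like discrimination tree over nonsense syllables.

class Node:
    def __init__(self):
        self.test_pos = None    # which letter position this node tests
        self.branches = {}      # test outcome -> child node
        self.image = None       # the item stored at a leaf

def sort_item(node, item):
    """Sort an item down the tree to the leaf where similar items are stored."""
    while node.test_pos is not None:
        node = node.branches.setdefault(item[node.test_pos], Node())
    return node

def familiarize(root, item):
    """Store an item; on confusion with a stored item, grow a new test."""
    leaf = sort_item(root, item)
    if leaf.image is None or leaf.image == item:
        leaf.image = item
        return
    old = leaf.image    # confusion: find a position where the two items differ
    pos = next(i for i in range(min(len(old), len(item))) if old[i] != item[i])
    leaf.test_pos, leaf.image = pos, None
    for stored in (old, item):
        leaf.branches[stored[pos]] = Node()
        leaf.branches[stored[pos]].image = stored

root = Node()
for syllable in ["DAX", "DAK", "BAX"]:   # a little verbal-learning experiment
    familiarize(root, syllable)
print(sort_item(root, "DAK").image)      # -> DAK
```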
CONCLUSION

Our survey shows that within the past decade a considerable range of human behaviours has been explained successfully by information-processing theories. We now know, for example, some of the central processes that are employed in solving problems, in detecting and extrapolating patterns, and in memorizing verbal materials. Information-processing theories explain behaviour at various levels of detail. In the theories now extant, at least three levels can be distinguished. At the most aggregative level are theories of complex behaviour in specific problem domains: proving theorems in logic or geometry, discovering checkmating combinations in chess. These theories tend to contain very extensive assumptions about the knowledge and skills possessed by the human beings who perform these activities, and about the way in which this knowledge and these skills are organized and represented internally. Hence each of these theories incorporates a rather extensive set of assumptions, and predicts behaviour only in a narrow domain.
At a second level, similar or identical information-processing mechanisms are common to many of the aggregative theories. Means-end analysis, planning, the search-scan scheme, and discrimination trees are general-purpose organizations for processing that are usable over a wide range of tasks. As the nature of these mechanisms becomes better understood, they, in turn, begin to serve as basic building blocks for the aggregative theories, allowing the latter to be stated in more parsimonious form, and exhibiting the large fraction of machinery that is common to all, rather than idiosyncratic to individual tasks. At the lowest, “atomic,” level, all the information-processing theories postulate only a small set of basic forms of symbolic representation and a small number of elementary information processes. The construction and successful testing of large-scale programs that simulate complex human behaviours provide evidence that a small set of elements, similar to those now postulated in information-processing languages, is sufficient for the construction of a theory of human thinking. Although none of the advances that have been described constitute explanations of human thought at the still more microscopic, physiological level, they open opportunities for new research strategies in physiological psychology. As the information-processing theories become more powerful and better validated, they disclose to the physiological psychologist the fundamental mechanisms and processes that he needs to explain. He need no longer face the task of building the whole long bridge from microscopic neurological and molecular structures to gross human behaviour, but can instead concentrate on bridging the much shorter gap from physiology to elementary information processes. The work of Lettvin, Maturana, McCulloch and Pitts on information processing in the frog’s eye (1959), and the work of Hubel and Wiesel on processing of visual information by the cat (1962) already provide some hints of the form this bridging operation may take.
NOTES

1. The best-known exponent of this radical behaviourist position is Professor B. F. Skinner. He has argued, for example, that "an explanation is the demonstration of a functional relationship between behavior and manipulable or controllable variables," in Wann (1964), p. 102.
2. A distinguished example of such a theory is Hebb's (1949) formulation in terms of "cell assemblies." Hebb does not, however, insist on an exclusively physiological base for psychological theory, and his general methodological position is not inconsistent with that taken here. See also Hebb (1958), Chap. 13.
3. A "move" means here a move by one player followed by a reply by his opponent. Hence to look ahead four or five moves is to consider sequences of eight or ten successive positions.
4. A general account of this program, with the results of some hand simulations, can be found in Simon and Simon (1962), pp. 425-9. The theory described there has subsequently been programmed and the hand-simulated findings confirmed on a computer.
5. This is perhaps the most important element in the strategy. It will be discussed further later.
6. The beginnings of such a compendium have already appeared. A convenient source for descriptions of a number of the information-processing theories is the collection by Feigenbaum and Feldman (1964).
7. Of course, even Newtonian mechanics is not at all this simple in structure. See Simon (1947), pp. 888-905.
8. Only a few of the characteristics of list-processing systems can be mentioned here. For a fuller account, see Newell and Simon (1963), especially pp. 273-376, 380-84, 419-24.
9. Evidence as to how information is symbolized in the brain is almost non-existent. If the reader is assisted by thinking of different symbols as different macromolecules, this metaphor is as good as any. A few physiologists think it may even be the correct explanation. See Hyden (1961), pp. 18-39. Differing patterns of neural activity will do as well. See Adey, Kador, Didio and Schindler (1963), pp. 259-81.
10. For examples, see Feigenbaum and Feldman (1964), Part 2.
11. Perhaps the earliest use of the search-scan scheme appeared in the Logic Theorist, the first heuristic theorem-proving program. See Newell and Simon (1964).
REFERENCES

Adey, W. R., Kador, R. T., Didio, J. and Schindler, W. J. (1963) "Impedance Changes in Cerebral Tissue Accompanying a Learned Discriminative Performance in the Cat," Experimental Physiology, 7.
Calvin, M. and Bassham, J. A. (1962) The Photosynthesis of Carbon Compounds. New York: W. A. Benjamin.
De Groot, A. (1964) Thought and Choice in Chess. The Hague: Mouton.
Feigenbaum, E. A. (1964) "The Simulation of Verbal Learning Behavior" in Feigenbaum and Feldman (1964).
Feigenbaum, E. A. and Feldman, J. (eds) (1964) Computers and Thought. New York: McGraw-Hill.
Feldman, J., Tonge, F. and Kanter, H. (1963) "Empirical Explorations of a Hypothesis-Testing Model of Binary Choice Behavior" in Hoggatt, A. C. and Balderston, F. E. (eds) Symposium on Simulation Models. Cincinnati: South-Western Publishing.
Hebb, D. O. (1949) The Organization of Behavior. New York: Wiley.
Hebb, D. O. (1958) Textbook of Psychology. Philadelphia: Saunders.
Hubel, D. H. and Wiesel, T. N. (1962) "Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex," Journal of Physiology, 160, pp. 106-54.
Hyden, Holger (1961) "Biochemical Aspects of Brain Activity" in Farber, S. M. and Wilson, R. H. L. (eds) Control of the Mind. New York: McGraw-Hill.
Laughery, K. R. and Gregg, L. W. (1962) "Simulation of Human Problem-Solving Behaviour," Psychometrika, 27.
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S. and Pitts, W. H. (1959) "What the Frog's Eye Tells the Frog's Brain," Proceedings of the Institute of Radio Engineers, 47, pp. 1940-51.
Newell, A. and Simon, H. A. (1963) "Computers in Psychology" in Luce, Bush and Galanter (eds) Handbook of Mathematical Psychology, vol. 1. New York: Wiley.
Newell, A. and Simon, H. A. (1964) "Empirical Explorations with the Logic Theory Machine: A Case Study in Heuristics" in Feigenbaum and Feldman (1964).
Newell, A. and Simon, H. A. (1964) "GPS, A Program that Simulates Human Thought" in Feigenbaum and Feldman (1964).
Pauling, L. (1960) The Nature of the Chemical Bond. Ithaca: Cornell University Press, 3rd ed.
Simon, H. A. (1947) "The Axioms of Newtonian Mechanics," Phil. Mag., Ser. 7, 38 (December).
Simon, H. A. and Kotovsky, K. (1963) "Human Acquisition of Concepts for Sequential Patterns," Psychological Review, 70 (November), pp. 534-46.
Simon, H. A. and Simon, P. A. (1962) "Trial and Error Search in Solving Difficult Problems: Evidence from the Game of Chess," Behavioral Science, 7 (October).
Wann, T. W. (ed.) (1964) Behaviorism and Phenomenology. Chicago: University of Chicago Press.
3 Why Computers May Never Think Like People

Hubert and Stuart Dreyfus

From "Why Computers May Never Think Like People," Dreyfus, Hubert and Stuart, Technology Review, v. 89, p. 42(20), January 1986. Reprinted with permission from MIT's Technology Review Magazine, copyright 1986.

Scientists who stand at the forefront of artificial intelligence (AI) have long dreamed of autonomous "thinking" machines that are free of human control. And now they believe we are not far from realizing that dream. As Marvin Minsky, a well-known AI professor at MIT, recently put it: "Today our robots are like toys. They do only the simple things they're programmed to. But clearly they're about to cross the edgeless line past which they'll do the things we are programmed to." Patrick Winston, Minsky's successor as head of the MIT AI Laboratory, agrees: "Just as the Wright Brothers at Kitty Hawk in 1903 were on the right track to the 747s of today, so artificial intelligence, with its attempt to formalize common-sense understanding, is on the way to fully intelligent machines." Encouraged by such optimistic pronouncements, the U.S. Department of Defense (DOD) is sinking millions of dollars into developing fully autonomous war machines that will respond to a crisis without human intervention. Business executives are investing in "expert" systems whose wisdom they hope will equal, if not surpass, that of their top managers. And AI entrepreneurs are talking of "intelligent systems" that will perform better than we can-in the home, in the classroom, and at work. But no matter how many billions of dollars the Defense Department or any other agency invests in AI, there is almost no likelihood that scientists can develop machines capable of making intelligent decisions. After 25 years of research, AI has failed to live up to its promise, and there is no evidence that it ever will. In fact, machine intelligence will probably never replace human intelligence simply because we ourselves are not "thinking machines." Human beings have an intuitive intelligence that "reasoning" machines simply cannot match. Military and civilian managers may see this obvious shortcoming and refrain from deploying such "logic" machines. However, once various groups have invested vast sums in developing these machines, the temptation to justify this expense by installing questionable AI technologies will be enormous. The dangers of turning over the battlefield completely to machines are obvious. But it would also be a mistake to replace skilled air-traffic controllers, seasoned business managers, and master teachers with computers that cannot come close to their level of expertise. Computers that "teach" and systems that render "expert" business decisions could eventually produce a generation of students and managers who have no faith in their own intuition and expertise. We wish to stress that we are not Luddites. There are obvious tasks for which computers are appropriate and even indispensable. Computers are more deliberate, more precise, and less prone to exhaustion and error than the most conscientious human being. They can also store, modify, and tap vast files of data more quickly and accurately than humans can. Hence, they can be used as valuable tools in many areas. As word processors and telecommunication devices, for instance, computers are already changing our methods of writing and our notions of collaboration. However, we believe that trying to capture more sophisticated skills within the realm of electronic circuits-skills involving not only calculation but also judgment-is a dangerously misguided effort and ultimately doomed to failure.
ACQUIRING HUMAN KNOW-HOW

Most of us know how to ride a bicycle. Does that mean we can formulate specific rules to teach someone else how to do it? How would we explain the difference between the feeling of falling over and the sense of being slightly off-balance when turning? And do we really know, until the situation occurs, just what we would do in response to a certain wobbly feeling? No, we don't. Most of us are able to ride a bicycle because we possess something called "know-how," which we have acquired from practice and sometimes painful experience. That know-how is not accessible to us in the form of facts and rules. If it were, we could say we "know that" certain rules produce proficient bicycle riding. There are innumerable other aspects of daily life that cannot be reduced to "knowing that." Such experiences involve "knowing how." For example, we know how to carry on an appropriate conversation with family, friends, and strangers in a wide variety of contexts-in the office, at a party, and on the street. We know how to walk. Yet the mechanics of walking on two legs are so complex that the best engineers cannot come close to reproducing them in artificial devices. This kind of know-how is not innate, as is a bird's skill at building a nest. We have to learn it. Small children learn through trial and error, often by imitating those who are proficient. As adults acquire a skill through instruction and experience, they do not appear to leap suddenly from "knowing that"-a knowledge guided by rules-to experience-based know-how. Instead, people usually pass through five levels of skill: novice, advanced beginner, competent, proficient, and expert. Only when we understand this dynamic process can we ask how far the computer could reasonably progress.
During the novice stage, people learn facts relevant to a particular skill and rules for action that are based on those facts. For instance, car drivers learning to operate a stick shift are told at what speed to shift gears and at what distance-given a particular speed-to follow other cars. These rules ignore context, such as the density of traffic or the number of stops a driver has to make. Similarly, novice chess players learn a formula for assigning pieces point values independent of their position. They learn the rule: "Always exchange your pieces for the opponent's if the total value of the pieces captured exceeds that of pieces lost." Novices generally do not know that they should violate this rule in certain situations. After much experience in real situations, novices reach the advanced-beginner stage. Advanced-beginner drivers pay attention to situational elements, which cannot be defined objectively. For instance, they listen to engine sounds when shifting gears. They can also distinguish between the behavior of a distracted or drunken driver and that of the impatient but alert driver. Advanced-beginner chess players recognize and avoid overextended positions. They can also spot situational clues such as a weakened king's side or a strong pawn structure. In all these cases, experience is immeasurably more important than any form of verbal description. Like the training wheels on a child's first bicycle, initial rules allow beginners to accumulate experience. But soon they must put the rules aside to proceed. For example, at the competent stage, drivers no longer merely follow rules; they drive with a goal in mind. If they wish to get from point A to point B very quickly, they choose their route with an eye to traffic but not much attention to passenger comfort. They follow other cars more closely than they are "supposed" to, enter traffic more daringly, and even break the law. Competent chess players may decide, after weighing alternatives, that they can attack their opponent's king. Removing pieces that defend the enemy king becomes their overriding objective, and to reach it these players will ignore the lessons they learned as beginners and accept some personal losses. A crucial difference between beginners and more competent performers is their level of involvement. Novices and advanced beginners feel little responsibility for what they do because they are only applying learned rules; if they foul up, they blame the rules instead of themselves. But competent performers, who choose a goal and a plan for achieving it, feel responsible for the result of their choices. A successful outcome is deeply satisfying and leaves a vivid memory. Likewise, disasters are not easily forgotten.
THE INTUITION OF EXPERTS

The learner of a new skill makes conscious choices after reflecting on various options. Yet in our everyday behavior, this model of decision making-the detached, deliberate, and sometimes agonizing selection among alternatives-is the exception rather than the rule. Proficient performers do not rely on detached deliberation in going about their tasks. Instead, memories of similar experiences in the past seem to trigger plans like those that worked before. Proficient performers recall whole situations from the past and apply them to the present without breaking them down into components or rules. For instance, a boxer seems to recognize the moment to begin an attack not by following rules and combining various facts about his body's position and that of his opponent. Rather, the whole visual scene triggers the memory of similar earlier situations in which an attack was successful. The boxer is using his intuition, or know-how. Intuition should not be confused with the reenactment of childhood patterns or any of the other unconscious means by which human beings come to decisions. Nor is guessing what we mean by intuition. To guess is to reach a conclusion when one does not have enough knowledge or experience to do so. Intuition or know-how is the sort of ability that we use all the time as we go about our everyday tasks. Ironically, it is an ability that our tradition has acknowledged only in women and judged inferior to masculine rationality. While using their intuition, proficient performers still find themselves thinking analytically about what to do. For instance, when proficient drivers approach a curve on a rainy day, they may intuitively realize they are going too fast. They then consciously decide whether to apply the brakes, remove their foot from the accelerator, or merely reduce pressure on the accelerator. Proficient marketing managers may intuitively realize that they should reposition a product. They may then begin to study the situation, taking great pride in the sophistication of their scientific analysis while overlooking their much more impressive talent-that of recognizing, without conscious thought, the simple existence of the problem. The final skill level is that of expert. Experts generally know what to do because they have a mature and practiced understanding. When deeply involved in coping with their environment, they do not see problems in some detached way and consciously work at solving them. The skills of experts have become so much a part of them that they need be no more aware of them than they are of their own bodies. Airplane pilots report that as novices they felt they were flying their planes, but as experienced pilots they simply experience flying itself. Grand masters of chess, engrossed in a game, are often oblivious to the fact that they are manipulating pieces on a board. Instead, they see themselves as participants in a world of opportunities, threats, strengths, weaknesses, hopes, and fears. When playing rapidly, they sidestep dangers as automatically as teenagers avoid missiles in a familiar video game. One of us, Stuart, knows all too well the difference between expert and merely competent chess players; he is stuck at the competent level. He took up chess as an outlet for his analytic talent in mathematics, and most of the other players on his college team were also mathematicians. At some point, a few of his teammates who were not mathematicians began to play fast five- or ten-minute games of chess, and also began eagerly to replay the great games of the grand masters. But Stuart and his mathematical colleagues resisted because fast chess didn't give them the time to figure out what to do. They also felt that they could learn nothing from the grandmaster games, since the record of those games seldom if ever provided specific rules and principles.
Some of his teammates who played fast chess and studied grand-master games absorbed a great deal of concrete experience and went on to become chess masters. Yet Stuart and his mathematical friends never got beyond the competent level. Students of math may predominate among chess enthusiasts, but a truck driver is as likely as a mathematician to be among the world's best players. Stuart says he is glad that his analytic approach to chess stymied his progress because it helped him to see that there is more to skill than reasoning. When things are proceeding normally, experts do not solve problems by reasoning; they do what normally works. Expert air-traffic controllers do not watch blips on a screen and deduce what must be going on in the sky. Rather, they "see" planes when they look at their screens and they respond to what they see, not by using rules but as experience has taught them to. Skilled outfielders do not take the time to figure out where a ball is going. Unlike novices, they simply run to the right spot. In The Brain, Richard Restak quotes a Japanese martial artist as saying, "There can be no thought, because if there is thought, there is a time of thought and that means a flaw. . . . If you take the time to think, 'I must use this or that technique,' you will be struck while you are thinking." We recently performed an experiment in which an international chess master, Julio Kaplan, had to add numbers at the rate of about one per second while playing five-second-a-move chess against a slightly weaker but master-level player. Even with his analytical mind apparently jammed by adding numbers, Kaplan more than held his own against the master in a series of games. Deprived of the time necessary to see problems or construct plans, Kaplan still produced fluid and coordinated play. As adults acquire skills, what stands out is their progression from the analytic behavior of consciously following abstract rules to skilled behavior based on unconsciously recognizing new situations as similar to remembered ones. Conversely, small children initially understand only concrete examples and gradually learn abstract reasoning. Perhaps it is because this pattern in children is so well known that adult intelligence is so often misunderstood. By now it is evident that there is more to intelligence than calculative rationality. In fact, experts who consciously reason things out tend to regress to the level of a novice or, at best, a competent performer. One expert pilot described an embarrassing incident that illustrates this point. Once he became an instructor, his only opportunity to fly the four-jet KC-135s at which he had once been expert was during the return flights he made after evaluating trainees. He was approaching the landing strip on one such flight when an engine failed. This is technically an emergency, but an experienced pilot will effortlessly compensate for the pull to one side. Being out of practice, our pilot thought about what to do and then overcompensated. He then consciously corrected himself, and the plane shuddered violently as he landed. Consciously using rules, he had regressed to flying like a beginner. This is not to say that deliberative rationality has no role in intelligence. Tunnel vision can sometimes be avoided by a type of detached deliberation. Focusing on aspects of a situation that seem relatively unimportant allows another perspective to spring to mind. We once heard an Israeli fighter pilot recount how deliberative rationality may have saved his life by rescuing him from tunnel vision. Having just vanquished an expert opponent, he found himself taking on another member of the enemy squadron who seemed to be brilliantly eluding one masterful ploy after another. Things were looking bad until he stopped following his intuition and deliberated. He then realized that his opponent's surprising maneuvers were really the predictable, rule-following behavior of a beginner. This insight enabled him to vanquish the pilot.
IS INTELLIGENCE BASED ON FACTS?

Digital computers, which are basically complicated structures of simple on-off switches, were first used for scientific calculation. But by the end of the fifties, researchers such as Allen Newell and Herbert Simon, working together at the Rand Corp., began to exploit the idea that computers could manipulate general symbols. They saw that one could use symbols to represent elementary facts about the world and rules to represent relationships between the facts. Computers could apply these rules and make logical inferences about the facts. For instance, a programmer might give a computer rules about how cannibals like to eat missionaries, and facts about how many cannibals and missionaries must be ferried across a river in one boat that carries only so many people. The computer could then figure out how many trips it would take to get both the cannibals and the missionaries safely across the river. Newell and Simon believed that computers programmed with such facts and rules could, in principle, solve problems, recognize patterns, understand stories, and indeed do anything that an intelligent person could do. But they soon found that their programs were missing crucial aspects of problem solving, such as the ability to separate relevant from irrelevant operations. As a result, the programs worked in only a very limited set of cases, such as in solving puzzles and proving theorems of logic. In the late sixties, researchers at MIT abandoned Newell and Simon's approach, which was based on imitating peoples' reports of how they solved problems, and began to work on any processing methods that could give computers intelligence. They recognized that to solve "real-world" problems the computer had to somehow simulate real-world understanding and intuition. In the introduction to Semantic Information Processing, a collection of his students' Ph.D. theses, Marvin Minsky describes the heart of the MIT approach: "If we . . . ask . . . about the common everyday structures-that which a person needs to have ordinary common sense-we will find first a collection of indispensable categories, each rather complex: geometrical and mechanical properties of things and of space; uses and properties of a few thousand objects; hundreds of 'facts' about hundreds of people; thousands of facts about tens of people; tens of facts about thousands of people; hundreds of facts about hundreds of organizations . . . I therefore feel that a machine will quite critically need to acquire on the order of a hundred thousand elements of knowledge in order to behave with reasonable sensibility in ordinary situations. A million, if properly organized, should be enough for a very great intelligence." However, Minsky's students encountered the same problem that had plagued Newell and Simon: each program worked only in its restricted specialty and could not be applied to other problems. Nor did the programs have any semantics-that is, any understanding of what their symbols meant. For instance, Daniel Bobrow's STUDENT program, which was designed to understand and solve elementary algebraic story problems, interpreted the phrase "the number of times I went to the movies" as the product of the two variables "number of" and "I went to the movies." That's because, as far as the program knew, "times" was a multiplicative operator linking the two phrases. The restricted, ad hoc character of such work is even more striking in a program called ELIZA, written by MIT computer science professor Joseph Weizenbaum. Weizenbaum set out to show just how much apparent intelligence one could get a computer to exhibit without giving it any real understanding at all. The result was a program that imitated a therapist using simple tricks such as turning statements into questions: it responded to "I'm feeling sad" with "Why are you feeling sad?" When the program couldn't find a stock response, it printed out statements such as "Tell me about your father." The remarkable thing was that people were so easily fooled by these tricks. Weizenbaum was appalled when some people divulged their deepest feelings to the computer and asked others to leave the room while they were using it. One of us, Hubert, was eager to see a demonstration of the notorious program, and he was delighted when Weizenbaum invited him to sit at the console and interact with ELIZA. Hubert spoiled the fun, however. He unintentionally exposed how shallow the trickery really was by typing, "I'm feeling happy," and then correcting himself by typing, "No, elated." At that point, the program came back with the remark, "Don't be so negative." Why? Because it had been programmed to respond with that rebuke whenever there was a "no" in the input.
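The tricks at work in such a program are worth seeing in code. The fragment below is our own reconstruction of the kind of keyword matching described, not Weizenbaum's actual program; it merely reproduces the two behaviors recounted above.

```python
# A sketch of ELIZA-style keyword tricks: no understanding, only pattern matching.

import re

def eliza_reply(statement):
    s = statement.lower().strip()
    # The rebuke Hubert triggered: fire whenever "no" appears in the input.
    if re.search(r"\bno\b", s):
        return "Don't be so negative."
    # Turn "I'm feeling X" statements into questions.
    match = re.match(r"i'?m feeling (\w+)", s)
    if match:
        return f"Why are you feeling {match.group(1)}?"
    # Otherwise fall back on a stock response.
    return "Tell me about your father."

print(eliza_reply("I'm feeling sad"))     # -> Why are you feeling sad?
print(eliza_reply("No, elated."))         # -> Don't be so negative.
print(eliza_reply("It is a nice day."))   # -> Tell me about your father.
```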
MICROWORLDS VERSUS THE REAL WORLD

It took about five years for the shallowness of Minsky's students' programs to become apparent. Meanwhile, Hubert published a book, What Computers Can't Do, which asserted that AI research had reached a dead end since it could not come up with a way to represent general common-sense understanding. But just as What Computers Can't Do went to press in 1970, Minsky and Seymour Papert, also a professor at MIT, developed a new approach to AI. If one could not deal systematically with common-sense knowledge all at once, they asked, then why not develop methods for dealing systematically with knowledge in isolated sub-worlds and build gradually from that? Shortly after that, MIT researchers hailed a computer program by graduate student Terry Winograd as a "major advance" in getting computers to understand human language. The program, called SHRDLU, simulated on a TV screen a robot arm that could move a set of variously shaped blocks. The program allowed a person to engage in a dialogue with the computer, asking questions, making statements, and issuing commands within this simple world of movable blocks. The program relied on grammatical rules, semantics, and facts about blocks. As Winograd cautiously claimed, SHRDLU was a "computer program which 'understands' language in a limited domain." Winograd achieved success in this restricted domain, or "microworld," because he chose a simple problem carefully. Minsky and Papert believed that by combining a large number of these microworlds, programmers could eventually give computers real-life understanding. Unfortunately, this research confuses two domains, which we shall distinguish as "universe" and "world." A set of interrelated facts may constitute a "universe" such as the physical universe, but it does not constitute a "world" such as the world of business or theater. A "world" is an organized body of objects, purposes, skills, and practices that make sense only against a background of common human concerns. These "sub-worlds" are not isolable physical systems. Rather, they are specific elaborations of a whole, without which they could not exist. If Minsky and Papert's microworlds were true sub-worlds, they would not have to be extended and combined to encompass the everyday world, because each one would already incorporate it. But since microworlds are only isolated, meaningless domains, they cannot be combined and extended to reflect everyday life. Because scientists failed to ask what a "world" is, another five-year period of AI research ended in stagnation. Winograd himself soon gave up the attempt to generalize the techniques SHRDLU used. "The AI programs of the late sixties and early seventies are much too literal, built up of the bricks and mortar provided by the words." From the late seventies to the present, AI has been wrestling unsuccessfully with what is called the common-sense knowledge problem: how to store and gain access to all the facts human beings seem to know. This problem has kept AI from even beginning to fulfill the predictions Minsky and Simon made in the mid-sixties: that within 20 years computers would be able to do everything humans can.
CAN COMPUTERS COPE WITH CHANGE?

If a machine is to interact intelligently with people, it has to be endowed with an understanding of human life. What we understand simply by virtue of being human-that insults make us angry, that moving physically forward is easier than moving backward-all this and much more would have to be programmed into the computer as facts and rules. As AI workers put it, they must give the computer our belief system. This, of course, presumes that human understanding is made up of beliefs that can be readily collected and stored as facts. Even if we assume that this is possible, an immediate snag appears: we cannot program computers for context. For instance, we cannot program a computer to know simply that a car is going "too fast." The machine must be programmed in a way free of interpretation-we must stipulate that the car is going "20 miles an hour," for example. Also, computers know what to do only by reference to precise rules, such as "shift to second at 20 miles an hour." Computer programmers cannot use common-sense rules, such as "under normal conditions, shift to second at about 20 miles an hour." Even if all the facts were stored in a context-free form, the computer still couldn't use them because it would be unable to draw on just the facts or rules that are relevant in each particular context. For example, a general rule of chess is that you should trade material when you're ahead in the value of the pieces on the board. However, you should not apply that rule if the opposing king is much more centrally located than yours, or when you are attacking the enemy king. And there are exceptions to each of these exceptions. It is virtually impossible to include all the possible exceptions in a program and do so in such a way that the computer knows which exception to use in which case. In the real world, any system of rules has to be incomplete. The law, for instance, always strives for completeness but never achieves it. "Common law" helps, for it is based more on precedents than on a specific code. But the sheer number of lawyers in business tells us that it is impossible to develop a code of law so complete that all situations are unambiguously covered. To explain our own actions and rules, humans must eventually fall back on everyday practices and simply say, "This is what one does." In the final analysis, all intelligent behavior must hark back to our sense of what we are. We can never explicitly formulate this in clear-cut rules and facts; therefore, we cannot program computers to possess that kind of know-how. Nor can we program them to cope with changes in everyday situations. AI researchers have tried to develop computer programs that describe a normal sequence of events as they unfold. One such script, for instance, details what happens when someone goes to a restaurant. The problem is that so many unpredictable events can occur-one can receive an emergency telephone call or run into an acquaintance-that it's virtually impossible to predict how different people will respond. It all depends on what else is going on and what their specific purpose is. Are these people there to eat, to hobnob with friends, to answer phone calls, or to give the waiters a hard time? To make sense of behavior in restaurants, one has to understand not only what people typically do in eating establishments but why they do it. Thus, even if programmers could manage to list all that is possibly relevant in typical restaurant dining, computers could not use the information because they would have no understanding of what is actually relevant to specific customers.
THINKING WITH IMAGES, NOT WORDS Experimental psychologists have shown that people actually use images, not descriptions as computers do, to understand and respond to some situations. Humans often think by forming images and comparing them holistically. This proc-
40
KNOWLEDGE MANAGEMENT TOOLS
ess is quite different from the logical, step-by-step operations that logic machines perform. For instance, human beings use images to predict how certain events will turn out. If people know that a small box is resting on a large box, they can imagine what would happen if the large box were moved. If they see that the small box is tied to a door, they can also imagine what would result if someone were to open the door. A computer, however, must be given a list of facts about boxes, such as their size, weight, and frictional coefficients, as well as information about how each is affected by various kinds of movements. Given enough precise information about boxes and strings, the computer can deduce whether the small box will move with the large one under certain conditions. People also reason things out in this explicit, step-by-step way-but only if they must think about relationships they have never seen and therefore cannot imagine. At present, computers have difficulty recognizing images. True, they can store an image as a set of dots and then rotate the set of dots so that a human designer can see the object from any perspective. But to know what a scene depicts, a computer must be able to analyze it and recognize every object. Programming a computer to analyze a scene has turned o u t to be very difficult. Such programs require a great deal of computation, and they work only in special cases with objects whose characteristics the computer has been programmed to recognize in advance. But that is just the beginning of the problem. The computer can make inferences only from lists of facts. It’s as if to read a newspaper you had to spell out each word, find its meaning in the dictionary, and diagram every sentence, labeling all the parts of speech. Brains do not seem to decompose either language or images this way, but logic machines have no choice. They must break down images into the objects they contain-and then into descriptions of those objects’ features-before drawing any conclusions. However, when a picture is converted into a description, much information is lost. In a family photo, for instance, one can see immediately which people are between, behind, and in front of which others. The programmer must list all these relationships for the computer, or the machine must go through the elaborate process of deducting these relationships each time the photo is used. Some A1 workers look for help from parallel processors, machines that can do many things at once and hence make millions of inferences per second. But this appeal misses the point: that human beings seem to be able to form and compare images in a way that cannot be captured by any number of procedures that operate on descriptions. Take, for example, face recognition. People can not only form an image of a face, but they can also see the similarity between one face and another. Sometimes the similarity will depend on specific shared features, such as blue eyes and heavy beards. A computer, if it has been programmed to abstract such features from a picture of a face, could recognize this sort of similarity. However, a computer cannot recognize emotions such as anger in facial expressions, because we know of no way to break down anger into elementary sym-
Therefore, logic machines cannot see the similarity between two faces that are angry. Yet human beings can discern the similarity almost instantly.

Many AI theorists are convinced that human brains unconsciously perform a series of computations to perceive such subtleties. While no evidence for this mechanical model of the brain exists, these theorists take it for granted because it is the way people proceed when they are reflecting consciously. To such theorists, any alternative explanation appears mystical and therefore anti-scientific.

But there is another possibility. The brain, and therefore the mind, could still be explained in terms of something material. But it does not have to be an information-processing machine. Other physical systems can detect similarity without using any descriptions or rules at all. These systems are known as holograms.
IS THE MIND LIKE A HOLOGRAM?

An ordinary hologram works by taking a picture of an object using two beams of laser light, one of which is reflected off the object and one of which shines directly onto film. When the two beams meet, they create an interference pattern like that produced by the waves from several pebbles thrown into a pond. The light waves form a specific pattern of light and dark regions. A photographic plate records this interference pattern, thus storing a representation of the object. In ordinary light, the plate just looks blurry, a uniform silvery gray. But if the right frequency of light is projected into it, the recorded pattern of light and dark shapes the light into a replica of the object. This replica appears three-dimensional: we can view different sides of it as we change position.

What first attracted neuropsychologists to the hologram was that it really is holistic: any small piece of the blur on the photographic plate contains the whole scene. For example, if you cut one corner off a hologram of a table and shine a laser beam through what remains, you do not see an image of a table with a corner missing. The whole table is still there but with fuzzier edges. Certain areas of the brain also have this property. When a piece is cut out, a person may lose nothing specific from vision, for example. Instead, that person may see everything less distinctly.

Holograms have another mindlike property: they can be used for associative memory. If one uses a single hologram to record two different scenes and then bounces laser light off one of the scenes, an image of the other will appear.

In our view, the most important property of holograms is their ability to detect similarity. For example, if we made a hologram of this page and then made a hologram of one of the letters on the page, say the letter F, shining a light through the two holograms would reveal an astonishing effect: a black field with bright spots wherever the letter F occurs on the page. Moreover, the brightest spots would indicate the Fs with the greatest similarity to the F we used to make our hologram. Dimmer spots would appear where there are imperfect or slightly rotated versions of the F. Thus, a hologram can not only identify objects; it can also recognize similarity between them. Yet it employs no descriptions or rules.
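The bright-spot effect described above is, in digital terms, a cross-correlation of the page with the letter. The sketch below illustrates the analogous numerical operation, not the authors’ optical apparatus; the toy 3 × 3 “letter” is invented for the example:

import numpy as np
from scipy.signal import correlate2d

# A toy "page": a blank image with two copies of a small letter-like template
template = np.array([[1, 1, 1],
                     [1, 1, 0],
                     [1, 0, 0]])
page = np.zeros((12, 12))
page[1:4, 1:4] = template      # first occurrence
page[7:10, 6:9] = template     # second occurrence

# Correlating the page with the template plays the role of shining light
# through the two holograms: the output is bright wherever the letter sits,
# and merely dim where the match is only partial.
brightness = correlate2d(page, template, mode="same")
print(np.argwhere(brightness == template.sum()))   # the two locations light up brightest

No list of the letter’s features is ever consulted: the whole pattern is compared with the whole page at once, which is the holistic behavior the authors find suggestive.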
The way a hologram can instantly pick out a specific letter on a page is reminiscent of the way people pick out a familiar face from a crowd. It is possible that we distinguish the familiar face from all the other faces by processing rules about objectively identifiable features. But we would have to examine each face in the crowd, detect its features, and compare them with lists of our acquaintances’ features. It is much more plausible that our minds work on some variation of the holistic model. While the brain obviously does not contain lasers or use light beams, some scientists have suggested that neurons could process incoming stimuli using interference patterns like those of a hologram.

However, the human mind seems to have an ability that far transcends current holographic techniques: the remarkable ability to recognize whole meaningful patterns without decomposing them into features. Unlike holography, our mind can sometimes detect faces in a crowd that have expressions unlike any we have previously seen on those faces. We can also pick out familiar faces that have changed dramatically because of the growth of a beard or the ravages of time.

We take no stand on the question of whether the brain functions holographically. We simply want to make clear that the information-processing computer is not the only physical system that can exhibit mindlike properties. Other devices may provide closer analogies to the way the mind actually works.

Given the above considerations, what level of skill can we expect logic machines to reach? Since we can program computers with thousands of rules combining hundreds of thousands of features, the machines can become what might be thought of as expert novices in any well-structured and well-understood domain. As long as digital computers’ ability to recognize images and reason by analogy remains a vague promise, however, they will not be able to approach the way human beings cope with everyday reality.

Despite their failure to capture everyday human understanding in computers, AI scientists have developed programs that seem to reproduce human expertise within a specific, isolated domain. The programs are called expert systems. In their narrow areas, such systems perform with impressive competence.

In his recent book on “fifth-generation” computers, Edward Feigenbaum, a professor at Stanford, spells out the goal of expert systems: “In the kind of intelligent systems envisioned by the designers of the Fifth Generation, speed and processing power will be increased dramatically. But more important, the machines will have reasoning power: they will automatically engineer vast amounts of knowledge to serve whatever purpose human beings propose, from medical diagnosis to product design, from management decisions to education.” The knowledge engineers claim to have discovered that all a machine needs to behave like an expert in these restricted domains are some general rules and lots of very specific knowledge.

But can these systems really be expert? If we agree with Feigenbaum that “almost all thinking that professionals do is done by reasoning,” and that each expert builds up a “repertory of working rules of thumb,” the answer is yes. Given their speed and precision, computers should be as good as or better than people at following rules for deducing conclusions. Therefore, to build an expert system, a programmer need only extract those rules and program them into a computer.
JUST HOW EXPERT ARE EXPERT SYSTEMS?

However, human experts seem to have trouble articulating the principles on which they allegedly act. For example, when Arthur Samuel at IBM decided to write a program for playing checkers in 1947, he tried to elicit “heuristic” rules from checkers masters. But nothing the experts told him allowed him to produce master play. So Samuel supplemented these rules with a program that relies blindly on its memory of past successes to improve its current performance. Basically, the program chooses what moves to make based on rules and a record of all past positions.

This checkers program is one of the best expert systems ever built. But it is no champion. Samuel says the program “is quite capable of beating any amateur player and can give better players a good contest.” It did once defeat a state champion, but the champion turned around and defeated the program in six mail games. Nonetheless, Samuel still believes that checkers champions rely on heuristic rules. Like Feigenbaum, he simply thinks that the champions are poor at recollecting their compiled rules: “The experts do not know enough about the mental processes involved in playing the game.”

INTERNIST-1 is an expert system highly touted for its ability to make diagnoses in internal medicine. Yet according to a recent evaluation of the program published in The New England Journal of Medicine, this program misdiagnosed 18 out of a total of 43 cases, while clinicians at Massachusetts General Hospital misdiagnosed 15. Panels of doctors who discussed each case misdiagnosed only 8. (Biopsies, surgery, and autopsies were used to establish the correct diagnosis for each case.) The evaluators found that “the experienced clinician is vastly superior to INTERNIST-1 in the ability to consider the relative severity and independence of the different manifestations of disease and to understand the . . . evolution of the disease process.” The journal also noted that this type of systematic evaluation was “virtually unique in the field of medical applications of artificial intelligence.”

In every area of expertise, the story is the same: the computer can do better than the beginner and can even exhibit useful competence, but it cannot rival the very experts whose facts and supposed rules it is processing with incredible speed and accuracy. Why? Because the expert is not following any rules. While a beginner makes inferences using rules and facts just like a computer, the expert intuitively sees what to do without applying rules. Experts must regress to the novice level to state the rules they still remember but no longer use. No amount of rules and facts can substitute for the know-how experts have gained from experience in tens of thousands of situations. We predict that in no domain in which people exhibit such holistic understanding can a system based on rules consistently do as well as experts.

Are there any exceptions? At first glance, at least one expert system seems to be as good as human specialists. Digital Equipment Corp. developed R1, now called XCON, to decide how to combine components of VAX computers to meet consumers’ needs.
However, the program performs as well as humans only because there are so many possible combinations that even experienced technicians depend on rule-based methods of problem solving and take about 10 minutes to work out even simple cases. It is no surprise, then, that this particular expert system can rival the best specialists.

Chess also seems to be an exception to our rule. Some chess programs, after all, have achieved master ratings by using “brute force.” Designed for the world’s most powerful computers, they are capable of examining about 10 million possible positions in choosing each move. However, these programs have an Achilles’ heel: they can see only about four moves ahead for each piece. So fairly good players, even those whose chess rating is somewhat lower than the computer’s, can win by using long-range strategies such as attacking the king side. When confronted by a player who knows its weakness, the computer is not a master-level player.

In every domain where know-how is required to make a judgment, computers cannot deliver expert performance, and it is highly unlikely that they ever will. Those who are most acutely aware of the limitations of expert systems are best able to exploit their real capabilities. Sandra Cook, manager of the Financial Expert Systems Program at the consulting firm SRI International, is one of these enlightened practitioners. She cautions prospective clients that expert systems should not be expected to perform as well as human experts, nor should they be seen as simulations of human expert thinking.

Cook lists some reasonable conditions under which expert, or rather “competent,” systems can be useful. For instance, such systems should be used for problems that can be satisfactorily solved by human experts at such a high level that somewhat inferior performance is still acceptable. Processing of business credit applications is a good example, because rules can be developed for this task and computers can follow them as well as and sometimes better than inexperienced humans. Of course, there are some exceptions to the rules, but a few mistakes are not disastrous. On the other hand, no one should expect expert systems to make stock-market predictions, because human experts themselves cannot always make such predictions accurately. Expert systems are also inappropriate for use on problems that change as events unfold. Advice from expert systems on how to control a nuclear reactor during a crisis would come too late to be of any use. Only human experts could make judgments quickly enough to influence events.

It is hard to believe some AI enthusiasts’ claim that the companies that use expert systems dominate all competition. In fact, a company that relies too heavily on expert systems faces a genuine danger. Junior employees may come to see expertise as a function of the large knowledge bases and masses of rules on which these programs must rely. Such employees will fail to progress beyond the competent level of performance, and business managers may ultimately discover that their wells of true human expertise and wisdom have gone dry.
COMPUTERS IN THE CLASSROOM

Computers pose a similar threat in the classroom. Advertisements warn that a computer deficiency in the educational diet can seriously impair a child’s intellectual growth. As a result, frightened parents spend thousands of dollars on home computers and clamor for schools to install them in the classroom. Critics have likened computer salespeople to the encyclopedia peddlers of a generation ago, who contrived to frighten insecure parents into spending hundreds of dollars for books that contributed little to their offsprings’ education.

We feel that there is a proper place for computers in education. However, most of today’s educational software is inappropriate, and many teachers now use computers in ways that may eventually produce detrimental results.

Perhaps the least controversial way computers can be used is as tools. Computers can sometimes replace teaching aids ranging from paintbrushes, typewriters, and chalkboards to lab demonstrations. Computer simulations, for instance, allow children to take an active and imaginative role in studying subjects that are difficult to bring into the classroom. Evolution is too slow, nuclear reactions are too fast, factories are too big, and much of chemistry is too dangerous to reproduce realistically. In the future, computer simulations of such events will surely become more common, helping students of all ages in all disciplines to develop their intuition. However, since actual skills can be learned only through experience, it seems only common sense to stick to the world of real objects. For instance, basic electricity should be taught with batteries and bulbs.

Relying too heavily on simulations has its pitfalls. First of all, the social consequences of decisions are often missing from simulations. Furthermore, the appeal of simulations could lead disciplines outside the sciences to stress their formal, analytic side at the expense of lessons based on informal, intuitive understanding. For example, political science departments may be tempted to emphasize mathematical models of elections and neglect the study of political philosophies that question the nature of the state and of power. In some economics departments, econometrics, which relies heavily on mathematical models, has already pushed aside study of the valuable lessons of economic history. The truth is that no one understands the dynamic relationships that underlie election results or economies with anything like the accuracy of the laws of physics. Indeed, every election campaign or economic swing offers vivid reminders of how inaccurate predictions based on simulation models can be.

On balance, however, the use of the computer as a tool is relatively unproblematic. But that is not the case with today’s efforts to employ the computer as tutor or tutee. Behind the idea that computers can aid, or even replace, teachers is the belief that teachers’ understanding of the subject being taught and of their profession consists of knowing facts and rules. In other words, the teacher’s job is to convey specific facts and rules to students by drill and practice or by coaching.

Actually, if our minds were like computers, drill and practice would be completely unnecessary. The fact that even brilliant students need to practice when learning subtraction suggests that the human brain does not operate like a computer.
Drill is required simply to fix the rule in human memory. Computers, by contrast, remember instantly and perfectly. Math students also have to learn that some features, such as the physical size and orientation of numbers, are irrelevant while others, such as position, are crucial. In this case, they must learn to “decontextualize,” whereas computers have no context to worry about.

There is nothing wrong with using computers as drill sergeants. As with simulation, the only danger in this use stems from the temptation to overemphasize some skills at the expense of others. Mathematics might degenerate into addition and subtraction, English into spelling and punctuation, and history into dates and places.

AI enthusiasts believe that computers can play an even greater role in teaching. According to a 1984 report by the National Academy of Sciences, “Work in artificial intelligence and the cognitive sciences has set the stage for qualitatively new applications of technology to education.” Such claims should give us pause. Computers will not be first-rate teachers unless researchers can solve four basic problems: how to get machines to talk, to listen, to know, and to coach.

“We speak as part of our humanness, instinctively, on the basis of past experience,” wrote Patrick Suppes of Stanford University, one of the pioneers in computer-aided instruction, in a 1966 Scientific American article. “But to get a computer to talk appropriately, we need an explicit theory of talking.” Unfortunately, there is no such theory, and if our analysis of human intelligence is correct, there never will be.

The same holds true for the problem of getting computers to listen. Continuous speech recognition seems to be a skill that resists decomposition into features and rules. What we hear does not always correspond to the features of the sound stream. Depending on the context and our expectations, we hear a stream of sound as “I scream” or “ice cream.” We assign the space or pause in one of two places, although there is no pause in the sound stream. One expert came up with a sentence that illustrates the different ways we can hear the same stream of sound: “It isn’t easy to wreck a nice beach.” (Try reading that sentence out loud.)

Without the ability to coach, a computer could hardly substitute for an inexperienced teacher, let alone a Socrates. “Even if you can make the computer talk, listen, and adequately handle a large knowledge data base, we still need to develop an explicit theory of learning and instruction,” Suppes writes. “In teaching a student, young or old, a given subject matter, a computer-based learning system can record anything the student does. It can know cognitively an enormous amount of information about the student. The problem is how to use this information wisely, skillfully, and efficiently to teach the student. This is something that the very best human tutors do well, even though they do not understand at all how they do it.” While he recognizes how formidable these obstacles are, Suppes persists in the hope that we can program computers to teach.
However, in our view, expertise in teaching does not consist of knowing complicated rules for deciding what tips to give students, when to keep silent, and when to intervene, although teachers may have learned such rules in graduate school. Rather, expert teachers learn from experience to draw intuitively and spontaneously on the common-sense knowledge and experience they share with their students to provide the tips and examples they need.

Since computers can successfully teach only novice or, at best, competent performance, they will only produce the sort of expert novices many feel our schools already graduate. Computer programs may actually prevent beginning students from passing beyond competent analysis to expertise. Instead of helping to improve education, computer-aided instruction could easily become part of the problem.

In the air force, for instance, instructors teach beginning pilots a rule for how to scan their instruments. However, when psychologists studied the eye movements of the instructors during simulated flight, the results showed that the instructors were not following the rule they were teaching. In fact, as far as the psychologists could determine, the instructors were not following any rules at all.

Now suppose that the instrument-scanning rule goes into a computer program. The computer monitors eye movements to make sure novices are applying the rule correctly. Eventually, the novices are ready, like the instructors, to abandon the rules and respond to whole situations they perceive as similar to others. At this point, there is nothing more for the computer to teach. If it is still used to check eye movements, it would prevent student pilots from making the transition to intuitive proficiency and expertise.

This is no mere bogeyman. Expert systems are already being developed to teach doctors the huge number of rules that programmers have “extracted” from experts in the medical domain. One can only hope that someone has the sense to disconnect doctors from the system as soon as they reach the advanced-beginner stage.
CAN CHILDREN LEARN BY PROGRAMMING?

The concept of using computers as tutees also assumes the information-processing model of the mind. Adherents of this view suppose that knowledge consists of using facts and rules, and that therefore students can acquire knowledge in the very act of programming. According to this theory, learning and learning to program are the same thing.

Seymour Papert is the most articulate exponent of this theory. He is taking his LOGO program into Boston schools to show that children will learn to think more rigorously if they teach a literal-minded but patient and agreeable student: the computer. In Papert’s view, programming a computer will induce children to articulate their own program by naming the features they are selecting from their environment, and by making explicit the procedures they are using to relate these features to events. Says Papert: “I have invented ways to take educational advantage of the opportunities to master the art of deliberately thinking like a computer, according, for example, to the stereotype of a computer program that proceeds in a step-by-step, literal, mechanical fashion.”

Papert’s insistence that human know-how can be analyzed has deep roots in our “rationalistic” Western tradition. We can all probably remember a time in school when we knew something perfectly well but our teacher claimed that we didn’t know it because we couldn’t explain how we got the answer. Even Nobel laureates face this sort of problem. Physicist Richard Feynman had trouble getting the scientific community to accept his theories because he could not explain how he got his answers. In his book Disturbing the Universe, physicist and colleague Freeman Dyson wrote, “The reason Dick’s physics were so hard for the ordinary physicists to grasp was that he did not use equations . . . He had a physical picture of the way things happen, and the picture gave him the solutions directly with a minimum of calculation. It was no wonder that people who spent their lives solving equations were baffled by him. Their minds were analytical; his was pictorial.”

While Papert tries to create a learning environment in which learners constantly face new problems and need to discover new rules, Timothy Gallwey, the author of Inner Tennis, encourages learners to achieve mastery by avoiding analytic thinking from the very start. He would like to create a learning environment in which there are no problems at all and so there is never any need for analytic reflection.

Our view lies in between. At any stage of learning, some problems may require rational, analytic thought. Nonetheless, skill in any domain is measured by the performer’s ability to act appropriately in situations that might once have been problems but are no longer problems and so do not require analytic reflection. The risk of Gallwey’s method is that it leaves the expert without the tools to solve new problems. But the risk of Papert’s approach is far greater: it would leave the learner a perpetual beginner by encouraging dependence on rules and analysis.
AI ON THE BATTLEFIELD

The Department of Defense is pursuing a massive Strategic Computing Plan (SCP) to develop completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. SCP has already spent about $145 million and received approval to spend $150 million in fiscal 1986.

To bolster support for this effort, the DOD’s Defense Advanced Research Projects Agency (DARPA) points to important advances in AI: expert systems with common sense and systems that can understand natural language. However, no such advances have occurred. Likewise, computers are no more able today to deal intelligently with “uncertain data” than they were a few years ago, when our computerized ballistic-missile warning system interpreted radar reflections from a rising moon as an enemy attack. In a recent report evaluating the SCP, the congressional Office of Technology Assessment cautioned, “Unlike the Manhattan Project or the Manned Moon Landing Mission, which were principally engineering problems, the success of the DARPA program requires basic scientific breakthroughs, neither the timing nor the nature of which can be predicted.”

Even if the Defense Department invests billions of dollars in AI, there is almost no likelihood that this state of affairs will change. Yet once vast sums of money have been spent, there will be a great temptation to install questionable AI-based technologies in a variety of critical areas, from battle management to “data reduction” (figuring out what is really going on given noisy, contradictory data).

Military commanders now respond to a battlefield situation using common sense, experience, and whatever data are available. The frightening prospect of a fully computerized and autonomous defense system is that the expert’s ability to use intuition will be replaced by merely competent decision making. In a crisis, competence is just not good enough.

Furthermore, to justify its expenditures to the public, the military may feel compelled to encourage the civilian sector to adopt similar technologies. Full automation of air-traffic control systems and of skilled factory labor are both real possibilities. Unless illusions concerning AI are dispelled, we are risking a future in which computers make crucial military and civilian decisions that are best left to human judgment.

Knowledgeable AI practitioners have learned from bitter experience that the development of fully autonomous war machines is unlikely. We hope that military decision makers or the politicians who fund them will see the light and save U.S. taxpayers’ money by terminating this crash program before it is too late.
THE OTHER SIDE OF THE STORY

At this point the reader may reasonably ask: If computers used as logic machines cannot attain the skill level of expert human beings, and if the “Japanese challenge in fifth-generation systems” is a false gauntlet, then why doesn’t the public know that? The answer is that AI researchers have a great deal at stake in making it appear that their science and its engineering offspring, expert systems, are on solid ground. They will do whatever is required to preserve this image.

When public television station KCSM in Silicon Valley wanted to do a program on AI to be aired nationally, Stanford AI expert John McCarthy was happy to take part. So was a representative of IntelliCorp, a company making expert systems that wished to air a promotional film. KCSM also invited one of us, Hubert, to provide a balanced perspective. After much negotiating, an evening was finally agreed upon for taping the discussion. That evening the producer and technicians were standing by at the studio, and Hubert had already arrived in San Mateo, when word came that McCarthy would not show up because Hubert was to be on the program. A fourth participant, expert-systems researcher Michael Genesereth of Stanford University, also backed out.

All of us were stunned. Representatives from public TV’s NOVA science series and CBS News had already interviewed Hubert about AI, and he had recently appeared on a panel with Minsky, Papert, philosopher John Searle of Berkeley, and McCarthy himself at a meeting sponsored by the New York Academy of Sciences. Why not on KCSM? It seems the “experts” wanted to give the impression that they represented a successful science with marketable products and didn’t want to answer any potentially embarrassing questions.

The shock tactic worked. The station’s executive producer, Stewart Cheifet, rescheduled the taping with McCarthy as well as the demo from IntelliCorp, and he decided to drop the discussion with Hubert. The viewers were left with the impression that AI is a solid, ongoing science which, like physics, is hard at work solving its quite manageable current problems. The public’s chance to hear both sides was lost, and the myth of steady progress in AI was maintained. The real story remained to be told, and that is what we have tried to do here.
4

How Many Bulldozers for an Ant Colony?

Daniel Crevier
It should be clear by now that intelligence defies the heartiest effort to define it. Yet it is equally clear that an essential ingredient of an intelligent system is its ability to manipulate information. Indeed, this is the only function common to brains and computers. The essential ingredients of information are bits. Just as matter ultimately consists of atoms, all the information that reaches us through our senses can be broken down into little pieces of “yes” and “no.” These particles make up any conversation, scenery, spanking, or caress we experience.

To see how this is possible, consider the spectacle of the sun setting into the ocean. Delicate hues play on clouds and water; a royal alley of gold leads to the sun, and waves twinkle like stars in the reflected light. Yet our brain, which lets us appreciate this beauty, has no direct contact with it. Locked up inside our skull, our thinking organ is just as removed from the sea as would be a computer shuttered in the basement of the Pentagon. In order to appreciate this scene, our brain must reassemble the raw elements of data that our senses supply it as nerve impulses corresponding to yes or no bits of information.

In the case of sight, perception happens as follows: The cornea, a transparent lens on the front of the eye, projects an image of the scene onto the back of the ocular globe. In this respect, the eye works much like a camera, where a lens projects the image on photographic film. The retina, which plays the role of film in our eyes, is a sheet of nervous tissue covering the back of the ocular globe. It contains many light-sensitive nerve cells called “receptors.” Some of these receptors tell other neurons, through a sequence of nerve pulses, how bright the projection of the image is at the location of the receptor. Some other receptors signal how much red, green, or blue the image contains at their locations. (All the delicate hues of pink, orange, and indigo in the sunset correspond to varying mixtures of these three primary colors.)
From “How Many Bulldozers for an Ant Colony?” (281-311) from AI: The Tumultuous History of the Search for Artificial Intelligence by Daniel Crevier. Copyright (c) 1993 by Daniel Crevier. Reprinted by permission of BasicBooks, a division of HarperCollins Publishers, Inc.
The image is turned into discrete pulses in two ways: first, spatially, by becoming an array of dots, with each dot corresponding to the location of a receptor cell; second, in the domain of brightness and color. Colors become mixtures of discrete hues, and brightnesses translate into more or less rapid firings of nerve cells (the larger the brightness, the faster the firing rate). In similar ways, nerve cells in our ears turn the sounds we hear into pulse trains. Sensor cells in our skin do the same for sensations of heat, cold, and pressure. Thus, our brain is constantly bombarded with trains of pulses telling it what our senses perceive.

Intelligence has to do with our ability to manipulate these bits of information and use them to make sense of the world. Animals manipulate the information from their senses in a manner that does not let them generate more than immediate reactions to perceived threats or inducements. We, however, get more mileage out of the information we extract from our surroundings. We can refine it into knowledge and use it for long-range planning and abstract reasoning. Nevertheless, our brains do work on the same principles as those of animals. Dissection shows that any differences between our brains and those of most mammals lie in the size of the structures present and in their complexity. Thus, one can logically conclude that our capabilities for planning and abstract thought are built on the same basic skills that allow animals to react to their environment. Further, this extra power probably stems from the additional abilities for processing information that the more elaborate structure of our brains allows.

Thus, intelligence has to do with how much information one can manipulate in a given time (say, per hour or per second). Since one measures information in bits, one aspect of intelligence, in its most elementary form, is bits per second of raw processing power. If one compares the brain to a telephone switching station, this power would correspond to the number of phone lines the station can switch in a given time. Of course, there is more to intelligence than raw power, but let us not worry about this aspect of the problem right now. Let us just recognize that no matter how superbly structured and programmed the switching station is, it will simply not do its job if it can’t process enough connections in a given time.

In the first part of this chapter I shall try to answer the following questions: How many bits of information can the brain manipulate per unit of time, and how close do our present computers come to this benchmark? I shall then look back at the history of computer development, and try to extrapolate how long it will take for our machines to rise to the level of our brains. Finally, I shall acknowledge that raw processing power is not the only ingredient required for intelligence, and discuss whether software powerful enough to emulate the human mind can be developed for the computers of the future.
THE HUMAN CORTEX AS CIRCUIT BOARD

The exposed human brain is certainly not an impressive sight: about three pounds of soft, jellylike, grayish tissue. This mushy texture long prevented anatomists from cutting the brain into clean thin slices suitable for microscopic observation. Further, the uniform color of the material kept them from seeing structural details. It was only in the late nineteenth century that different hardening and coloring processes, among them the still-used Golgi stain, enabled anatomists to study the fine texture of neural tissue.

It came as no surprise that, like other organs, the brain is made up of cells. They come in varying sizes and shapes, and neuroanatomists called them “neurons.” One feature of the neurons, however, did astonish early researchers, including the Spaniard Santiago Ramón y Cajal and the Italian Camillo Golgi, developer of the staining process. They were astonished by the intricacy and extensiveness with which these cells connected to each other, each sending out literally thousands of tendrils that link it to as many other neurons. They make up a network of such Byzantine complexity that Golgi, for one, firmly believed it formed one continuous tissue extending throughout the brain. He defended this point of view, called “reticularism,” in his 1906 Nobel address.¹ Later observations, as science progressed in its usual tedious, plodding path, proved him wrong. Indeed, as we will see, the gaps between neurons play a crucial role in the workings of the brain.

Early in this century, researchers started to distinguish elements of order in the apparent chaos of brain structure. First, investigators realized that, although neurons can differ from each other as much as a honeysuckle bush does from a sequoia tree, they come in a small number of different shapes and sizes. Only seven kinds of cells with similar exterior morphology exist throughout the cortex (the largest structure in the brain). Moreover, cells very similar to these make up the brains of mammals, from the higher primates down to the puny mouse. The nuts and bolts of our most abstract thoughts are thus the same ones that support the mouse’s instinctive reactions.

The cortex, the brain structure responsible for our perceptions, motor responses, and intellectual functions, is a thin sheet of nerve cells, about six millimeters (a quarter of an inch) in thickness. Its surface area is that of a square twenty inches on a side: roughly the space that IBM’s first personal computers used to take up on a desk. To fit it into our skulls, Nature has had to fold the cortex; hence, the furrowed look of the naked brain. The cortex comprises six distinct layers, caused by an uneven distribution of neurons of different types. The thicknesses and makeup of the layers vary over the area of the cortex. However, except in the visual part, the number of cells per unit area remains fairly constant at 146,000 per square millimeter. (Multiplying this figure by the area of the cortex produces an estimated total number of neurons in it of about 30 billion, or 3 × 10¹⁰.) The average distribution of types of cells throughout the cortex also remains constant. The density of cells per unit area is likewise the same for all mammals. What distinguishes us from the mouse is the area of our cortex, and not the kinds or density of the cells in it.

To an engineer’s eye, the cortex presents striking similarities with a structure universally present in computers: the printed circuit board, a flat, thin support holding integrated circuit chips, which serve as processing elements. The board allows the chips to talk to each other through conductive paths buried in distinct layers over its thickness.
Strangely enough, a typical board comprises six layers, just like the cortex. Each chip on the board is made up of microscopic elements (transistors, capacitors, diodes) of about the size of a neuron, and performs a specific function within the computer.

It turns out that one can also divide the cortex into chips, after a fashion. Experiments conducted on animals by probing the sensory cortex with a microelectrode can detect firing impulses from single neurons. If you move the electrode to and fro, in a direction perpendicular to the surface of the cortex, you will meet only nerve cells that process one kind of stimulus. For example, if buried in the visual part of the cortex, the probe may respond only if you shine a light in the left eye, and not in the right. However, if you probe in a nonperpendicular direction by slanting the needle, you will meet neatly separated regions which respond to the left or the right eye, alternately. It is as if the cortex were divided horizontally into different modules, each extending throughout its depth.

Why did Nature design our cortex as a circuit board? We can conjecture that, in both brain and board, it is necessary to separate the closely interacting processing elements from the cabling connecting faraway parts of the network. As in a circuit board, the cabling of the brain lies underneath the computing units. Large neurons present in the cortex, called “pyramidal cells,” send nervous fibers downward, out of the cortex, toward other regions of the brain or cortex. Covered with an insulating greasy layer of myelin, these fibers make up a whitish tissue, very different in appearance from the gray color of the cortex itself. Only about one cell in a hundred extends beyond the cortex, yet the volume of white matter in the brain is larger than that of gray matter. Having the “white cables” travel through the gray matter would have interfered with the direct communication between adjacent gray cells: this is probably why Nature kept them out of the cortex.

The similarity in structure between the cortex and a circuit board may have yet another reason, related to ease of design. It is already quite complicated to lay out the chips and connecting paths on a flat surface. Doing it in three dimensions would be a combinatorial problem of monstrous proportions. Perhaps Nature was no more willing to face this difficulty than human engineers are! Whatever the reason for it, one cannot contemplate the convergent evolution of brain and circuit boards without wondering.

Let us go back to the basic building block of the brain, the neuron. Its anatomy can tell us more about the amount of computing performed by the brain. Extending from the cell body of the neuron are different appendages. On the input side, the dendrites look like the branches of a tree. They connect with sensor cells, or other neurons, and receive their electric pulses. The meeting points between dendrites and appendages of other cells are actually gaps, called “synapses.” Although researchers had long suspected their existence, they could not prove it before the invention of electron microscopy in the 1950s. Direct observations of synapses then struck a final blow to the theory of nervous system continuity, or reticularism, which Golgi had defended until his death in 1926.

When strong enough, nerve pulses can cross the synaptic gap between cells. Pulses usually increase the electric potential of the receiving cell, which encourages this cell to generate a pulse of its own, or “fire.”
Sometimes, however, arriving pulses decrease this potential, and discourage the receiving cell from firing. The cell body sums (or otherwise combines) the membrane potentials and fires at a rate that is an s-shaped function of this sum. The pulses generated by the cell body travel along a wire-like appendage of the neuron called the “axon.” It may be short or very long: axons sometimes bundle together to form a nerve, which can be as long as your arm. They also form the “white cabling” of the brain I have mentioned. The axon eventually branches out into another treelike network of fibers. These pass along signals to other cells, or activate muscles.

The input and output ramifications of the neuron are its most striking characteristic. Extrapolations from counts on electron micrographs show there are from 10¹⁴ to 10¹⁵ synapses in the cortex. This means that, on average, each neuron receives signals from about 10,000 other cells and sends its own messages to as many others. In this respect, the brain differs markedly from electronic circuits: on a circuit board, one component typically makes contact with fewer than five others.

However, what computers lose in connectivity, they make up for in speed. In the brain, the pulses traveling from neuron to neuron are local imbalances in salt concentrations moving at relatively low speed; exactly how fast depends on the diameter of the nerve fibers, but it is on the order of 100 feet per second. This is why sensory stimuli take from 10 to 100 milliseconds to reach the cortex. In a computer, however, pulses moving from chip to chip are pure electromagnetic fields. They travel at two thirds of the speed of light, about seven million times faster than nerve pulses!
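The firing rule just described, summed synaptic inputs passed through an s-shaped curve, can be sketched directly; the weights and constants below are illustrative, not physiological measurements:

import math

def firing_rate(inputs, weights, max_rate=100.0, threshold=0.0, gain=1.0):
    # Sum the weighted synaptic inputs; excitatory synapses have positive
    # weights, inhibitory ones negative.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Map the total through a logistic (s-shaped) curve to pulses per second.
    return max_rate / (1.0 + math.exp(-gain * (total - threshold)))

# Three excitatory inputs and one inhibitory input:
print(firing_rate([1.0, 0.8, 0.5, 1.0], [0.9, 0.7, 0.4, -1.2]))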
The Brain’s Processing Power

Although our knowledge of the brain’s structure is progressing, it remains sketchy. Recently bold-hearted scientists have tried to use this scanty evidence to estimate the amount of raw computing going on in the brain. I shall examine two such tries, by Jacob T. Schwartz at New York University and by Hans Moravec at Carnegie Mellon University. It will come as no surprise that these professors achieved wildly different results. In fact, the very divergence of these estimates is a good illustration of how little we really know about the brain. Yet, because of the accelerating pace of technological development, even guesses as poor as these provide useful estimates of when we will be able to beat Nature at brain building.

I shall start with the work of Jacob T. Schwartz, a professor at NYU’s Courant Institute of Mathematical Sciences.² Schwartz estimates that since a neuron can fire about one hundred times per second, it sends information to other neurons at a rate of about one hundred bits per second. The amount of information processed inside the neuron is, however, much larger. To decide, every one hundredth of a second, whether to fire, the average neuron has first to combine all the signals it receives from ten thousand other neurons. The cell must then establish whether this total is large enough for it to fire. The decision to fire is complex, especially since some of the messages received from other neurons may inhibit firing rather than promote it. Schwartz estimates that to reach this decision, the neuron must, for each firing at each synapse, perform the equivalent of calculations involving forty bits. Since these operations involve intermediate steps, let’s assume that to simulate them, we have to manipulate one hundred bits of information per synapse, per firing. It is then a straightforward affair to work out the overall amount of information processed by one neuron in one second: 100 bits per synapse per firing × 100 firings per second × 10,000 synapses per neuron = 100 million bits per second per neuron. From there, we get an estimate for the processing power of the entire cortex: 100 million bits per second per neuron × 3 × 10¹⁰ neurons in the cortex = 3 × 10¹⁸ bits per second of information processing. Extrapolating to the entire brain, a total of about 10¹⁹ bits per second results. Thus we have our first estimate: Schwartz’s estimate of brain power is 10¹⁹ bits per second.

Schwartz, however, puts a very strong qualifier on this figure. He points out that computation rates many orders of magnitude lower might suffice to represent the logical operations of the brain. Indeed, it is fairly safe to suppose that what really matters to our thought processes is not the internal mechanics of a neuron, but how it looks to other neurons. This may be considerably simpler than the neuron’s internal structure would suggest. Thus, stick-figure models of neurons may be enough to simulate the brain accurately. Moreover, our brain is built to accommodate a very large amount of redundancy, and much of its complexity may be due to the constraints limiting its growth and evolution (see the section entitled “Avoiding Nature’s Mistakes” later in this chapter).
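Schwartz’s arithmetic can be checked mechanically; a minimal sketch, using only the figures quoted above:

# Schwartz's estimate, recomputed from the figures in the text
bits_per_synapse_per_firing = 100
firings_per_second = 100
synapses_per_neuron = 10_000
neurons_in_cortex = 3e10

per_neuron = bits_per_synapse_per_firing * firings_per_second * synapses_per_neuron
cortex = per_neuron * neurons_in_cortex
print(f"{per_neuron:.0e} bits/s per neuron")   # 1e+08
print(f"{cortex:.0e} bits/s for the cortex")   # 3e+18; about 1e19 for the whole brain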
Hans Moravec calculates the information-processing power of the brain in a manner different from Schwartz’s, concentrating on the retina, the paper-thin layer of nerve cells and photoreceptors in the back of the eye.³ After performing a certain amount of massaging on the information provided by the receptors, the nerve cells send the results of their calculations to the brain through the optic nerve. Such is the structure of the retina that, in effect, it makes up an extension of the brain. Yet, contrary to most brain structures, the functions the retina performs are well understood. They are similar to those of artificial vision systems processing TV images. Of course, we know exactly how much computing power these operations require. Further, and by no coincidence, the resolution (number of receptors) of the fovea, the high-resolution part of the retina, is about equivalent to that of a television image. On that basis, Moravec estimates the processing power of the retina at about one billion operations per second.

He then proceeds to extrapolate from this figure the computing power of the entire brain, and is faced with a dilemma. The brain has about 1,000 times as many neurons as the retina, but its volume is 100,000 times as large. Which figure correctly accounts for the larger computing power of the brain? We can attribute the excess volume to three factors. First, the connections between neurons in the brain are longer: the required cabling takes up most of the space in the brain. Next, there are more connections per neuron in the brain. Finally, the brain contains nonneural tissues, such as the greasy myelin sheath of many nerve fibers. Of these three factors, only one, the excess of connections per neuron, entails an increase in complexity. Following Moravec, we shall thus take the Solomonic decision of awarding the brain a computing power 10,000 times that of the retina. There follows an information-processing capability on the order of 10¹³ calculations, or about 10¹⁴ bits, per second, which is Moravec’s estimate of brain power. This figure is 100,000 times lower than Schwartz’s estimate of 10¹⁹ bits per second.

Moravec’s procedure has one crucial advantage: since it sidesteps the need to rate the unknown factors required to adjust Schwartz’s estimate, we no longer need to guess the effective (“stick figure”) processing power of an individual neuron, and we are also spared the need to assess the unnecessary complexity with which evolutionary constraints burdened our brains. Moravec’s estimate probably lies closer to the truth.

How do computers fare compared with the processing power of the brain? Not well at all. The fastest computer in existence in 1989, the Cray-3, could process only 10¹¹ bits per second. By Moravec’s estimate, it is therefore 1,000 times weaker than the human brain, or at about the level of the laboratory rat, with its brain of 65 million neurons. Further, the Cray-3 is much too expensive to serve in AI work. Researchers must make do with machines like the Sun-4 workstation. At 2 × 10⁸ bits per second, the Sun-4 is 500,000 times less powerful than a human brain, but it would evenly match the 100,000 neurons of a snail!
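The same can be done for the comparisons in this paragraph; all figures are those quoted in the text:

# How 1989-era machines compare with Moravec's brain estimate
brain = 1e14   # bits per second (Moravec)
cray3 = 1e11   # Cray-3, fastest computer cited
sun4 = 2e8     # Sun-4 workstation, typical AI research machine
print(f"Cray-3 is {brain / cray3:,.0f} times weaker than the brain")   # 1,000
print(f"Sun-4 is {brain / sun4:,.0f} times weaker than the brain")     # 500,000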
The Size of Human Memory

As any computer enthusiast knows, the rate at which a given machine calculates is but one measure of its power. Another crucial question is, how much information can the computer hold in memory? Similarly, in respect to the human mind, how well we think is very much a function of how much we know. As it turns out, it is possible to estimate how much memory we need to function in the world. There are three ways to go about this.⁴ One is to repeat what I have just done for the brain’s calculating power: that is, examine the brain’s anatomy and work out estimates from hardware considerations. A more direct method is to survey the knowledge of an average adult. Third, we could also deduce how much adults know from how fast they can learn, and how long they live.

To start with the first approach, what happens in our brain when we remember something? Scientists are still very much in the dark about memory. They know plenty about the periphery of brain operation, such as how we perceive the world through our senses, or how we activate our muscles to act on the world. What happens in between, though, remains very much a conjecture. Researchers do not really know how we settle on one particular response to a perception, or how we store the memories on which we base this decision. One can make plausible assumptions, though. Consider a mouse that flees in response to the snarl of a cat: this instinctive reaction mechanism probably resembles our own. First, sensor cells in the mouse’s ears send nerve pulses to other neurons in the brain, which start firing in response. Thus, an activation pattern of neurons assembles in the mouse’s brain. Eventually, further waves of activation reach the neurons controlling the legs, which send the mouse running.

The snarl of the cat thus corresponds to an activation pattern of neurons in the mouse’s brain. A human brain would represent it in the same way. What happens, then, when we remember the snarl of a cat in response to another cue, such as the sight of an angry cat? It is logical to assume that at least some of the neurons that fired when we last perceived a snarl become active again. By this token, a memory is also a neural activation pattern.

Can we identify in the brain the elements responsible for eliciting such an activation? What can cause a certain group of neurons, among the billions present in the brain, to become active all of a sudden? All evidence points to the synapses, these microscopic gaps between neural terminations. The average neuron makes contact with ten thousand others through synapses of various conductivities. A synapse can let through more or less of the nerve pulses emitted by the source neuron. Of the ten thousand downstream neurons, those that are more likely to fire in response are those with the more conductive synapses. Thus, one can assume that highly conductive synapses connect the neurons representing an angry cat to those representing the snarl of a cat. Hence, synapses probably store the information causing the recall of memories.

If this is the case, we can estimate the capacity of the brain for encoding memories as follows: Assume that the degree of conductivity of a synapse can have sixteen values. Then the synapse can store four bits of information, since a sequence of four bits can represent the numbers from 0 to 15. The 10¹⁵ synapses in the brain would then hold room for 4 × 10¹⁵ bits.

Does this mean the brain can actively use that many bits of information? Probably not. Synapses are just the mechanism that induces patterns of neural activation in response to stimuli or other activation patterns. There are, however, many fewer neurons than synapses. If a memory item corresponds to a group of neurons firing together, then there will be fewer such items than synapses also. In the past few years, AI researchers have devoted much attention to studying networks of artificial neurons. Experimental results, as well as mathematical theory, show that the number of bits one can store in such a net depends on the number of neurons in it. Further, an artificial net can typically store fewer bits than there are neurons in it. For example, a type of neural net known as a Hopfield network, containing n neurons, has a storage capacity of 0.15n bits.⁵ Assuming a similar ratio for the brain, we end up with a capacity of about 15 billion bits of usable memory.
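For readers who have not met one, a Hopfield network is small enough to sketch in full. The toy below stores a few random patterns in Hebbian weights and recovers one from a corrupted cue; the sizes are illustrative and kept well below the roughly 0.15n capacity just cited:

import numpy as np

rng = np.random.default_rng(0)
n = 100                                       # neurons
patterns = rng.choice([-1, 1], size=(5, n))   # 5 patterns, well under 0.15 * n

# Hebbian storage: weights are summed outer products, with no self-connections
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    # Repeatedly update every unit: fire (+1) or not (-1) according to the
    # sign of its summed, weighted inputs, until the network settles.
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Corrupt a stored pattern in 10 of its 100 bits, then let the net clean it up
noisy = patterns[0].copy()
noisy[rng.choice(n, size=10, replace=False)] *= -1
print(np.array_equal(recall(noisy), patterns[0]))   # expect True: pattern recovered

Recall here is content-addressable, like the memory model the chapter proposes: the cue is a degraded version of the memory itself, not an address.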
A more direct way of finding out how much each of us knows is the game of twenty questions. It involves two people. Player A thinks of a subject, and player B must find out what it is by asking questions that A will answer only with yes or no. A wins if B can’t guess the subject in twenty questions or less. The target item must be clearly identifiable and known to both players. Facts that one must deduce from other primary information, like “300,286 is the product of 482 by 623,” do not count. It turns out that a good player can usually come to the answer in just about twenty questions; this fact reveals how many items you have to sift through. The first question lets you partition the other player’s memory in two groups: the items corresponding to a yes answer, and those corresponding to no. The next question divides one of these groups in two again, and so on. Since twenty partitions are required for you to end up with a single fact, the number of items to choose from is clearly 2²⁰, or about 1,000,000. We must, however, correct this figure, because players will typically limit their choices to neutral items of mutual knowledge, such as Marilyn Monroe or the Eiffel Tower. Items known to A only, or too sensitive for casual evocation, will be avoided. It is not farfetched to multiply by another factor of 2 to compensate for this effect: we are now up to 2,000,000 items.

A knottier issue concerns the hidden information corresponding to unconscious or informal knowledge. For example, how much memory does a recipe for how to tie shoelaces, say, take up? What about the knowledge that enables us to interpret body language or voice inflections in people? Such knowledge will never appear as an item of the game. Psychologists consider that most of our mental activities occur at the unconscious level. Are most of our memories, then, also unconscious? Not necessarily. There are two reasons for a mental activity to remain unconscious: one is the repression of painful associated emotions (“I desire mother, but it is forbidden!”); the other reason, which probably accounts for many more cases, is that there is simply not enough room in a person’s awareness for all his or her mental activities. For example, consider your behavior when you drive while carrying on a conversation. You will steer left or right, watch for other cars, slow down or speed up as required without any conscious decision. That does not mean you are ignorant of the technicalities of driving, such as being able to stop by stepping on the brake pedal. You are simply too busy to pay attention to these details. Thus, much unconscious behavior uses facts that could come to full awareness in the twenty-question game. So one might reasonably argue that it is not most of our memories that are unconscious, merely a good deal! Let us boldly “guesstimate” again and multiply by 2 to compensate for this effect: we are now up to 4,000,000 items of memory in a typical human being.
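The counting argument reduces to three lines; the two doublings are the text’s own corrections:

items = 2 ** 20    # twenty yes/no questions single out one item among 2^20
items *= 2         # correction: items too private or one-sided for the game
items *= 2         # correction: unconscious but recallable knowledge
print(items)       # 4,194,304 -- the text's "4,000,000 items"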
60
KNOWLEDGE MANAGEMENT TOOLS
much less information in our brain. If this were so, we would probably have developed a much more concise language and writing. Similarly, if we needed much more information than a few hundred bytes to store these characters, it would mean that our speech is much more efficient than our thinking: this is hard to believe. Let us, therefore, accept that this short biography of Lincoln requires the equivalent of about 265 bytes of storage in our brain.

What about Lincoln's face? Even if we cannot visualize him precisely in our mind's eye, we could certainly recognize his photograph among thousands of others. How many bits of information does one need to store a recognizable likeness of a face? Not much, it turns out. Digitizing TV images into arrays of numbers, with varying resolutions, shows that an image of 20 × 20 pixels (dots) of varying gray levels provides a very recognizable likeness. With 4 bits per dot, it would take 200 bytes to store such a picture, which brings the size of our Lincoln data structure to 495 bytes.

Yet another item of our internal representation of Lincoln is the emotional aura surrounding assassinated presidents. Something in the data structure must be pointing to the emotions awe or sadness. We also require pointers to other items of information: the words congressman or Civil War refer us to other complete data structures. Further, we know that Lincoln was a man and a politician, categories that contain still more information we could tap if required. We can estimate how many bytes these relations require by reference to conventional computer data bases. Some kinds of them, such as relational data bases or the data structures used by the LISP language, require pointers between data items. These pointers serve much the same purpose as in our human memory model above. In a LISP data structure, as much memory is reserved for the pointers as for the data. Thus, let us assume that in the brain, the amount of memory required for relationships between items is equal to the memory required for the data items themselves.

How much information does one require, finally, for the data structure "Abraham Lincoln"? The 495 bytes above correspond to about 4,000 bits of direct information. Doubling this for pointers brings us to 8,000 bits for one item of the twenty-question game. For the 4 million items of information the game and other considerations show we hold in memory, the total would amount to about 32 billion bits. In view of the number of approximations and informed guesses involved, this figure is in surprisingly good agreement with the one we got by assuming 0.15 bits per neuron, or 15 billion bits.

Looking at how we gather information provides yet another cross-check on this figure. As witnessed by the helplessness of a newborn baby, few of our abilities are inborn. From walking to the multiplication table, we must painfully learn virtually all of the skills and knowledge we need to function in the world. Yet, in less than twenty years, we learn the basic material that will support us in life. If we knew how fast we can absorb new information, we could estimate how much of it the basic "human knowledge base" contains.

Certainly, we do not commit new information to memory as fast as our senses feed it to us. The optic nerve, for one, sends over a hundred million bits of
information to the brain each second. Yet we interpret and remember only a tiny fraction of this information. Consider what happens when you try to learn a page of text by heart. At a resolution equivalent to 300 dots per inch, your optic nerve can send over to the brain the entire contents of that page in about a second. Yet, if you glance at an open book for a second, and then read the page in your mind's eye, you'll discover that you can retain at most a few words. Further, rather than corresponding to the image of the words, which requires thousands of bits per letter to describe, your memory will be a highly abstracted description of the words you recognized while glancing at them. This encoding probably requires only a few bits to store a letter. In fact, experiments on memorizing random sequences of syllables show that we can absorb new information only at rates of about 100 bits per second.6

Learning at 100 bits per second means memorizing an entire page of text (about 400 words) in less than three minutes. At that rate, an actor could memorize his lines by reading them once aloud! Yet, even at such breathtaking speed, twenty years of continuous learning at eight hours a day would let you digest only 21 billion bits of information.

We thus have four estimates of the size of human memory. Assessing it from the number of synapses leads to the astronomical figure of 4 million billion bits. But as I pointed out, there is no direct link between the capacity of synapse storage and the number of explicit items represented in the brain. The other three estimates give much lower values: considerations from the number of neurons in the brain and neural-net theory yield 15 billion bits. The twenty-question game leads to 32 billion bits. Learning rate and duration give 21 billion bits. The relatively close agreement of these three estimates lets one hope that the true value lies somewhere in the range they define. Thus, I shall settle on 20 billion bits (2.5 gigabytes) as an estimate of the memory capacity of the brain.

This is still a lot of information: it corresponds to slightly more than a million pages of printed text, or twenty-five hundred books like this one. Can computers store that much information? Yes: by this yardstick, our machines have already overtaken us. The Cray-2 supercomputer, built in 1985, already had 32 billion bits of memory capacity.7 Even AI research budgets allow scientists to come close: as I write this, the typical AI workstation offers about 200 million bits of random access memory, only 100 times less than the brain. . . . The Cyc common-sense knowledge base will be slightly smaller than our estimate of the brain's capacity (about 8 billion bits instead of 20). AI workstations will have that much random access memory at their disposal in a very few years.
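The estimates above involve nothing more than multiplication, so they are easy to re-run. The following sketch in Python simply reproduces the chapter's arithmetic; every figure comes from the text, and the variable names are my own:

```python
# Back-of-envelope check of the brain-memory estimates above.

# Twenty-question game: 20 yes/no questions distinguish 2**20 items;
# double once for items players avoid, once for unconscious knowledge.
items = 2**20 * 2 * 2                     # ~4.2 million memory items

# The "Abraham Lincoln" data structure, in bytes:
name = 15 + 15                            # spelling plus phonemes
biography = 265                           # the short written sketch
face = 20 * 20 * 4 // 8                   # 20 x 20 pixels, 4 bits each = 200 bytes
data_bytes = name + biography + face      # 495 bytes
bits_per_item = data_bytes * 8 * 2        # doubled for LISP-style pointers, ~8,000 bits

game_estimate = items * bits_per_item     # ~3.3e10: "about 32 billion bits"

# Learning-rate estimate: 100 bits/s, 8 hours a day, for 20 years.
learning_estimate = 100 * 8 * 3600 * 365 * 20   # ~2.1e10: "21 billion bits"

print(f"twenty-question game: {game_estimate:.1e} bits")
print(f"learning rate:        {learning_estimate:.1e} bits")
```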
REACHING HUMAN EQUIVALENCE

Despite this essential parity in memory, machines still process information thousands of times more slowly than we do. Because of its myriads of cells working together, it is tempting to compare the brain to an ant colony. Computer engineers, by contrast, like to think of their mainframe machines as bulldozers shoveling mountains of data through their unique central processors. Puny bulldozers, indeed, since it would take thousands of them to match the ant colonies in our skulls. This fact certainly explains many of the disappointments AI researchers have met. If the brain is a jet engine, they have to make do with the equivalent of bicycles! Rather than uncover the secrets of intelligence, they must spend most of their time programming around the weaknesses of their machines. Yet, as we will see next, engineers are closing the gap. Soon computers will approach the power of the human brain.
The Fifth Generation of Computers

As I described . . . , the first generation of computers was based on vacuum tubes: orange-hot filaments glowed in various computing machines from 1943 to 1959. Even during those years, progress in vacuum tube technology cut down by a factor of 20 the time needed to perform an addition. The gain in cost per unit of computing power was even more impressive. In 1943, it cost about one hundred dollars to buy one bit per second of computing power. Sixteen years later, it cost less than ten cents.

Generation 2, based on single transistors, accounted for most of the new machines until 1971. From then on, computers were built out of integrated circuits: silicon chips containing first a few, then hundreds, and finally thousands of microscopic elements. These formed the third generation of computers, which lasted until 1980. Around that year, it became possible to put the entire processing unit of a computer on a single chip; and by 1985, these microprocessor chips contained up to a quarter of a million elements. Thus was born the fourth generation of computers.

As I write this in the early 1990s, the upward spiral of computer power continues to accelerate. If you've ever pondered the economics of replacing an aging computer, you may have felt a kinship with the future space traveler faced with the starship problem: a better time to leave is always next year, because by then ships will be faster and will get you to your destination sooner. So it is with computers: next year's machine will, on average, offer 50 percent more computing power than this year's for the same price.

Let me demonstrate this tendency by focusing on two particular machines.8 The Zuse-2, the first electromechanical computer, built in 1939 by the German engineer Konrad Zuse, would have cost about $90,000 in today's money and took 10 seconds to multiply two numbers. By contrast, the Sun-4 workstation, introduced in 1987, cost $10,000 and can multiply two numbers in 400 nanoseconds. In raw power, measured by the admittedly crude yardstick of the time required to multiply two numbers, the Sun-4 is 25 million times faster than its predecessor. If we consider the cost per unit of computing power, the comparison is even more favorable: it costs 225 million times less to do a multiplication with the Sun-4 than with the Zuse-2.
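Both ratios quoted in this paragraph follow directly from the four figures given. A quick check, taking price times time-per-multiplication as a rough proxy for the author's "cost per unit of computing power":

```python
# Raw-speed and cost comparison of the two machines cited above
# (prices and timings from the text).
zuse_cost, zuse_time = 90_000, 10.0       # dollars; seconds per multiplication
sun_cost,  sun_time  = 10_000, 400e-9     # dollars; 400 nanoseconds

speedup = zuse_time / sun_time            # 2.5e7: "25 million times faster"

# Cost per unit of computing power ~ price x time per operation.
cost_ratio = (zuse_cost * zuse_time) / (sun_cost * sun_time)

print(f"speedup:    {speedup:.1e}")       # 2.5e+07
print(f"cost ratio: {cost_ratio:.2e}")    # 2.25e+08: "225 million times less"
```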
To understand the staggering implication of these figures, consider what similar improvements would bring about if applied to automobiles. A luxury car of 1938, say a Cadillac, would have cost about $30,000 in today's money. It reached a top speed of 60 miles an hour and traveled about 15 miles on a gallon. If today's Cadillac were to the 1938 car what the Sun-4 is to the Zuse-2, it would cost only $3,300, run at twice the speed of light, and do 3 billion miles per gallon!

No fairy waved a wand suddenly to induce these changes. If one compares the relay-activated Zuse-2 to electronic machines introduced right after it, and progresses on to the Sun-4, there are no abysmal drops in price or dramatic improvements in performance anywhere. Instead, a smooth evolutionary process is revealed. Hans Moravec courageously calculated and plotted the cost per unit of computing power of sixty-seven consecutive machines, starting with a mechanical calculator built in 1891.9 These data points clearly show that, for the past sixty years, the cost of computing has decreased by a constant factor of 2 every other year. As a result, the mainframes of the 1970s are the desktoppers of today. Mainframes of the 1960s can now be stored in a single chip, and many electronic wristwatches contain more elements than these early machines did!

Although speculators who blindly extrapolate stock prices from past tendencies usually end up broke, there are sound arguments for applying yesterday's trends to tomorrow's computers. The staggering progress of the past sixty years stems from profound structural processes in the evolution of the technology. Since we are also dealing with the behavior of an industry, many of these arguments are economic rather than technical. In fact, one could even argue that the brain-equivalence problem is essentially economic. Indeed, it is now technically possible to build a machine with the raw computing power of the brain if we connect a thousand Cray-3 supercomputers. And, even though managing such an interconnection, and programming it to perform like a human brain, would still raise formidable problems, the raw power would be available for us to experiment with. Before we could do that, however, we would have to deal with the small matter of finding the twenty billion dollars this network would cost. Thus, the problem of building human-equivalent hardware boils down to reducing the cost of processing power to affordable levels. Let me examine, therefore, why we can expect the sixty-year-old trend of decreasing prices to continue.

First, the regularity of the price curve is to a large extent the result of a self-fulfilling prophecy. Manufacturers, aware of the tendency, plan the introduction of new products accordingly; hence the absence of drastic jumps in cost/performance ratios. Manufacturers introducing a product well ahead of the competition in performance have no incentive to reduce prices drastically, even if their costs are much lower. Instead, they pamper their profits for a while, until competition forces them to accept lower prices.

Competition in the computer industry is fierce. To grab a share of a $150-billion world computer market, companies are willing to scramble.10 For this reason, the number of people developing computers, and the resources at their disposal, are on the rise. Since computer companies spend a constant fraction of their revenues on research and development, resources for computer development
grow about as fast as the computer market. They have pushed ahead by about 15 percent a year since 1960. Since this growth is much faster than that of the economy, it will slow down eventually. (Otherwise, continued growth would lead to the impossible situation of everybody developing or building computers.)

Even if the number of people and dollars devoted to computer development levels off, the total intellectual resources available for this activity will still increase exponentially. The reason is that computers are largely designed by other computers. Indeed, involving computers in their own conception can have dramatic effects. Consider the problem of planning the paths of metallic traces on printed circuit boards. In the assembled board, these traces connect together the pins of different processing chips. They all have to fit in the restricted area available on the board, while maintaining minimum distances between each other. Typically, out of a multitude of possible combinations of paths, only a few satisfy these constraints. In the 1960s and 1970s, laying out these paths with pencil and ruler used to take months. Worse, changing a design after testing took almost as long as starting anew. Nowadays, computers perform this layout automatically in a matter of hours. Similar gains occurred in implementing these procedures at the chip level: integrated circuits are also designed by computers. In coming years, ever more powerful computers will gradually assume a larger part of the design and construction of their successors, further speeding up the design (or reproduction) cycle.

Economies of scale should also speed up the rate of price decrease. Present computers typically contain only one, expensive, processing unit. Future machines, however, will consist of thousands, and eventually millions, of identical components which will serve as both memory and processors. Manufacturing these components in such large quantities will give rise to economies of scale comparable to those affecting memory chips. Since there are many memory chips in a computer, they come down in price faster than processing chips. Recognizing these new economics, the Defense Advanced Research Projects Agency has set itself the goal of doubling the pace of cost reductions in coming years. From now on, instead of multiplying computer power by 2 every other year, the U.S. government hopes to double it every year.

But even if we devote ever larger resources to perfecting electronic components, little will result if Nature does not cooperate. Aren't we coming up against basic natural barriers that cannot be overcome? Won't we soon bump our noses against the outer limits of computation?

One obvious boundary which is fast approaching is that of the speed of light, or, as computer scientists sometimes call it, the "Einstein bottleneck." In a conventional computer equipped with a single processing unit, information flows between the memory and the lone processor as acrobats leap between swings. Infinitesimal errors in timing lead to murderous crashes, and the entire computer must operate like a finely tuned clock. Indeed, an electronic master clock beats time, keeping all components in lockstep, as inexorably as the drummer in a slave ship. For the drummer to be obeyed, there must be time enough for one beat to reach all parts of the computer well before the clock generates the next beat.
The beats are electric signals: the fact that they travel at close to the speed of light, but no faster, imposes a limit on the frequency of the clock. For example, the time required for a signal to travel the entire width of a 1-foot-wide computer is 1.76 nanoseconds. This implies a maximum clock frequency of 568 million beats per second (568 megahertz, in computerese). Many desktop computers already operate within a factor of 10 of that limit.

One solution would be to keep walking the path we have so profitably followed since the invention of the transistor: that of miniaturization. Let us make the components smaller and cram them into a tighter space. The signals will have less distance to travel, and we can then speed up the clock. Alas, this approach immediately bumps into another obstacle: heat removal. Not only will crowding more components into a smaller space increase the amount of heat generated, but having them work faster will also increase the heat generated per component. Evacuating this extra heat to keep the machine from melting down requires, when it is possible at all, feats of technological prowess. These complications just about cancel any economies brought about by the extra miniaturization.

Nature herself presents us with a way out of this blind alley. Compared with those of computers, the components of the human brain operate at positively slumbering rates. A neuron will generate about a hundred impulses per second, as opposed to the millions of beats per second of a digital computer's clock. Being so sluggish, each neuron generates little heat, and we normally find it easy to keep a cool head. Yet the brain packs about a thousand times the information-processing capability of our fastest computers. This performance is due to the brain's different mode of operation.

Von Neumann's suggestion to break up computers into a memory and a CPU initially offered obvious advantages. It was possible to manufacture the memory as many low-cost, identical cells. Since there was only one processor, it could be as complex as required to perform a variety of logic or arithmetic functions. Further, this layout reduced programming to the relatively simple task of issuing a single string of instructions to the one processing unit. Unfortunately, the setup also introduced a major inefficiency that became clear when memories increased in size to billions of bits. So huge is the memory of a modern computer that it compares to a large city. (Indeed, it could store the names and addresses of all inhabitants of New York or Los Angeles.) A metropolis of its own, the processing unit also houses millions of elements. The problem is that only one road connects these two cities. It takes a long time to travel and allows through only a very few bits of information at a time.

Running a program in a von Neumann computer is like moving your household from New York to Los Angeles in the following senseless fashion. First load the TV set in your car, drive it to Los Angeles, and come back. Take the laundry iron, drive to LA, and come back. Take the coffee pot, drive to LA, and so on. To improve matters a bit, computer manufacturers have recently tried flying between the cities instead of driving: they have improved the data-transfer rates between the parts of a computer. Unfortunately, this amounts to speeding up the circuitry, and soon bumps into the speed-of-light and heat dissipation limitations I mentioned.
An analogue to the obvious solution, using a van to move all the items of your household at once, is not possible in a computer. Each bit transferred between processing unit and memory requires a separate wire, and there is a limit to how many of these can be crammed into a machine. Over the years, manufacturers have widened the data path from 8 bits at a time to 32, and even to 128 for large machines. A small improvement, this amounts to little more than letting you move both the coffee pot and the laundry iron together!

In terms of my two-city analogy, the solution adopted by our brains is surprising. It consists in moving Los Angeles to New York and mingling the two cities so you don't have to move at all! In my fable, New York plays the role of processing unit, and Los Angeles, that of memory. It turns out that the brain makes no distinction between these two functions: each neuron serves as both a memory and a processing unit. In the brain, there is indeed no clearly identified center of intelligence comparable to the processing unit of a von Neumann computer. For example, despite long-standing efforts, neurologists have never been able to pinpoint a center of consciousness. The neurosurgeon Wilder Penfield suggested it might lie in the combined action of the upper brain stem and various areas of the cerebral cortex.11 Others pointed out that consciousness has to do with laying down and recalling memories of the world. In this case, the hippocampus, which plays a central role in this function, might qualify as the seat of consciousness.12 Another view holds that our ability to communicate makes up our most obvious mark of intelligence: the language centers, located in the left cerebral cortex, would then bear the palm.13 There are, however, good reasons to believe that, although various parts of the brain handle special functions, consciousness arises from the combined operation of many areas.14 Likewise, long-term memories, once laid down, appear distributed throughout large areas of the brain.

What are the advantages of this distributed configuration over the von Neumann architecture? For starters, it allows the brain to perform a myriad of operations concurrently. This is how your brain can analyze the torrent of information your eyes send it (millions of bits per second), and let you instantly recognize what you're looking at. Roughly speaking, your brain separates the image into hundreds of thousands of dots, each separately analyzed by several neurons. A pure von Neumann machine would, by contrast, slowly process each dot in succession. Computer scientists call "parallel processing" the simultaneous application to a single task of many processors, be they neurons or computers.

In addition to the speedup inherent in getting more workers on the job, applying parallel processing to computers offers another potential advantage: it amounts to nothing less than breaking the light-speed barrier. Since processors in a parallel machine work separately, they are no longer enslaved to the drumbeat of a central clock. These semi-independent units can now be made as small and quick as we want them. Through such parallelism, Nature will allow us to keep on increasing the power of our computers at a steady rate for a long time to come. Eventually, individual processors will reach microscopic dimensions. The emerging science of nanotechnology15 will soon let us build structures in which every atom plays its assigned role.
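The Einstein bottleneck described above reduces to a one-line formula: a synchronous machine's clock can beat no faster than a signal can cross the machine. A minimal sketch, using the 1.76-nanosecond crossing time quoted earlier (the halved-width case is my own extrapolation of the same argument):

```python
# Maximum clock rate of a synchronous machine: one clock beat must reach
# all parts of the machine before the next beat is generated.
crossing_time = 1.76e-9              # seconds for a signal to cross a 1-foot machine
max_clock_hz = 1 / crossing_time     # ~5.68e8: the "568 million beats per second" above

print(f"1-foot machine: {max_clock_hz / 1e6:.0f} MHz")

# Halving the width doubles the ceiling -- the case for miniaturization,
# until heat removal gets in the way.
print(f"6-inch machine: {2 * max_clock_hz / 1e6:.0f} MHz")
```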
By common agreement among computer scientists, fifth-generation machines are those that implement parallel processing in an extensive way. A few of these machines are now in existence: for example, the Connection Machine, built by Thinking Machines, Inc., of Cambridge, Massachusetts, with 250,000 processors. As I said . . . , machines based on neural networks, in which microscopic components will emulate the neurons of our brains, are being contemplated.
Duplicating the Brain's Processing Power

I can now attempt to answer the question raised earlier: How long before we close the gap in processing power between computers and the human brain? I have summarized in tables 4-1 and 4-2 the earlier estimates of the brain's computing power and information-storage capacity. Since we still know little about how the brain works, different avenues of investigation lead to extremely different results. The two estimates I cited for the information-processing capacity of the brain (table 4-1) differ by a factor of 100,000. My estimates for the memory capacity of the brain (table 4-2) do not fare any better, being six orders of magnitude apart. Our mightiest computers offer only an insignificant fraction of whichever value we adopt for the brain's processing power.

How long will it take for the upward spiral of hardware progress to close this gap? Various answers appear in tables 4-3 and 4-4. Despite the arguments for a speedup in the rate of computer improvement, I have taken the conservative view that the sixty-year-old tendency of doubling every other year persists. I have taken as benchmarks in tables 4-3 and 4-4 the Cray-3 supercomputer, built in 1989, and the Sun-4 workstation, built in 1987. These tables list the years in which machines of a cost equivalent to the Cray-3 (about $10 million) and the Sun-4 (about $10,000) should reach human equivalence. The "best case" columns correspond to the weaker estimates of brain-processing power and memory in tables 4-1 and 4-2; the "worst case" columns correspond to the stronger estimates.
TABLE 4-1  Two Estimates of the Computing Power of the Brain

  Argument                                          Estimate
  Detailed modeling of neurons (Schwartz)           10¹⁹ bits per second
  Comparison of the retina with similar hardware    10¹⁴ bits per second
TABLE 4-2  Various Estimates of the Information Storage Capacity of the Brain

  Argument                                   Estimate
  Raw synapse storage                        4 × 10¹⁵ bits
  Neural-net theory and number of neurons    15 × 10⁹ bits
  20-question game                           32 × 10⁹ bits
  Human learning rate and duration           21 × 10⁹ bits
TABLE 4-3  Estimates for the Year in Which Supercomputers Will Reach Human Equivalence

                      Best Case    Worst Case
  Processing power    2009         2042
  Memory              1989         2023
TABLE 4-4  Estimates for the Year in Which Desktop Computers Will Reach Human Equivalence

                      Best Case    Worst Case
  Processing power    2025         2058
  Memory              2002         2037
The large discrepancies between estimates make remarkably little difference to the dates. According to table 4-3, if the weak estimate is right, supercomputers will attain human equivalence in the year 2009. If the strong estimate holds, this sets us back only thirty-three years, to 2042! Indeed, if computer power doubles every other year, thirty-three years is all it takes to improve by a factor of 100,000. Also, as is clear in both tables, the roadblock is processing power, since we will always reach the required memory about twenty years earlier. From the first line of table 4-3, supercomputers will attain human equivalence around 2025,* give or take seventeen years. According to table 4-4, desktop machines will have to wait until 2041, with the same error margin.

*Recent developments indicate that this date may be advanced: the September 1992 issue of Spectrum (the journal of the Institute of Electrical and Electronics Engineers) noted that "engineers . . . expect teraflops machines by 1996. . . . Price estimates exceed U.S. $100 million" (page 40). A teraflops is the approximate equivalent of our weaker estimate for brain power. This power, however, will come at ten times our target price of $10 million. Further, the degree of specialization of these early teraflops machines makes it unlikely that any amount of programming could endow them with intelligence.
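The dates in tables 4-3 and 4-4, and the thirty-three-year observation, all fall out of one formula: closing a gap G at one doubling every two years takes 2 log₂ G years. A sketch of that projection; the gap sizes are my own back-calculated assumptions, chosen to be consistent with the chapter's figures (the brain at roughly a thousand times our fastest computers on the weak estimate, and 100,000 times more on the strong one), not values stated explicitly by the author:

```python
from math import log2

def equivalence_year(start_year, gap, doubling_period=2.0):
    """Year when hardware that doubles every doubling_period closes 'gap'."""
    return start_year + doubling_period * log2(gap)

# Cray-3 (1989) against the weak estimate of brain processing power
# (assumed gap: a factor of 1,000, as suggested earlier in the chapter):
print(round(equivalence_year(1989, 1e3)))   # 2009 -- table 4-3, best case

# ...and against the strong estimate, 100,000 times larger:
print(round(equivalence_year(1989, 1e8)))   # 2042 -- table 4-3, worst case

# Whatever the starting gap, the two dates differ by 2 * log2(1e5),
# about 33 years.
```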
After these dates, we can expect our machines to become more clever than we are. We have already done Nature one better in all the physical abilities of the human body. Our machines are stronger, faster, more enduring, and more accurate than we are. Some of them have sharper eyesight or hearing. Others survive in environments that would crush or suffocate us. Shouldn't we expect to improve upon our mental abilities just as well?
Avoiding Nature's Mistakes

Not only can we build into our machines the strength of our minds, with eventually much to spare, but we can also avoid duplicating the many weaknesses and inefficiencies of our brains. Indeed, when building artificial minds, we enjoy much more freedom than Nature had in building us.

First, we are free of the limitations on material and structure imposed on biological organisms. Living cells must grow, reproduce, repair themselves, and move over to their proper positions in the body early in life. They must constantly absorb nutrient material from their environment and evacuate waste. Most of their internal structure and functions serve these ends. An artificial neuron, however, would not have to perform any of these tasks. Its function would reduce to generating electric signals similar to those of a biological neuron. Thus, we can expect the structure of an artificial neuron to be much simpler than that of a natural one. Further, it could use materials that transmit impulses millions of times faster than protoplasm, and process signals that much faster.

Yet another limitation our machines will dispense with has to do with blueprints. Nature's blueprint for our bodies, the DNA molecule, does not contain enough information to specify the connections of each neuron. Instead, Nature must make do with general instructions issued to entire classes of brain cells. What we know of neuroembryology shows that in the early stages of life, brain cells emit filaments that travel through brain tissue. These filaments eventually form the dendrites and axon of the adult cell. They travel more or less at random, until their ends meet cells of a kind that chemically attracts them. The filaments then bind to these cells in connections that become synapses.

To understand the limitations this mode of construction places on the brain's performance, consider the following fable, which I have called "Harry's Plight." Harry, an electronics engineer, has just taken charge of a new computer assembly plant in the remote country of Ogomongo. Harry has accepted a mission no one else in the company wants: to design a computer that the unskilled workers of Ogomongo can assemble easily from component chips. To Harry's dismay, it soon becomes clear that the Ogomongans are incapable of reading a connection diagram. Neither can they tell apart chip models, except by their colors.
Since differentiating the pins on the chips is also a little hard for Ogomongans, Harry has to settle for mounting instructions that typically read: "Connect any pin of a green chip to any pin of a yellow chip." It is now Harry's considerable challenge to design chips of a kind that will, when connected in this haphazard way, produce a computer.

To increase the chance that compatible pins on different chips will connect, Harry first increases the number of pins per chip. Second, he adds some intelligence inside the chips and decrees that each newly assembled computer will undergo a "running in" period of a month. During this time, each chip sends out, through each of its pins, exploratory low-voltage pulses. It also listens to pulses emitted by other chips. Each pin of each kind of chip emits a characteristic pulse pattern, enabling the chips on each side of a connection to check the validity of this link. Specially designed internal mechanisms break off connections of the wrong kind. Connections of the right kind are maintained and strengthened. Much to the surprise of Harry's colleagues, and somewhat to his own, this Rube Goldberg procedure eventually does produce a working computer. There is only one snag: the machine is a hundred times bulkier and more expensive than a conventional number cruncher. Harry's company wants to close the plant. The Ogomongans, however, feel it improves their international image, and insist on buying it. Since Ogomongo sits on newly discovered oil fields amounting to half the world's reserves, they can well afford to. This is all fantasy, of course, but any resemblance to existing biological processes is intentional!

In addition to the indiscriminate assembly of its parts, the brain suffers from their lack of reliability. A neuron is a complex, delicately balanced mechanism, and we all lose hundreds of thousands of them every day. Yet we do not feel any the worse for this loss, because the brain compensates for it with a large amount of redundancy in its circuits. Computers also benefit from redundancy, and engineers are now finding ways to let their machines tolerate minor component failures. Yet they must pay a price: making computers more resilient requires more components. For this reason, building an intelligent machine out of parts simpler and more robust than neurons would increase its performance.

The brain evolved through a process of small-scale, local changes spanning millions of years. It embodies many elegant features that an intelligent designer would not disown. The layered, circuit-board-like structure of the cortex is a prime example. Yet the brain's overall architecture expanded gradually, without the benefit of advance planning, and it shows. In many respects, the brain is like a Midwestern country schoolhouse turned into a major city high school. It started with a one-room cabin with a wood stove, spacious enough to accommodate the children of the first few settlers. Then came the railway station: the school needed another room to handle the suddenly doubled population. Over the years, classrooms multiplied. To keep a studious atmosphere, workers had to pare precious square feet from each room to set up linking corridors. Later, installing indoor plumbing and electricity required major surgery, which gave the principal a severe headache. When it became necessary to add a second floor on a structure never meant for one, the mayor and the city engineer almost came to blows.
The aldermen, leery of raising taxes for a whole new building, finally overruled the engineer
and hired a contractor themselves. Twenty years later, congested plumbing and air conditioning, corridor traffic jams, and wavering lights prevented any further expansion of the school. The city council voted the site into a park and erected a new school elsewhere.

Evolution does not have the option of starting over, and our brains still contain the original cabin cum wood stove. Lemon-sized, it grows out of the upper end of the spinal cord. The reticular formation16 is in fact the brain of our reptilian ancestors. Programmed to stake out a territory and attack prey or enemies, it holds our darker instincts. Wrapped around the reptilian brain is the limbic system, or old mammalian brain: this is the school's second floor. Developed from centers that govern smell in primitive mammals, the limbic system is the seat of emotions. It enabled our warm-blooded ancestors of a hundred million years ago to care for their young. Its programming often contradicts the reptilian brain, and many of our internal conflicts have no other origin. The cerebral cortex holds our higher reasoning functions and forms the outer layer of the brain. It talks to the inner parts through many nerve fibers, which somehow coordinate its action with theirs. The cortex has no equivalent in our fictional country school. At that level, a human architect trying to design a better brain probably would have started over.

Our old friend the retina offers a striking example of how Nature evolves impressively elaborate fixes to make up for no longer adequate structures. Evolution hit upon the retina's peculiar layout early in the development of vertebrates, and its unthinking mechanisms later locked it into place. As you recall, the retina includes photoreceptors, which turn light into nerve pulses, and layers of nerve cells that preprocess the image. These cells pack the number-crunching power of a modern mainframe computer. Nature made the early mistake of placing them up front, so light must pass through the cell layers to reach the photoreceptors. This arrangement put a major design constraint on the data-processing part of the retina: just imagine IBM trying to make its computers perfectly transparent. Yet Nature rose to precisely that challenge in evolving our eyes: the nerve cells in the retina are transparent. There is yet another difficulty: the nerve cells' position forces the optic nerve to pass through the photoreceptive layer to reach the brain, creating a blind spot in our field of vision. We do not see it because, through more sleight of hand, our brain interpolates from neighboring parts of the image and covers up the blind spot.

"What if it wasn't an early blunder?" you may ask. "Couldn't a constraint we do not realize make this roundabout design the only possible one?" It seems not, because the independently evolved octopus and squid do have their photoreceptors up front.

In a classic paper in Science, the eminent biologist François Jacob maintained that evolution is not a rational designer but a tinkerer. He illustrated his point with many more examples of biology mixing slapdash foundations with prodigies of workmanship.17 Many find this iconoclastic view of evolution shocking. Indeed, imperfection in Nature's creations contradicts many an ecologist's view of cosmic order. Personally, I find Nature's ability to make up for its mistakes and keep forging ahead more impressive than the blunders themselves. And who knows: perhaps
creating our brains was a crucial step in this self-correcting process? Now that we realize our imperfections, we may help weed them out of the next batch of intelligent beings. As in our other duplications of natural functions, we will probably discover in building artificial minds that it pays to design a little differently than Nature. Much as airplanes have wings but do not flap them, intelligent machines will operate on the same principles as their natural equivalents, but exploit these principles better. Streamlined, robust, and faster, they may well surpass our minds the way airliners do sparrows.
SOFTWARE: THE STRUGGLE TO KEEP UP

So far I have compared the brain to a telephone switching station and looked only at how fast it can switch lines. I have neglected the fact that the switching station has to be wired to make the right connections. Since there is a lot more to intelligence than simple line switching, it is time to ask this question: If we do develop, in the early part of the next century, hardware powerful enough to process as many bits per second as the brain does, will we be able to program intelligence into this hardware? In other words, will software progress follow hardware development?

If the past is any indication, hardware and software development are closely linked. In general, software needs can provide the motivation for, and point the way to, appropriate directions in hardware development. Conversely, weaknesses in hardware can not only act as a powerful brake on software development but also divert it into blind alleys.

There is no question that the relative inadequacy of early computers hindered early progress in artificial intelligence. Marvin Minsky recalled for me how early researchers (himself included) would toil for years over a program, only to see it founder over lack of memory.18 For example, Ross Quillian, the inventor of semantic nets, never could test his theory on word disambiguation simply because the computers of the mid-1960s couldn't hold enough word definitions (see chapter 4). We also saw in chapter 6 that the advent of expert systems had to await the availability of computers with enough memory to hold large amounts of knowledge and the programs needed to quickly sift through it.

Early AI work was deliberately performed in toy tasks that did not require much special knowledge. This mind-set became so ingrained that researchers didn't always realize that they were programming around their machines' weaknesses instead of addressing the real issues. Consider SHRDLU, the talkative block-manipulating program that made up the wonder hack of the early 1970s. . . . Carl Hewitt, who invented the PLANNER language used for SHRDLU, pointed out the following to me:
PLANNER performed so well because, and we weren't so conscious of it in those days, by working on only one aspect of a problem at a time it accommodated itself to the very small machine memories we had then.
When it explored possible solutions to a problem, it went down one single branch and only used the amount of storage needed for that one branch. If that solution didn't work out, it would backtrack, recover all the storage, and try another branch.19

Despite such craftiness, Marvin Minsky told me, SHRDLU still required a formidable amount of memory by the standards of the time. Without belittling the role of MIT's researchers, I might add, DARPA's financial largesse probably counted as much as their genius in the success of such projects. "The MIT AI Laboratory," Patrick Winston remembered, "took delivery of the first megabyte memory. It cost us a million dollars, a dollar a byte." He added ruefully, "It is strange to think that I carry ten megabytes in my portable PC these days."20

Re-examining the history of AI in the light of unrealized hardware constraints can lead to interesting revisions of accepted explanations for why the field took certain orientations. For example, although the demise of neural networks in the 1960s is widely attributed to Minsky and Papert's implacable criticism of Perceptrons in their book of that same title, Carnegie Mellon's James McClelland, a major contributor to the revival of this field in the mid-1980s, suggested an alternative explanation to me. He pointed out that most research on neural networks involves simulating them on digital computers:
I don't believe it was that book per se which discouraged Perceptron research in the 1960s. I think what actually happened is that the world wasn't ready for neural networks. A certain scale of computation is necessary before simulations show that neural networks can do some things better than conventional computers. The computing power available in the early sixties was totally insufficient for this.21

Patrick Winston also believed that hardware limitations have often led researchers to make wrong turns on their paths towards progress: "We are discovering that a lot of ideas we once rejected on the grounds of computational impracticality have become the right way after all." He explained how conventional robots control their movements by constantly recalculating the control signals they send to their arms. Recent experimental robots, however, can use their increased memories and parallel processors to learn gradually by experience which efforts to exert under given circumstances. "This idea had been rejected twenty years ago," continued Winston, "and a lot of the efforts that went into motion dynamics and the mathematical approach now seem somewhat misplaced. In my view, one of the milestones of AI research over the last five years is the realization that we can do things on vastly parallel computers that we couldn't do before." However, Winston was quick to point out that hardware progress will not solve all of AI's problems. Raising a cautioning finger, he added:
Don’t infer from what I said that we should just stop software research for twenty years and wait for the hardware to catch up. In fact, I’m a little schizophrenic on the subject of hardware. I’m saying, on the one hand, that
the availability of better hardware allows the discovery of new ways of doing things. At the same time, I think we could do a whole lot more with the hardware we've got. In fact, I believe it would take us ten years' worth of current software research just to catch up with the hardware we have now.
Minsky was, for his part, convinced that if hardware had constituted a bottleneck until the 1970s, the shoe was now on the other foot: that, in the 1980s, software turned into a millstone around AI's neck: "The machines right now could be as smart as a person if we knew how to program them." Minsky's former student David Waltz later elaborated on this point for me:

In the old days, machine memories were too small to hold the knowledge researchers wanted to pour into them. Now it's the other way around: you aren't ever going to fill the new machines with hand code. Nowadays almost all research on learning is really aimed at making use of hardware in a better way. Ideally, you should only have to hand-code certain initial circumstances into the machine. You would then feed it experience in some form, which would allow the computer to acquire new knowledge on its own.22
CONCLUSION

Thus it would appear that AI software scientists have stepped into seven-league boots too large for them. Their hardware colleagues have outfitted them with machinery they can't quite handle. Will AI software developers, then, remain hopelessly behind? Probably not: throughout the history of computer science, hardware and software development have kept leapfrogging each other. Software developers, periodically overwhelmed by hardware suddenly grown ten times as powerful, soon push it to its limits and start clamoring for more speed and memory. Because of the subject matter's complexity, it hadn't happened before in AI. Yet there is no reason to believe that AI software won't take the lead again, and it may already have happened in areas of AI other than symbolic reasoning. For example, in the Autonomous Land Vehicle project, which fell short of its objectives . . . , more powerful vision hardware might have made all the difference.

Finally, although most of the AI programs described in this book ran on computers with about as much processing power as a snail's brain, these programs appeared much brighter than any snail. If AI software researchers could cajole that much performance out of such puny hardware, what will they not achieve with machines a million times as powerful? And, when this day arrives, what may lie in store for humankind? How we will fare in a world containing machines intellectually equal, if not superior, to most human beings is the subject of my next, and final, chapter.
NOTES

1. Jean-Pierre Changeux, L'Homme neuronal (Paris: Fayard, 1983).
2. Jacob T. Schwartz, "The New Connectionism: Developing Relationships Between Neuroscience and Artificial Intelligence," in Stephen R. Graubard, ed., The Artificial Intelligence Debate: False Starts, Real Foundations (Cambridge: MIT Press, 1988), p. 126.
3. Hans Moravec, Mind Children: The Future of Robot and Human Intelligence (Cambridge and London: Harvard University Press, 1988), pp. 57-60.
4. W. Daniel Hillis, "Intelligence as Emergent Behavior; or, The Songs of Eden," in Graubard, Artificial Intelligence Debate.
5. J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proceedings of the National Academy of Sciences, U.S.A., vol. 79, April 1982, pp. 2554-58.
6. Allen Newell and Herbert Simon, Human Problem Solving (Englewood Cliffs, N.J.: Prentice-Hall, 1972), p. 793. In this section of their book, Newell and Simon estimate the time it takes to store knowledge into long-term memory at five to ten seconds per chunk, but leave undefined the size of a chunk in bits. However, Newell's Soar program models the psychological mechanism of learning by chunk creation: Soar encodes chunks as if-then rules which occupy about 1000 bits of memory. Putting these two figures together yields a human learning rate of about 100 bits per second.
7. IEEE Scientific Supercomputer Subcommittee, "Supercomputer Hardware: An Update of the 1983 Report's Summary and Tables," IEEE Computer, November 1989, p. 64.
8. Moravec, Mind Children, pp. 174-77.
9. Ibid., p. 64.
10. Data extrapolated from IEEE Scientific Supercomputer Subcommittee, "The Computer Spectrum: A Perspective on the Evolution of Computing," IEEE Computer, November 1989, p. 57.
11. W. Penfield and H. Jasper, "Highest Level Seizures," Research Publications of the Association for Research and Mental Diseases, New York 26 (1947): 252-71; quoted in R. Penrose, The Emperor's New Mind (New York: Oxford University Press, 1989), p. 493.
12. J. O'Keefe, "Is Consciousness the Gateway to the Hippocampal Cognitive Map?" in D. A. Oakley, ed., Brain and Mind (London and New York: Methuen, 1985); quoted in Penrose, Emperor's New Mind, p. 495.
13. J. C. Eccles, The Understanding of the Brain (New York: McGraw-Hill, 1973); quoted in Penrose, Emperor's New Mind, p. 496.
14. Penrose, Emperor's New Mind, pp. 492-500. See also Francis Crick and Christof Koch, "The Problem of Consciousness," Scientific American, September 1992, pp. 152-59.
15. K. Eric Drexler, Engines of Creation: The Coming Era of Nanotechnology (New York: Anchor Books [Doubleday], 1986).
16. David Ritchie, The Binary Brain: Artificial Intelligence in the Age of Electronics (Boston: Little, Brown, 1984).
17. François Jacob, "Evolution and Tinkering," Science 196 (1977): 1161-66.
18. Interview with Marvin Minsky, 13 May 1991. All other personal quotes of Minsky's in this chapter are from this interview.
19. Interview with Carl Hewitt, 13 May 1991.
20. Interview with Patrick Winston, 15 May 1991. All other personal quotes of Winston's in this chapter are taken from this interview.
21. Interview with James McClelland, 20 May 1991.
22. Interview with David Waltz, 15 May 1991.
PART TWO
Knowledge Generation
5 Information Systems and the Stimulation of Creativity David Bawden
The nature of scientific creativity from the point of view of information provision is discussed, and the contributions of current information systems assessed. The changes necessary to enable "formal" library/information channels to play a fuller part in stimulating creativity are discussed. They include: representation of information for detection of analogies, patterns and exceptions; interdisciplinary information, and the role of reviews; creation of an information-rich environment, including peripheral material; extension of browsing capabilities, in both printed and computerized systems; direct involvement of information users; serendipitous use of literature; individually oriented information access; integration of information systems into formal creativity stimulation techniques. The implications of new information technology, particularly for the convergence of formal and informal communication channels, are considered. The necessity for research (theoretical and practical) on these topics is pointed out. A bibliography of 103 references attempts to draw together some of the scattered relevant literature.
INTRODUCTION

The literature of creativity, scientific and otherwise, is vast, but relatively little of it refers explicitly to information gathering and processing. Such mention as is to be found is often rather negative, suggesting that information provision
through formal channels is of little importance for creative advances; some writers have even gone so far as to suggest that it may actually be detrimental. Similarly, although library and information workers appear to assume implicitly that they make some contribution to the creativity and problem-solving abilities of their clients, there is little discussion of this aspect in the professional literature. This paper is intended as a modest attempt at redressing the imbalance, and at emphasising the actual and potential value of information services for these purposes. If the content and its treatment appear somewhat diffuse, then that is a reflection of the nature of the subject.

From David Bawden, "Information Systems and the Stimulation of Creativity," Volume 12, 1986. Reprinted with kind permission of Bowker-Saur, a division of Reed Elsevier (UK) Ltd. and the Institute of Information Scientists.

Brittain has suggested that studies of the way in which information users process information supplied to them, merging it with existing knowledge and utilizing it for creative problem solving, have a potential future relevance to information science.20 This is a much more ambitious objective than that which is aimed at here, but perhaps more limited background studies like this one will pave a way towards it.

The paper is divided into four sections:

Section 1: Background
Section 2: Nature of creativity
Section 3: Information provision for creativity
Section 4: Conclusions

After a short background section, the main points of the paper come in the two main sections (2 and 3). First (in Section 2), some aspects of scientific creativity, and formal creativity stimulation techniques per se, are discussed with a view to isolating factors of relevance to information provision. Second (in Section 3), some thoughts on the sort of information systems able to offer real support to creative activity are outlined. Some conclusions are attempted in Section 4. The bibliography is not comprehensive (it could not be so in this subject area without excessive length), but is intended to include sufficient material to enable those interested to obtain an entry into relevant aspects of a highly scattered literature.
1. Background

Before going further, it may be worthwhile commenting on the importance of the topic. Major conceptual advances, in both pure and applied research, are of course very rare. In pharmaceutical research, for example, many drugs are discovered which offer some incremental improvement in properties over those already available, but the number which are genuinely novel in concept (beta-blockers, H2 receptor antagonists, etc.) is very small. Such advances are crucial to the long-term success of all research effort, academic or industrial, and it would be unfortunate, to say the least, if information services had no contribution to make to them.

A few words on the scope of the paper are in order. The discussion will be centred around creativity in the context of research and development in science and technology, although reference will be made, where appropriate, to concepts from other fields of study. This restriction is unfortunate in some ways, particularly when one of the major themes of the discussion is the importance of cross-disciplinary contact, but it is necessary to keep the length of the paper, and the spread of the arguments, within bounds. Creativity will be considered as related
to the initial stages of concept formation and problem solving. In industrial research terms, this is the process of invention, rather than innovation (by which an invention is transformed into a marketable product) or diffusion (by which an invention or innovation spreads through a community of potential users). The role of information transfer in innovation (broadly defined) has been extensively studied and reported in past years, and will be referred to here when relevant; useful introductions to this literature are.46,58-60 Little attention has been paid to the specific concept of information for invention, however, though its importance has been frequently recognized. "Invention," says Hill, "is but one stage of an innovation, but what an important one! It is the creative step, a quality which distinguishes it from the other components of innovation, equally important though they may be."60

Creativity is of course a highly individual process, but creative individuals work, to some extent, within a social and organizational framework. It will therefore be necessary to consider information provision in aiding creativity at an organizational level, while not forgetting that a concentration on the information needs of the individual, and the matching of the style of information provision to the individual's thought and work patterns, is arguably more important here than in any other context.
2. Nature of creativity

This section has four subsections:

2.1. Definitions
2.2. Specific aspects of creativity
2.3. Formal creativity stimulation techniques
2.4. Summary

2.1. Definitions
Karl Popper has remarked that the subject of scientific creativity is vast, and also dangerous, because so much nonsense has been written about it.54 The literature on creativity is indeed vast, with very many books and articles written in the past twenty-five years. For example, the Social Science Citation Index showed over 1500 articles with the term "creativity" in the title from 1972 to mid-1986, while the US "Books in Print" database had over 300 items with the word "creativity" in the title. There are almost as many definitions of creativity as there are authors discussing the topic; the following are given as exemplifying the rather pragmatic approach used in this paper.

Creativity, in art or in science, consists in the ability to present information in a light which had not appeared before, but which nevertheless adds to a coherent pattern already publicly available.12
[Creativity involves] the relating of things or ideas which were previously unrelated [52].

[Creativity involves] perceiving significantly new patterns in bits of knowledge (data and theories) already available [54].

The essence of creativity in problem solving is the ability to break through constraints imposed by habit and tradition, so as to find “new” solutions to problems [53].

Originality often consists in finding connections or analogies between two or more objects or ideas not previously shown to have any bearing on each other [55].

It is worth noting the recurrence of two themes in these definitions: the relating of ideas generally held to be quite distinct, and the recognition, or creation, of patterns. Also, the information processing aspect of creativity is brought out clearly.

2.2. Specific aspects
Five topics seem to recur in discussions of scientific creativity. These are:
(i) the role of chance in discovery;
(ii) the great value of analogies;
(iii) the importance of careful examination of exceptions to, and inconsistencies within, the accepted scheme of things;
(iv) the damaging effect of commonly-held ideas which are in fact false;
(v) the importance of inter-disciplinary research.

These will now be examined briefly in turn.

(i) The importance of chance observations in scientific research has been emphasised, with numerous examples. Of course, as Medawar points out, there is an inherent bias in our view of the importance of chance, since we cannot know how frequently bad luck robs us of a discovery, or solution to a problem, close at hand [57]. The most thorough examination of the chance factor has been made by Austin, who distinguishes four kinds of chance of relevance to scientific creativity [1]:
Chance 1: “Blind luck,” unattributable to any actions or qualities of the recipient.
Chance 2: “Happy accidents,” when unconnected events impinge upon the matter in hand. Favoured by exposure to seemingly unconnected facts and experiences.
Chance 3: “Prepared mind,” the “Pasteur principle.” New relationships are perceived because of exposure to many facts related to the problem in hand.
Chance 4: Chance favouring the particular individual, because of distinctive knowledge, interests or lifestyle, seemingly far removed from the problem at hand.
Foskett notes that “An information service can at times become itself a stimulus to creativity by providing a user with what is pertinent even though it may seem irrelevant. Chance favours the prepared mind, and such serendipitous information is often the sort that leads to scientific revolutions.” The important point is that the more information which has been assimilated, the more likely it is that a fortuitous chance observation will be utilized. This is not necessarily restricted to closely-focused “relevant” information; on the contrary, the information may be apparently divorced from the problem at hand. This latter factor appears to be particularly important for major conceptual advances. One particularly relevant form of chance, of course, is the accidental finding of something of interest in the literature while looking for something entirely different. Ways of assisting serendipitous use of the literature will be considered later.

(ii) Reasoning by analogy plays an important part in scientific thought, and, as is the case with chance, much anecdotal material is available to illustrate the point. A fascinating example comes from the world of library science: Ranganathan derived the initial concept of his Colon Classification after seeing a demonstration of a toy erector set in Selfridge’s department store [19]. Another example comes from the work of Cowan in hypothesising an explanation for some drug-induced hallucinations, sparked off by the analogy between the simple geometric patterns seen in the early stages of hallucination, and similar patterns seen in fluid convection [36]. The initial concept of two histamine receptors, which provoked work leading to the discovery of the highly important H2 receptor antagonist drugs, was sparked by analogies with other substances acting at two sites with different effects [38]. Many other examples may be given. One particularly interesting point to note is that the language used for describing the concepts under consideration can greatly affect the readiness with which analogies can be detected. Formal languages, such as mathematical notation or chemical structure diagrams, are particularly powerful for displaying analogous situations. Natural language is likely to be poor, particularly when analogies are sought across disciplinary boundaries, because of the different specialized terminologies likely to be used. A good example of this is the difficulty of finding analogous legal precedent using “conventional” retrieval systems [102].

(iii) The saying “science makes progress by the careful study of anomalous results” has become a truism [39], and again, anecdotal support is available for the value of noting exceptions and inconsistencies in the prevailing world view. For example, Charles Darwin’s son noted his father’s “special instinct” for noting exceptions, even when not particularly striking [5]. This factor has been recognized as a very general source of inspiration for creativity, in both science and the humanities. “The creative writer,” says Foskett, “looks for flaws in the paradigm, for gaps and inconsistencies in our general picture of reality” [13], while Dainton suggests that important new experiments may be suggested by “going back to the libraries to find and to ponder observations seemingly at complete variance with existing ideas” [25].
The practicability of drawing particular attention to exceptions and inconsistencies in practical information systems will be discussed below.

(iv) The potential danger of false knowledge, particularly of the “it is known to be impossible” kind, in preventing experiments (actual and gedanken) being done, has been commented on by a number of authors. Klingsberg gives the example of the noble gases, for which it was “known” that closed shells of electrons made compound formation impossible [29]. Many thousands of experimentalists could have done the work leading to the discovery of their compounds, but would have regarded it as a waste of time. It has even been suggested that if Newton and Leibnitz had known that continuous functions are not necessarily differentiable, the calculus would not have been invented [34]. Beveridge points out the particular danger of false information in the literature, presenting a barrier to new ideas [55], but Garfield suggests that the danger is exaggerated, showing that scientists of Nobel class, presumably highly creative individuals, make as much use of the literature as their less exalted peers [29]. The nature of their literature use is of course unknown, and the true extent of inhibition of creativity by false ideas in the commonly accepted corpus of knowledge remains a matter for conjecture. Outdated knowledge is also, in a sense, false. The ability to forget, i.e. to shed past ideas and habits, has been held to be one of the most important aspects of innovation [49].

(v) The importance of interdisciplinary contact has been emphasised by several authors. Hill summarizes the issue in saying that “the benefit of cross-fertilization and the fact that the best site for the proliferation of new ideas is at the interface of two scientific disciplines are well known” [60]; the same point is made by Dainton, who notes the important discoveries which may stem from expertise in one area being applied elsewhere, giving as example Blackett’s work on palaeomagnetism following a career in cosmic ray physics [25]. Beveridge remarks that the greater our store of knowledge, the more likely it is that significant combinations will be discovered, the more so if there is a breadth of knowledge extending into related or even distant areas. Scientists who have made important original contributions have often had wide interests, or have changed subjects. The “outsider’s” view may be of most value in an older, well-worked field of study; in an active, rapidly developing field, the close-in relevance of expert knowledge is appropriate [55]. Interdisciplinary research may bring its own problems. Synge even suggests that those who make major advances in science are often regarded as interlopers or amateurs by their colleagues, because their experiences in entirely different fields of science, or of everyday life, cause them to see the problems in an entirely different way [42].
2.3. Formal creativity stimulation techniques

A number of formalized techniques for stimulation of creativity have been devised and promulgated, both for groups and individuals. They are described in
detail in numerous books and articles (see, for example, [15, 18, 51-54]). After attaining great popularity some years ago, their appeal has somewhat diminished, but a consideration of them may still be of relevance in identifying factors of particular importance for creativity. All that will be attempted here is a summary of four of the more widely known techniques, emphasising those points relevant to the topic of this paper.

(i) Synectics. Defined as “the joining together of different and apparently irrelevant elements” to solve a problem, synectics is based on the use of analogical thinking, to seek similarities between apparently different things. Very far from obvious links (“metaphors”) are deliberately sought, in order to test assumptions about possible solutions. Application of the technique may require the involvement of specialists in fields far removed from the problem under investigation.

(ii) Brainstorming. The aim here is to produce “checklists of ideas” which may have relevance to solving a problem. Unconventional ideas are sought, and the brainstorming technique relegates evaluation and criticism of the ideas produced to a later stage, in order to encourage maximum freedom in generating concepts.

(iii) Morphological analysis. This is a tool for ensuring that all information potentially relevant to the solution of a problem is systematically examined. All the information available is grouped into “attributes” of concepts, or things, and all combinations examined (a minimal sketch of this enumeration is given below). Even illogical combinations are considered, since they may suggest feasible alternatives. The exhaustive nature of the process reduces the risk of novel combinations (i.e. solutions) being overlooked. The systematic and objective synthesis of possible solutions avoids the constraints of preconceived ideas.

(iv) Lateral thinking. This term, devised by De Bono, describes a number of techniques for restructuring conceptual patterns and creating new ones, essentially looking at familiar data in new ways. This may involve using incorrect information as a step towards a correct solution, and may deliberately seek out irrelevant information. The most extreme form of this latter is the use of random stimulation. This may include exposure to ideas from a completely different field, by totally cross-disciplinary discussions, or by entirely random browsing in a library. Roe has described the university lecturer who “makes a practice of noting down the registration numbers of cars parked outside the library, and consulting the books shelved at those numbers, [rarely failing] to find something of interest” [103]. De Bono suggests the “formal” introduction of randomness by random selection of an article from a book or journal, or a word from a dictionary, as a stimulus to problem solving. Alternatively, one may use different aspects of relevant material as alternative entry points to begin problem solving.

In these techniques we find strong echoes of factors already noted (analogies, cross-disciplinary data), as well as specific and unconventional ways of dealing with information (systematic combination of concepts, temporary acceptance of apparently nonsensical, or downright false, information, formal randomness, etc.).
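To make the mechanics of morphological analysis concrete, the following sketch (in Python, a present-day notation rather than anything proposed in the creativity literature) enumerates every combination of problem attributes, deferring evaluation exactly as the technique prescribes. The attributes and their values are invented purely for illustration.

# A minimal sketch of morphological analysis: group the available
# information into "attributes," then examine every combination,
# including apparently illogical ones.  The attribute values below
# are invented examples.
from itertools import product

attributes = {
    "energy_source":  ["mains", "battery", "solar"],
    "sensing_method": ["optical", "acoustic", "chemical"],
    "output_form":    ["printed", "displayed", "spoken"],
}

# Systematically generate all combinations (the "morphological box").
names = list(attributes)
combinations = [dict(zip(names, values))
                for values in product(*attributes.values())]

# Evaluation is deliberately deferred: every combination is listed
# first, so that novel solutions are not overlooked.
for candidate in combinations:
    print(candidate)
print(len(combinations), "candidate solutions examined")

Even this toy example yields 27 candidates from nine attribute values, which illustrates both the exhaustiveness of the method and why, at realistic scale, some pruning or human judgement must eventually be applied.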
2.4. Summary
This brief survey of some aspects of scientific creativity, and creativity stimulation techniques, has identified a number of points clearly related to information provision. These are listed in Table 5-1. They include aspects related to the sort of information provided (e.g. interdisciplinary), the way in which it is provided (e.g. to identify analogies), and the way in which it is handled (e.g. temporary acceptance of false information). Some approaches to the inclusion of such requirements within information systems will be discussed in the next section.
TABLE 5-1 Some Aspects of Information Handling to Aid Creativity

Recognizing/creating patterns
Identifying analogies
Interdisciplinary contact
Dealing with false knowledge
Identifying exceptions/inconsistencies
Favouring chance
Random stimulation of ideas
Temporary suspension of evaluation
3. Information provision for creativity

This section has three subsections:

3.1. General aspects.
3.2. Type of information required.
3.3. Organization and retrieval of information.
3.1. General

As was noted in the introduction, there is surprisingly little to be found in the literature of innovation and creativity on the role of information, as understood by the library/information community. Hill points out the paradox that, while case studies show that information is the key to successful innovation, very few of the monographs and reports written on the topic have even an index entry for information [60]. He shows that in studies of the “conception phase” of industrial innovations, literature use is usually quoted in less than 10% of cases as the source of an idea. He suggests that this may be misleadingly low, for reasons including variability of literature usage between subject areas; unwillingness to give too much credit to prior art for fear of invalidating patents (one must suspect that an element of personal and professional self-aggrandisement may lead to the same effect in the academic world); the fact that a conceptual jump may result from
literature read and assimilated much earlier; and (among the more literature-conscious) the very routineness of information gathering, which may therefore go unremarked. This last point suggests an interesting analogy with a study of the UK public library system [61], which showed that a number of prominent politicians, labour leaders, etc., who were known from diaries, letters, and the like to have relied heavily on public libraries for self-education, consistently failed to mention this, later in life, as an influence on their development. It may simply be the fate of information providers to go unrecognized, or at least unacknowledged, through familiarity, by their most regular users. The fact is, however, that formal information services do not appear to be regarded as very useful for concept formation, the discouraging summary by Rothwell and Robertson being typical: “in ‘idea generation,’ of course, and barring accidental retrieval, library and SDI systems will play a relatively minor role” [58]. There are naturally countervailing views, most commonly from within the information community, of which the following are typical:
“Access to pertinent literature” is among the factors enhancing scientific creativity.

“Library and information services make their own unique contribution to creativity” [9].

“A continuing flow of information [is] essential both for creative advances and also for the consolidation which underpins them” [16].

“Information searching is a creative activity, is proactive as well as, or more than, reactive.”

These seem, however, to have the appearance of articles of faith, and few authors have attempted detailed discussion of specific information requirements for conceptual advance. Of those who have, most have concentrated on accidental, “serendipitous” retrieval, and have equated creative literature use with browsing; for example: “By definition, inspiration is not the end product of direct retrieval: it happens, it is happened upon, not least in the course of unstructured, creative browsing” [35]. Garfield makes the distinction between information recovery (routine) and information discovery (creative) [28]. He refutes the idea that use of the literature inhibits creative work, by giving evidence to show that creative scientists are avid literature users. Synge gives examples of the attitude that too much familiarity with the literature inhibits creativity [42]. Referring to his own experience, he suggests that immersion in too narrow an area may be harmful; wide reading, and personal contacts across a wider range of disciplines, are advantageous. Beveridge also gives examples to support this, quoting Byron: “To be perfectly original one should think much and read little, and this is impossible, for one must have read before one has learnt to think.” He suggests that overuse of the literature is a problem only to those with a wrong attitude of mind: preparation is essential, and reading may be a stimulus to originality via significant analogies and generalizations [55].
Kasperson, studying this particular topic as part of a wider survey of the psychological make-up of scientific workers [20, 24], found that scientists rated as “creative” differed little from their “uncreative” peers in their assessment of the value of published literature, but made much more extensive use of people as sources of information. These studies did not, however, examine the way in which the literature was used, and one may hypothesise that significant differences could be found by a detailed qualitative study. We shall now go on to consider first the kind of information necessary for creativity stimulation, and second the way it should best be organized and retrieved. There is inevitably some overlap between the two categories.
3.2. Type of information required

Four kinds of information in particular can be identified as especially relevant for aiding the creative process. These are:
(i) interdisciplinary information;
(ii) peripheral information;
(iii) speculative information;
(iv) exceptions and inconsistencies.

These will now be considered in turn.

(i) Interdisciplinary information. The value of interdisciplinary information for creativity has been emphasised above, and it is plain that this is one problem which must be tackled if information systems are to make any real contribution to this area. The effective transfer of information across disciplinary boundaries is never an easy task, even for “informal,” and hence inherently more flexible, information transfer mechanisms. The problems of communication between individuals in the field of “water studies,” for instance, where cloud physicists and microbiologists may share common concerns, have been well described [44]. Nonetheless, the rewards are potentially so great that the attempt is made, as is suggested by Kasperson’s work, which showed that creative scientists place particular value on interdisciplinary personal contacts [20, 24]. The problems of formal information services in interdisciplinary areas are, of course, correspondingly greater. One study, based admittedly on a rather small survey, showed that 14 out of 22 American scientists working in interdisciplinary fields rated the information services available to them as unsatisfactory, compared with only 1 out of 55 scientists working in an established discipline. The corresponding figures for a number of countries combined were 68 out of 146 and 26 out of 286. Thus, well over half the interdisciplinary workers were conscious of a lack in information provision, which, given the general apathy of the scientific community towards quality of information services, indicates serious failings indeed. The causes of these problems were readily defined [21, 43, 62]: scatter of publications in the primary literature, and hence in secondary services; abstracting and
indexing from a single disciplinary viewpoint; the jargon barrier between disciplines; etc. It is also clear from surveys that it is interdisciplinary journals whose subscriptions are discontinued first in times of financial stringency. The solutions are less easy to find. New journals for interdisciplinary areas are constantly being created, while a number of cross-disciplinary “mission oriented” secondary services have come into being recently: those covering the environment [63], energy [64], water resources [65], and coffee [66] are good examples. These latter are, of course, of great value to workers in the fields concerned, but their number will necessarily be constrained by economics. There is also the consideration that journals and secondary services may arise only when a meeting point of two disciplines is “hardening” into a sub-discipline in its own right, and thereby becoming as impenetrable to outsiders as the original disciplines. This is the process which Pachevsky denotes as the emergence of a “complex” scientific discipline from an “interdisciplinary” area [43].

What are particularly desirable are information services which aim to cover a field of study for the benefit of non-specialists. An interesting example is Oceanographic Literature Review [67], which includes (highly selectively) major papers of general interest, in fields sometimes far removed from oceanography. Many of its readers rely on it, not for their main area of work, but for fields of peripheral interest. This service is available only in printed form, and is primarily designed for browsing, of which more will be said later. The editor holds that “many problems are solved only by connecting the unconnected, and that selected literature reviews can put the pieces on the table,” and that publications like this will become increasingly common [67]; it certainly appears to be a promising approach. The journal Interdisciplinary Science Reviews is an interesting example of an attempt to review interdisciplinary topics in a way suitable for audiences of all subject backgrounds. The importance of reviews in information transfer generally has been stressed [68, 86], and they are of obvious and particular importance in conveying information across disciplinary boundaries. Many existing review journals emphasise their intention of serving the non-specialist reader, but this is easier said than done, particularly when the reader may have an entirely different subject orientation: Epstein reminds us that simple explanations of scientific theories may involve more work than the original development [69]. It seems that Garfield’s call for an upgrading in the prestige awarded to scientific reviewing, and the support provided for it, is particularly relevant here [70].

Taking a rather different tack, it seems an attractive proposition to make use of automated aids to computerized retrieval, so that the “jargon” of one field could be automatically converted to that of another, thereby removing one barrier to making use of cross-disciplinary material. There would remain the problem that the concepts chosen for abstracting and indexing might be inappropriate, and access to a machine-readable full-text database would be essential. Whether vocabulary translation techniques of sufficient generality and subtlety could, in any event, be devised is a matter for future research.
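The kind of vocabulary translation aid imagined here can be given a flavour by a small sketch, written in Python for concreteness. A hand-built table maps each field’s jargon onto a shared concept label, and a query is expanded with every discipline’s term for the same concept; all the terms below are invented examples, not drawn from any real thesaurus, and a workable system would of course need far subtler mappings.

# A minimal sketch of cross-disciplinary query expansion: map the
# "jargon" of each field onto shared concept labels, then expand a
# query with every discipline's term for the same concept.  The
# vocabulary below is invented for illustration.
concept_map = {
    "water droplet nucleation": "condensation",       # cloud physics
    "condensation": "condensation",
    "biofilm formation": "surface colonisation",      # microbiology
    "surface colonisation": "surface colonisation",
}

# Invert the table: concept -> all discipline-specific terms.
terms_for_concept = {}
for term, concept in concept_map.items():
    terms_for_concept.setdefault(concept, set()).add(term)

def expand_query(terms):
    """Replace each query term by all synonymous cross-field terms."""
    expanded = set()
    for term in terms:
        concept = concept_map.get(term, term)
        expanded |= terms_for_concept.get(concept, {term})
    return expanded

print(expand_query({"condensation"}))
# e.g. {'condensation', 'water droplet nucleation'}

A search expanded in this way would retrieve documents indexed under another discipline’s terminology, at the price of some loss of precision; for the purposes discussed in this paper, that may be no bad thing.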
Finally, we should note the importance of informal interdisciplinary contacts, inside and outside the scientist’s own organization. These will be considered later, with other “informal” information transfer systems.

(ii) Peripheral information. Another point to emerge in the consideration of creativity above was the value of seemingly peripheral, or even irrelevant, material, the “information around the fringe that will lead to new ideas,” as Martin puts it [71]. Dijkhuis makes essentially the same point: “the informational spark which is really efficient for the creativity of R and D tends to be of a general, blurred and not immediately applicable nature” [72]. One way of meeting this requirement is to limit deliberately the precision of information searching, agreeing with Martin that “in the areas of generating ideas, precision in searching is another block, another stop to people finding out ways to improve and extend . . . we must not teach innovators to search with precision but to search loosely in the area required” [71]. This amounts to encouraging browsing, which will be discussed later. An essential point, of course, is to ensure that sufficiently wide-ranging material is included in information sources (as in Oceanographic Literature Review), and that material, when included, is not categorized too closely. As long ago as 1914, Armstrong was railing against having any arrangement into sections in the Chemical Society’s abstracts, so that “we may have, in fact be forced to have, the whole of the subject matter under our noses” [73]. With the growth of literature in the interim, such heroic measures have become totally impracticable, but on a smaller scale this author can confirm that the highly miscellaneous “History, Education and Documentation” section of Chemical Abstracts produces intriguing surprises. The advantages of broad categorization for browsing will be considered later.

A general conclusion stemming from the recognition of the importance of peripheral, apparently irrelevant, material for creativity stimulation must be that information services within research organizations should provide as information-rich an environment as possible, deliberately going beyond the bounds of perceived relevance to their organization’s interests. The latter idea goes against much of the conventional wisdom of information service management, and would be particularly hard to argue for in the present economic climate, since it could involve provision of books, journals, and other information resources viewed as, at best, peripheral. The choice of such material would, of course, not be arbitrary, and would involve at least as much care and skill as conventional resource acquisition. Truly random information input, as recommended in the lateral thinking approach, must be left to controlled creativity stimulation exercises, which will be considered below.

(iii) Speculation. One of the keynotes of the formal creativity stimulation techniques discussed above was the importance attached to free speculation and idea generation. Though such speculations flow freely through informal communications channels, they are, on the whole, barred from formal information transfer systems. Dijkhuis regrets this, pointing out the tremendous freedom to express conjecture in the literature of seventeenth-century science, one of the most highly creative of periods [72].
The Elsevier journal Speculations in Science and Technology prints articles on speculative ideas which may not be supported by any currently accepted theoretical background or experimental evidence, together with comments, which may be either critical or supportive. The main value of such a journal is in stimulating discussion, starting from an unorthodox viewpoint, which may help to clarify the issues involved. Other journals do this to a lesser extent, by publishing “target” papers with comments and rejoinders, but the degree of speculation involved is much more closely controlled. An example of almost uncontrolled speculation is given by the Daedalus column in the weekly magazine New Scientist [87]. This column is devoted to (in the words of its author) “ingenious and novel concepts [which] fall in that uneasy no man’s land between the clearly possible and the clearly fantastic.” Despite the intentional “flight of fancy” nature of the column, experience has shown that almost 20% of its ideas are later seriously suggested as practical, patented, actually implemented, or shown to have been done already. They are also used by academic departments as studies for students. It is rather debatable to what extent such material belongs in the formal literature, with its slowness of publication and limited possibilities for interaction, rather than in the informal communication area. However, with increasing use of electronic means of communication, the boundaries between the two are likely to become blurred, and this will be discussed later.

(iv) Exceptions and inconsistencies. Some means of drawing attention to exceptions and inconsistencies in existing knowledge is also a desirable attribute for information systems aiding creativity. One of the most obvious cases is a gap in existing knowledge, and it is a truism that it is essentially impossible to search the literature for what is not known. One interesting venture has been the Encyclopaedias of Ignorance, in which the authors of the articles discuss unsolved problems in a particular subject area [23]. Because of the rapid progress in many fields, such information on ignorance would become dated very rapidly. Indeed, by the time a problem has been clearly formulated, it is probably close to solution (except in mathematics!). Periodical surveys of gaps in knowledge in particular subject areas, where the means of solving the problem are not apparent, seem a useful and feasible concept. Something of the sort has been suggested for unsolved problems in physics seeming to require new mathematical techniques. It may also be profitable to consider ideas and concepts introduced in the past and never followed up [27, 28], especially if some particular technique or concept was lacking at the time of the original introduction [60]. This can perhaps be particularly well done within a conference session. The identification and description of gaps, exceptions, and concepts not followed up, or now unfashionable, will naturally be among the most important components of reviews specifically aimed at aiding creativity.

The suggestion has been made that the literature record of relatively old work may be used as a direct stimulus to invention, in that information retrieval strategies may be designed to emulate a pattern of the scientific discovery process [50]. Discovery is regarded as the creative synthesis of previously developed abstract “chunks” of knowledge, generally over a long, and statistically predictable,
time-span. Information systems may be able to show at what point in the sequence a particular topic is situated, and identify particularly relevant concepts ripe for synthesis. Finally, they should be able to point to the implications of new discoveries in related areas. Considering the more recent literature, perhaps products such as the ISI Atlas of Science [74] will be able to indicate local areas (particularly at the junction of “conventional” topics) where progress is likely to be made rapidly.
3.3. Organization and retrieval of information

We will consider the most appropriate means of organizing and retrieving information under four broad headings:
(i) browsing;
(ii) retrieval techniques;
(iii) informal communication channels;
(iv) formal creativity stimulation techniques.

(i) Browsing. It is clear from what has been said above that creative use of literature very often amounts to browsing. Klingsberg regards the two as synonymous [29], while Pacey notes the importance for inspiration of “unstructured, creative browsing” [35], and Foskett speaks about the “random juxtaposition of ideas gained by purposeful browsing [which] may suddenly bring together apparently unconnected pieces of information to form a new coherent picture.” Actual accounts of successful browsing are rare, but Austin describes his discovery of a paper of major importance to his studies of hypertrophic neuritis thus: “[I had] considerable time to browse in the library. There, one day, meandering . . . in the Cumulated Index Medicus, I chanced upon the pivotal cross-reference.”

Although the importance of browsing is generally recognized, its nature appears to be little understood. At least three kinds of browsing have been recognized [75]: “purposive” browsing, the deliberate seeking for new information in a defined (albeit broad) subject area; “capricious” browsing, random examination of materials without a definite goal; and “exploratory” or “semi-purposive” browsing, in search, quite literally, of inspiration. Little is known of the success rate of this sort of information seeking, and still less of those factors which are likely to encourage it, and make it more productive. As mentioned above, selection of material over wide subject areas, and lack of detailed subject organization in secondary publications, are likely to assist, as is a varied presentation of information [76]. In a library context, a broad classification scheme can aid the browsing function [31]. It has been suggested that printed material is inherently more suitable for supporting browsing than computerized information systems, at least with searching software of the kind used at present. Surridge roundly declares that the idea of machine browsing is “sheer lunacy,” since “it is a fact that neither the machine application of ‘relevance’ nor the machine constraints on ‘browsing’ allow
comparison with the synonymous human process” [88]. Martin, however, argues for the value of interactive computer searching for unstructured information seeking, particularly because of the facility to follow up ideas and leads rapidly [71]. He recommends changes in online charging structures, and wider availability of abstracts online, to encourage browsing in computer systems. Looking further to the future, alternative means of computer searching, more akin to a browsing process, are being investigated [77-79, 97]. It seems clear that more study of the browsing process itself, and means by which it may be facilitated, is a necessity. This is particularly so in view of the changes likely to be brought about, for good or ill, by the advent of new information technology.

One point which has been reiterated by a number of writers is the importance of letting the people who are to make use of information (the “end users” in the jargon) have access themselves to information resources, rather than being compelled, or even allowed, to use an intermediary, if creative use of literature is to be encouraged. The reasons are clear enough: there is a potential comprehension gap when even a clearly visualized information need has to be expressed to another, and this must become even worse when the information requirement is ill-structured and imprecise. Further, use of an intermediary effectively removes all possibility of serendipitous browsing. This point was made by Klingsberg in the context of use of printed indexes [29], but has come more sharply into focus with the recent controversies about the appropriateness of “end-user access” to online information systems, with Martin, for example, strongly arguing for its necessity [71, 80]. It has been noticed that, in at least one case, the training of scientists to use online retrieval systems resulted in more online browsing than would be the case with an intermediary searcher [81]. In an academic context, the preference for scientists to do their own printed index searching, and to participate in computer searching, has been noted [17]. The more generalist information officer, lacking the “prepared mind,” might not spot a vital link, and would certainly fail to follow a creative information-gathering trail into apparently irrelevant areas. There appears to be little doubt that keeping the specialist in close touch with the most appropriate resources is one of the keynotes of creative use of information and data.

(ii) Retrieval techniques. We should now consider whether, in addition to the encouragement of the browsing type of search, there are any particularly desirable changes in, or improvements to, information retrieval software, which would enable computerized information systems to be better used in a way aiding “information discovery.” It has been suggested that the abandonment of controlled vocabularies, and replacement by free-text searching, will automatically lead to broader searches, and hence to highly desirable access to fringe information [71]. While free-text searching is always a valuable aid, particularly for non-information professionals, and will certainly be the method of choice in some types of search, the issue is by no means so clear-cut [89, 90]. For example, searching for a broad concept such as “toxicity and hazardous effects” is made far easier with the use of intellectually assigned concept codes and section headings than by reliance on free-text terms [82]. Again, one need only contemplate the hierarchy of MeSH (Medical Subject Heading)
terminology to appreciate the advantages of using a pre-established hierarchical classification over its constituent terms and their variations to be found in free-text. Of course, free-text searching has a great part to play in creative literature use, particularly to get at “unconventional” concepts not included in indexing. Plevier gives the example of a search for chemical apparatus for which a main feature was an oval shape: not a concept likely to be indexed [83]. What is required, for information discovery as much as for information recovery, is a means of combining the best features of controlled and uncontrolled terminology, and doing so in a way easily accessible to the non-expert user. Perhaps this need can be met by a computerized searching aid, able to give access equally to free-text terms or to broader concepts.

Another profitable line of research is the design of search software which does not use conventional Boolean logic, but rather retrieves items on the basis of a quantitative measure of similarity between these items and the “search query.” After what has been said above about the importance of analogies and loosely related material for creativity stimulation, it will be clear that this is a particularly promising approach. Several such techniques are under investigation [84, 85], and it is to be hoped that some practical implementation will soon be possible.
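What such a best-match search might look like can be suggested by a very small sketch, in Python, using the familiar term-vector and cosine-coefficient approach; this is one common similarity measure, not necessarily the one used in the systems cited above, and the documents are invented.

# A minimal sketch of best-match retrieval: rank documents by a
# quantitative similarity to the query instead of Boolean matching,
# so that loosely related (potentially analogous) items surface.
import math
from collections import Counter

documents = {                                   # invented examples
    "d1": "geometric patterns in visual hallucination",
    "d2": "convection patterns in heated fluids",
    "d3": "budget planning for libraries",
}

def vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = vector("patterns in fluid convection")
for d in sorted(documents,
                key=lambda d: cosine(query, vector(documents[d])),
                reverse=True):
    print(d, round(cosine(query, vector(documents[d])), 3))

Note that the hallucination and convection documents are both ranked above the irrelevant one, even though neither matches the query exactly; it is precisely this tolerance of partial, analogy-like matches that strict Boolean searching lacks.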
In addition to better ways of using information systems, it may be that alternative means of representing information will be needed, in order to bring out “hidden” relationships, patterns, analogies, etc. One such means is the description, or mapping, of an area of knowledge by some very detailed “indexing language,” such as the proposals for the use of relational indexing [92]. Another possibility is the encoding of information in the form of logical relations between concepts, as is done in expert systems, using computer languages such as PROLOG [93]. This may allow for checking the consequences of omitting, or amending, particular pieces of information, and for testing for consistency and exception conditions. These are possibilities for the future, but experience has shown clearly the advantages of using formalized representations of knowledge in information systems: chemical structure representations and mathematical relations are the most obvious examples. These representations make searching for analogies (the importance of which has been emphasised above) particularly productive. The choice of representation is of course crucial: it is necessary, for instance, in the case of databases of chemical structures to go beyond the chemical structure diagram and consider conformational, electronic, or physicochemical factors, to establish valid analogies in some cases. The establishment of novel means of representing information is likely to prove a major requirement for information systems to support creativity.

The display of information is also a topic of vital importance (though one which cannot be dealt with here in any depth). The use of visual imagery and symbols has a significant place in the creative process, and the ability to present information in a form other than the printed word is likely to have a considerable impact on the extent to which information systems can contribute to the creativity of their users. The nature of the graphical or symbolic display used will naturally depend on the nature of the information being portrayed. Examples would be multivariate display techniques for information which can be represented in quantitative or semi-quantitative form [98], and molecular graphics techniques for displaying molecular structural information [99].

One point which cannot be emphasised too strongly is that the highly personal nature of the creative process must be matched by a personalized approach to information provision. In view of what has been said earlier about the importance of direct involvement of the user in information seeking, it seems likely that this need will best be met by the provision of information resources accessible to their users in a variety of ways (computerized/printed, browsable/searchable, varying levels of detail in indexing, ability to include/exclude categories of material, etc.). It may also be advantageous, in considering creativity on an organizational scale, to pay particular attention to the needs of “information gatekeepers” [94, 95], who are likely to play an important role in promoting the creative use of information.

(iii) Informal communication channels. Up to this point, we have considered “formal” information systems, but the literature of creativity referred to above indicates that the “unofficial” communication channels (personal contacts, attendance at meetings, etc.) are of great importance, especially where they cut across disciplinary and organizational boundaries. The contribution of the information professional here will be to provide the sort of tools already available (indexes of expertise, referral services, meetings calendars, etc.), and indeed to expand their scope where possible, and to try to maximize receptiveness within their organizations to the importance of these informal channels. Beyond this, it is possible that new information technology will have the effect of blurring the boundaries between formal and informal communication. Cross suggests that a new generation of “lateral” information systems (electronic mail, teleconferencing, etc.) will prove more important in the long term than “vertical” systems (databases, question-answering programs) [45]. This may be the case, especially for those systems which cater specifically for interdisciplinary information, as with “information routeing groups” [100]. This could have considerable consequences for the perceived utility of information systems as an aid to creativity, but considerably more experience will be needed before the true effects of the convergence of information technologies can be adequately assessed.

(iv) Formal creativity stimulation techniques. Finally, we should note the potential for the retrieval and presentation of information as an integral part of formal creativity stimulation techniques. Virtually all of these techniques rely on some “information” input, though not necessarily in the library/information sense of the word. Two novel possibilities present themselves. First, factual or conceptual databases could be used directly, as an integrated part of the practice of one of the techniques described earlier. This could be particularly worthwhile if carried out in conjunction with one of the computerized aids to “ideas processing” [96]. Second, the results of a “conventional” literature search, carried out on behalf of one or more end-users, could be modified, as part of a controlled procedure, so as to attempt to aid creative thought. For example, significant “negative” information (e.g. reports of the impossibility of a particular procedure) could be omitted, or modified, in the input to a planning meeting, and revealed in full at a later stage. Apparently irrelevant information could also be included in material for discussion. As with some of the ideas mentioned above, this sort of thing runs counter to much accepted professional practice, but its value, within a controlled situation, should not be dismissed out of hand.
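As a deliberately simple-minded illustration of such a controlled procedure, the sketch below (in Python; the flagging scheme and all the items are invented) withholds results marked as “negative” from the first round of discussion and mixes in a randomly chosen peripheral item as a lateral-thinking stimulus, revealing the withheld material afterwards.

# A minimal sketch of modifying a literature search output for a
# creativity session: withhold "negative" reports until a later
# stage, and add a random off-topic item as lateral stimulation.
import random

search_results = [                              # invented examples
    {"title": "Synthesis of compound X", "negative": False},
    {"title": "Compound X shown to be unstable", "negative": True},
    {"title": "New route to compound Y", "negative": False},
]
peripheral_pool = [                             # invented examples
    {"title": "Patterns in mollusc shell pigmentation"},
    {"title": "Scheduling algorithms for tramways"},
]

def briefing(results, pool, n_random=1, seed=None):
    rng = random.Random(seed)
    kept = [r for r in results if not r["negative"]]
    withheld = [r for r in results if r["negative"]]
    stimuli = rng.sample(pool, n_random)
    return kept + stimuli, withheld             # withheld: reveal later

first_round, reveal_later = briefing(search_results, peripheral_pool, seed=1)
for item in first_round:
    print(item["title"])

Everything here, from the “negative” flag to the peripheral pool, would in practice be supplied by the information professional running the exercise; the point is only that the manipulation is explicit, controlled, and reversible.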
4. Conclusions

Information provision and processing is fundamental to the creative process. Formal library/information channels play a more important role than they are often given credit for, but improvements involving, to some extent, a change in basic precepts are necessary. In general, information systems must adapt so as to meet the criteria of Table 5-1. Several particularly important themes have emerged from this discussion, and these are briefly noted in Table 5-2.

One is the need for improved representations of data, information, and knowledge, so as to aid the recognition, retrieval, and display of analogies, patterns, and anomalies in existing knowledge, perhaps the most fundamental contribution to creativity. This has implications for information professionals at all points in the information transfer chain, and particularly for system designers and operators. It will necessitate an emphasis on analysis and repackaging of information to suit particular needs. The importance of the creation of an information-rich environment, at an organizational, rather than simply an information systems, level is clear. This cannot be achieved by the information manager and department alone, requiring the active support of the organization’s higher management. One particular aspect of this is the need for inclusion of material of a peripheral or speculative nature. The value of interdisciplinary information cannot be overstated, and the provision of reviews, in various guises, is arguably the single most important means of achieving this. The little-understood browsing function is the single most important means of creative use of the literature, whether in printed or computerized form.
TABLE 5-2 Some Aspects of Information Systems to Aid Creativity

Overall information-rich environment
Inclusion of peripheral and speculative material
Provision of interdisciplinary information
Representation of information to bring out analogies, patterns, exceptions, etc.
Emphasis on browsing facilities
Direct involvement of information user
Encouragement of informal channels
Information provision geared to individual preferences/requirements
Appropriate use of new information technologies
The direct involvement of the information user appears essential, in most cases, particularly to allow for serendipitous use of literature, and it is vital that access to information is sufficiently flexible to suit the individual requirements of the users. Finally, the inevitable, and considerable, impacts of new information technology, particularly with respect to the convergence of formal and informal communications channels, must be kept in mind. A considerable amount of research into these topics, both academic and practitioner-oriented, will be necessary before information systems are able to play their full part in aiding their users’ creativity.
REFERENCES

1. J.H. Austin, Chase, Chance and Creativity (Columbia University Press, 1978).
2. J.H. Austin [1, p.158].
3. J.H. Austin [1, p.154].
4. J.H. Austin [1, p.150].
5. Quoted by J.H. Austin [1, p.110].
6. J.H. Austin [1, p.109].
7. J.H. Austin [1, p.9].
8. D.J. Foskett, Pathways for Communication (Bingley, London, 1983).
9. D.J. Foskett [8, p.34].
10. D.J. Foskett [8, p.83].
11. D.J. Foskett [8, p.53].
12. D.J. Foskett [8, p.42].
13. D.J. Foskett [8, p.51].
14. D.J. Foskett [8, p.80].
15. M.I. Stein, Stimulating Creativity. 2. Group Procedures (Academic Press, New York, 1975).
16. E.J.W. Barrington, The importance of information for creative biological research, Information Services and Use 3 (1983) 23-33.
17. N. Higham, Information in national planning - a developing role, Aslib Procs. 36 (1984) 136-143.
18. J.R. Hayes, The Complete Problem Solver (Franklin Institute Press, Philadelphia, PA, 1982).
19. E. Garfield, A tribute to S.R. Ranganathan, Current Contents 6 (1984) 3-10.
20. C.J. Kasperson, Psychology of the scientist: XXXVII. Scientific creativity: a relationship with information channels, Psychological Reports 42 (1978) 691-694.
21. T. Gardner and M.L. Goodyear, The inadequacy of interdisciplinary subject retrieval, Special Libraries (1977) 193-197.
22. J. Doucet, Information and creativity, Impact of Science on Society (1979) 394-395.
23. J. Duncan and M. Weston-Smith, eds., The Encyclopaedia of Medical Ignorance (Pergamon Press, Oxford, 1984).
24. C.J. Kasperson, An analysis of the relationship between information sources and creativity in scientists and engineers, Human Communication Research 4 (1978) 113-119.
25. F. Dainton, Choices and consolation, New Scientist (1980) 112.
26. E. de Bono, Lateral thinking and indexing, The Indexer 11 (1978) 61-63.
27. W.D. Loomis, Citation classic, Current Contents (37) (1978) 17.
28. E. Garfield, Current comments, Current Contents (1970) 117.
29. R. O'Dette, ed., Literature and the creative process - help or hindrance, J. Chem. Doc. 9 (1969) 183-190.
30. J.M. Brittain, What are the distinctive characteristics of information science, in: L. Harbo and L. Kajborg, eds., Theory and Application of Information Research (Mansell, London, 1980).
31. D. Matthews, Infinite hospitality of London Library's categories, Lib. Assoc. Rec. 85 (1983) 227-228.
32. M. Kline, Mathematics, the Loss of Certainty (Oxford University Press, New York, 1980).
33. M. Kline [32, p.300].
34. M. Kline [32, p.177].
35. P. Pacey, Art libraries, in: L.J. Taylor, ed., British Librarianship and Information Work 1976-80 (Library Association, London, 1982).
36. Anon., The mathematics of hallucination, New Scientist (1983) 367.
37. H.B. Barlow, Intelligence, guesswork, language, Nature 304 (1983) 207-210.
38. J.H. Shelley, Creativity in drug research II, Trends in Pharmacological Sciences (1983) 8-10.
39. J.P. Hudson, Who said that? New Scientist (1981) 455.
40. D.T. Saggers, The application of the computer to a pesticide screening programme, Pesticide Science 5 (1974) 341-352.
41. C. Hansch and E.J. Lien, Structure-activity relationships in antifungal agents. A survey, Journal of Medicinal Chemistry 14 (8) (1971) 653-670.
42. R.L.M. Synge, Wasteful research in pure and applied science, Interdisciplinary Science Reviews 4 (1979) 98-105.
43. T. Pachevsky, Problems of information services with respect to integration of the sciences, J.A.S.I.S. (1982) 115-123.
44. F. Franks, Polywater (MIT Press, Cambridge, MA, 1981).
45. T.B. Cross, Vertical versus lateral thinking systems, Online 7 (1983) 6-7.
46. J. Martyn, The role of communication in technological innovation, in: The Problem of Optimization of User Benefit in Scientific and Technological Information Transfer, AGARD Conference Proceedings No. 179, 1975.
47. A. Smith, Information and creativity, ASIS meeting, 1981.
48. V.V. Nalimov, Faces of Science (ISI Press, Philadelphia, PA, 1981).
49. I. Maddock, Why industry must learn to forget, New Scientist (1982) 368-369.
50. G. Harmon, Information retrieval based on patterns of scientific discovery, Proceedings of the 41st ASIS Annual Meeting, 1978.
51. R.L. Ackoff and E. Vergara, Creativity in problem solving and planning: a review, European Journal of Operational Research 7 (1981) 1-13.
52. J.G. Rawlinson, Creative Thinking and Brainstorming (Gower, London, 1981).
53. E. de Bono, Lateral Thinking: A Textbook of Creativity (Ward Lock, London, 1970).
54. W.I.B. Beveridge, Seeds of Discovery (Heinemann, London, 1980).
55. W.I.B. Beveridge, The Art of Scientific Investigation (Heinemann, London, 1950).
56. R.S. Mansfield and T.V. Buse, The Psychology of Creativity and Discovery (Nelson-Hall, Chicago, 1981).
57. P.B. Medawar, Beveridge on discovery, Nature 288 (1980) 17-18.
58. R. Rothwell and A.B. Robertson, The role of communication in technological innovation, Research Policy 2 (1973) 204-225.
59. B.T. Stern, ed., Information and Innovation (North-Holland, Amsterdam, 1982).
60. M.W. Hill, Information for innovation: a view from the UK, in [59].
61. P. Sykes, The Public Library in Perspective (Bingley, London, 1979).
62. L.G. Donaruma, Some problems encountered in interdisciplinary searches of the polymer literature, J. Chem. Inf. Comput. Sci. 19 (1979) 68-70.
63. B. Miller, Overlap among environmental databases, Online Review 5 (1981) 403-404.
64. B. Miller, Energy information online, Online 2 (1978) 27-30.
65. J.R. Lendtke and R.D. Walker, Water resources abstracts database, Database 3 (1980) 11-17.
66. C.P.R. Dubois, The COFFEELINE database, Quart. Bull. Int. Assoc. Agric. Lib. Doc. 28 (1983) 1-5.
67. F.C. Shepherd, The various roles of secondary publications (some thoughts), International Journal of Micrographics and Video Technology 2 (1983) 101-104.
68. A.M. Woodward, The roles of reviews in information transfer, J. Amer. Soc. Inf. Sci. 28 (1977) 175-180.
69. L. Epstein, Understanding or belief, New Scientist (16 August 1984) 44.
70. E. Garfield, Proposal for a new profession - scientific reviewer, Current Contents (14) (1977) 5-8.
71. P. Martin, The innovative process and the online information channel, in [59].
72. W. Dijkhuis, Innovation: its evolution and present state, in [59].
73. H.E. Armstrong, A move towards scientific socialism, 1914; reprinted in: Journal of Information Science 7 (1983) 195-202.
74. E. Garfield, Introducing the ISI Atlas of Science, Current Contents (42) (1981) 5-13.
75. J.E. Vickery, A note in defence of browsing, BLL Review 5 (1977) 110.
76. T. Norton, Secondary publications have a future in libraries, Aslib Procs. 36 (1984) 317-323.
77. N.J. Belkin, R.N. Oddy and H.M. Brooks, ASK for information retrieval: part 1, Background and theory, J. Doc. 38 (1982) 61-71.
78. A.J. Paley and M.S. Fox, Browsing through databases, in: R.N. Oddy et al., eds., Information Retrieval Research (Butterworths, London, 1981).
79. C.R. Hildreth, Online browsing support capabilities, Proc. ASIS Annual Meeting 19 (1982) 127-132.
80. P. Martin [59, p.172].
81. J.S. Haines, Experience in training end-user searchers, Online 6 (1982) 14-20.
82. D. Bawden and A.M. Brock, Chemical toxicology searching, a collaborative evaluation, Journal of Information Science 5 (1982) 3-18.
83. J.W. Plevier [59, p.176].
84. S.A. Perry and P. Willett, A review of the use of inverted files for best match searching in information retrieval systems, Journal of Information Science 6 (1983) 59-66.
85. V. Stibic, Influence of unlimited ranking on practical online search strategy, Online Review 4 (1980) 273-279.
86. W. Runge, The role of reviews in science (German), Nachr. f. Dokum. 34 (1983) 72-78.
87. D.E.H. Jones, The Inventions of Daedalus (Freeman, Oxford, 1982).
88. R. Surridge, Relevance and reality, Lib. Assoc. Rec. 86 (1984) 407.
89. P. Duckitt, The value of controlled indexing systems in online full text databases, Proc. 5th International Online Information Meeting, London, 1981 (Learned Information, Oxford, 1981) 447-453.
90. C.P.R. Dubois, The use of thesauri in online retrieval, Journal of Information Science 8 (1984) 63-66.
91. M. Clarke, Many guises in Bristol, Nature 311 (1984) 605.
92. B.C. Brookes, Measurement in information space: objective and subjective metrical space, J. Amer. Soc. Inf. Sci. (1980) 248-255.
93. J.L. Alty and M.J. Coombs, Expert Systems: Concepts and Examples (Wiley, Chichester, 1984).
94. L.A. Myers, Information systems in research and development: the technological gatekeeper reconsidered, R&D Management 13 (1983) 199-206.
95. M.L. Tushman, Special boundary roles in the innovation process, Administrative Science Quarterly 22 (1977) 587-605.
96. See for example C. Eden, A system to help people think about problems, Computer Weekly (876) (1983) 26.
97. P.L. Noerr and K.T. Bivins Noerr, Browse and navigate: an advance in database access methods, Information Processing and Management 21 (1985) 205-213.
98. B. Everitt, Graphical Techniques for Multivariate Data (Heinemann, London, 1978).
99. P. Quarendon, C.B. Naylor and W.G. Richards, Display of quantum-mechanical properties on Van der Waals surfaces, Journal of Molecular Graphics 2 (1984) 4-7.
100. D. Andrews, Information routing groups - towards the global superbrain, Journal of Information Technology 1 (1986) 22-35.
101. R. Slack and A.W. Nineham, Medical and Veterinary Chemicals, Vol. 1 (Pergamon Press, Oxford/New York, 1968) 32.
102. R. Reusch, The search for analogous legal authority: how to find it when you don't know what you're looking for, Legal Reference Services Quarterly 4 (1985) 33-38.
103. R.J. Roe, On line but out of mind, Guardian (22 July 1985).
6
The Light of Discovery
George Johnson
On the July 4 weekend of 1981, while many Americans were preoccupied with barbecues and fireworks displays, followers of an immensely complex, futuristic war game called Traveller gathered in San Mateo, California, to pick a national champion. Guided by hundreds of pages of design rules and equipment specifications, each player calculated how to build a fleet of ships that would defeat all comers without exceeding an imaginary defense budget of one trillion credits.

Intellectually, Traveller is an extremely challenging game. To design just one vessel, a player must take into account some fifty factors: how thick to make the armor, how much fuel to carry, what type of weapons, engines, and computer guidance system to use. Each decision is a trade-off: a powerful engine will make a ship faster, but it might require carrying more fuel; increased armor provides protection but adds weight and reduces maneuverability. Since a fleet may have as many as a hundred ships-exactly how many is another question to decide-the number of ways that variables can be juxtaposed is overwhelming, even for a digital computer. Mechanically generating and testing every possible fleet configuration would, of course, eventually produce the winner, but the process would take almost forever and most of the time would be spent blindly considering designs that were nonsense. Exploring Traveller's vast search space requires the ability to discover and learn from experience, developing heuristics about which paths of inquiry are most likely to yield reasonable solutions.

In 1981, Eurisko, a computer program that seems to display the rudiments of these skills, won the Traveller tournament hands down, becoming the top-ranked player in the United States and an honorary admiral in the Traveller Navy. Eurisko had designed its fleet according to principles it discovered itself-with some help from its inventor, a young, mustachioed researcher named Douglas B. Lenat, who was then employed as an assistant professor in Stanford University's computer-science department.

From Machinery of the Mind by George Johnson. Copyright © 1987 by George Johnson. Reprinted by permission of Times Books, a division of Random House, Inc.
Lenat's playful sense of humor has earned him a reputation as one of the most amusing lecturers and writers in the AI field. In journals noted for dry, unadorned, sometimes unreadable prose, his papers sparkle with whimsy and wit. At the AAAI conference in 1984, he was in fine form, illustrating one of his talks with pictures cast by an overhead projector onto a screen at the front of the auditorium. As Lenat pointed out various details in the diagram, the giant shadow of a finger-presumably his own-moved across the screen. But, unbeknownst to his audience, he was pointing not with his own hand but with an artificial one-a prop for a practical joke. Having finished with his demonstration, Lenat walked from the projector to the podium, leaving behind the five-fingered prosthesis, which continued to cast its silhouette as though its owner had suffered amputation at the wrist. Without dropping a beat, Lenat continued his lecture as, little by little, the audience noticed something was amiss and laughter rippled across the room.

Programming a computer to become Traveller champion was just the sort of intellectual lark he enjoyed. "I never did actually play Traveller by hand," Lenat said three years after his program's victory. "I don't think I even watched anybody play it. I simply talked to people about it and then had the program go off and design a fleet. When I went into the tournament that was the first time that I had ever played the game." Eurisko's fleet was so obviously superior to those of its human opponents that most of them surrendered after the first few minutes of battle; one resigned without firing a shot.

The experiments of Winston and Schank demonstrate how a machine might learn about cups, football, or Chinese cooking, spinning facts into webs of meaning inside its software-and-silicon brain. But, so far, all the programs have been able to learn are things their mentors deliberately teach them. With Eurisko, Lenat hoped to take the learning process a step further. What he had in mind was a grand, enormously complex program that would strike out on its own, like an explorer in an unknown realm, discovering things that humans didn't know about.

What Lenat envisioned was not simply a modern-day version of the cyberneticists' old dream, the child machine, whose mind would start out empty-a blank slate to be written full of knowledge about life. Lenat realized that his program would have to be born already knowing a great number of things. It would need to know rules about how to make discoveries, and rules about how to discover new rules about making discoveries. And it would need a pool of elementary concepts, basic ideas to play around with. Only then could it be expected to discover something heretofore unknown.

Lenat liked to compare the discovery process with genetic evolution. To explore a field of knowledge, Eurisko started with its set of basic concepts, given to it by Lenat. Then it modified and combined them into new, more complex ideas. As structures developed, the most useful and interesting ones-judged according to standards encoded in the program-survived. The structures Lenat wanted to see evolve were Traveller fleets. First, of course, he had to tell the computer what Traveller was. He did this by typing in
descriptions of 146 Traveller concepts, some of them as basic as Acceleration, Agility, Weapon, Damage, and even Game Playing and Game. Others were more specific: Beam Laser, Meson Gun, Meson Screen, and Computer Radiation Damage.

A Eurisko concept can be thought of as a box, or "frame," containing "slots" filled with information describing it. For example, the Is-A slot in the box representing Energy Gun indicates that it is a Defensive Weapon Type and an Offensive Weapon Type-and a Physical Game Object as well. These concepts are, in turn, described by other boxes, each of which contains its own set of slots. Another slot tells Eurisko that information on Energy Gun's firing range will be found in a frame called Energy Gun Attack Info. A My Creator slot records the name of the person who originally typed in a concept, or (if it is one that Eurisko discovered) the heuristic that was used to synthesize it. Everything that Eurisko knows is encoded this way. Even a simple concept like Is-A must be described by a frame, which (recursively) contains its own Is-A slot. "Is-A is a slot." In programming nothing can be taken for granted.

While some frames hold information about basic Traveller concepts, others are used to describe heuristics, rules about how to make discoveries. These heuristics (initially supplied by Lenat) advise Eurisko on fruitful ways to test its concepts and mutate them to form new ones. Many of the heuristics are obvious to us, but they must be spelled out to the computer: "If a concept proves valuable in designing fleets, then raise its worth rating, and vice versa." "If a concept proves occasionally useful but usually worthless, then try creating a new, more specialized version of it." Then it can be applied only to situations in which it's likely to be helpful.

With a network of this kind of knowledge programmed into its memory, Eurisko was ready to begin exploring the Traveller domain. It did this by designing ships and simulating battles, each of which took between two and thirty minutes. After each of these internal altercations, the program examined the results and made adjustments to its fleet. Then it tested this new configuration in another simulated battle, and made adjustments again. As this evolutionary process continued, an ever stronger Traveller fleet evolved. For example, in the course of its ongoing war, Eurisko discovered how easy it was to provide ships with enough armor to protect them against energy guns. Thus the value in the Worth slot of Energy Gun, originally set at 500, was eventually lowered to 100. Weapons that proved more valuable would increase in worth, toward a maximum value of 1,000. The values in the Worth slots of Eurisko's heuristics also would rise or fall, depending on how useful they proved in winning battles.

"At first," Lenat later wrote, "mutations [to the fleets] were random. Soon, patterns were perceived: more ships were better; more armor was better; smaller ships were better; etc. Gradually, as each fleet beat the previous one . . . its 'lessons' were abstracted into new, very specific heuristics." By analyzing the differences between winners and losers, the program inferred new rules for good ship design. Then these new heuristics were added to the concept pool to vie against others in the evolutionary contest.
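In outline, this bookkeeping is easy to picture: a frame is just a named bundle of slots, and a Worth adjustment is clamped arithmetic on one of them. Here is a minimal sketch in Python rather than the Lisp Eurisko was actually written in; apart from the slot names quoted above, every detail (the helper function, the numbers, the clamping floor) is invented for illustration:

    frames = {
        "Energy Gun": {
            "Is-A": ["Defensive Weapon Type", "Offensive Weapon Type",
                     "Physical Game Object"],
            "Attack Info": "Energy Gun Attack Info",  # firing-range data lives in that frame
            "My Creator": "Lenat",  # or the heuristic that synthesized the concept
            "Worth": 500,
        },
        # Even Is-A is described by a frame: "Is-A is a slot."
        "Is-A": {"Is-A": ["Slot"]},
    }

    def adjust_worth(concept, delta):
        # Raise or lower a concept's Worth; the text gives a 1,000 ceiling,
        # and a floor of 0 is assumed here for the sketch.
        w = frames[concept].setdefault("Worth", 500) + delta
        frames[concept]["Worth"] = max(0, min(1000, w))

    adjust_worth("Energy Gun", -400)  # armor proved cheap, so the gun fell to 100

Because slot values are themselves names of other frames, any piece of the knowledge base, including a heuristic, can be inspected and mutated by the same machinery.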
Eurisko was creating concepts on its own. It was distilling thousands of experiences into the judgmental, almost intuitive, knowledge that constitutes expertise-rules that can't always be proved logically, that can't guarantee a correct answer, but that are reliable guides to the way the world works, a means of cutting through complexity and chaos.

In one case, Eurisko took the heuristic that advised it to specialize concepts that are only occasionally useful and used it to specialize itself. The result was a number of new, more useful heuristics: "Don't bother to specialize a concept unless it has proven itself useful at least three times, or extraordinarily useful at least once"; "When specializing a concept, don't narrow it too much-make sure the new version can still do all the good things the old one did."

Of course, within Eurisko, the heuristics weren't expressed in such chatty, informal prose. They were procedures, lumps of Lisp code. But they worked as though they were words of wisdom, aphorisms, bits of good advice.

Each night, Lenat would leave Eurisko running on a Lisp machine in his office, returning in the morning to examine the results, occasionally helping the process by recognizing the discoveries that seemed most fruitful and weeding out mistakes. "Thus the final crediting of the win should be about 60/40% Lenat/Eurisko," he wrote, "though the significant point here is that neither party could have won alone. The program came up with all the innovative designs and design rules . . . and recognized the significance of most of these. It was a human observer, however (the author), who appreciated the rest, and who occasionally noticed errors or flaws in the synthesized design rules which would have wasted inordinate amounts of time before being corrected by Eurisko."
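Schematically, those overnight runs amount to a mutate-simulate-adjust loop. The sketch below only mirrors the loop's shape as described; the mutation and battle functions are crude stand-ins, not Lenat's simulation:

    def evolve(champion, concepts, mutate, battle, generations=100):
        # Keep whichever fleet wins each simulated battle, and credit or
        # debit the Worth of the concepts each design relied on.
        for _ in range(generations):
            challenger = mutate(champion)
            winner, loser = ((challenger, champion) if battle(challenger, champion)
                             else (champion, challenger))
            for c in winner["uses"]:
                concepts[c]["Worth"] = min(1000, concepts[c]["Worth"] + 25)
            for c in loser["uses"]:
                concepts[c]["Worth"] = max(0, concepts[c]["Worth"] - 25)
            champion = winner
        return champion

    concepts = {"Energy Gun": {"Worth": 500}, "Heavy Armor": {"Worth": 500}}
    mutate = lambda f: {"uses": ["Heavy Armor" if f["uses"] == ["Energy Gun"]
                                 else "Energy Gun"]}
    battle = lambda a, b: "Heavy Armor" in a["uses"]  # stand-in: armor always wins
    print(evolve({"uses": ["Energy Gun"]}, concepts, mutate, battle), concepts)

Run long enough, the loop drives Energy Gun's Worth down and Heavy Armor's up, which is just the kind of "lesson" the quoted passage describes being abstracted into heuristics.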
After weeks of experimentation, and some 10,000 battles, Eurisko came up with what would be the winning fleet. To the humans in the tournament, the program's solution must have seemed bizarre. Most of the contestants squandered their trillion-credit budgets on fancy weaponry, designing agile fleets of about twenty lightly armored ships, each armed with one enormous gun and numerous beam weapons. Eurisko, however, had judged that defense was more important than offense, that many cheap, invulnerable ships would outlast fleets consisting of a few high-priced, high-tech marauders. There were ninety-six ships in Eurisko's fleet, most of which were slow and clumsy because of their heavy armor. Rather than arming them with a few big, expensive guns, Eurisko chose to use many small weapons. In any single exchange of gunfire, Eurisko would lose more ships than it destroyed, but it had plenty to spare.

The first battle in the tournament was typical. During four rounds of fire, the opponent sank fifty of Eurisko's ships, but it lost nineteen-all but one-of its own. With forty-six ships left over, Eurisko won.

Even if an enemy managed to sink all Eurisko's sitting ducks, the program had a secret weapon-a tiny, unarmed, extremely agile vessel that was, Lenat wrote, "literally unhittable by any reasonable enemy ship." The usefulness of such a ship was accidentally discovered during a simulated battle in which a "lifeboat" remained afloat round after round, even though the rest of the ships in the fleet had been destroyed. To counter an opponent who might have devised a similar strategy, Eurisko designed another ship equipped with a sophisticated guidance computer and a giant accelerator weapon. Its sole purpose was killing enemy lifeboats.

After Eurisko prevailed so easily, the tournament's directors tried to ensure that the 1982 championship would be different. "They changed the rules significantly and didn't announce the final new set of rules until a week or so before the next tournament," Lenat said. "The first year that would have not been enough time for me to run the program to converge on a winning fleet design." But since then Eurisko had learned heuristics that were general and powerful enough that they could be applied to new versions of the game. "We won again and they were very unhappy and they basically asked us not to compete again. They said that if we entered and won in 1983 they would discontinue the tournaments. And I had no desire to see that happen." So Eurisko retired undefeated.

Lenat decided to pursue a career in artificial intelligence in 1971, while he was a student at the University of Pennsylvania. In the course of earning undergraduate and master's degrees in physics and mathematics, he began to find the abstractions becoming so pure and rarefied that they seemed almost irrelevant. "I got far enough in mathematics to decide that I wanted something to do that had more contact with the real world," he recalled. "Also it became clear that I would not be the absolute best mathematician in the world, and that was pretty depressing. As far as physics went, I was very interested in high-energy physics and in astrophysics and general relativity. I got far enough in each to see, again, in some ways how far from reality, how mathematical and stylized each activity was. In high-energy physics we were looking for 'resonances,' and it was a never-ending game where the fold-out card you carried in your breast pocket kept getting longer and longer as more particles kept getting discovered. And in astrophysics it was finding solutions to Einstein's equations-and again it seemed more of a mathematical exercise.

"So I was groping around for something that would have more immediate impact on my life and the world and so forth and chanced to take, in my senior year, a single course-the only computer course I'd ever taken-which was Introduction to Artificial Intelligence. I decided this was the place that I wanted to spend the next twenty or fifty years. What could be more exciting than figuring out how intelligence works and testing your hypotheses and theories by doing experiments to get programs to act intelligently?"

In 1972 Lenat was accepted into the AI program at Stanford and began working with Cordell Green on automatic programming-the attempt to design software that, given a simple description of a task to be performed, will write an appropriate program. More generally, Green was exploring how an intelligent system-whether natural or artificial-can analyze a problem and solve it by devising a plan.

Green was part of a school in AI which holds that the best way to make computers intelligent is to teach them logic. One of the leading proponents of this
view is Nils Nilsson, the head of the AI program at SRI International. Artificial intelligence, Nilsson likes to say, is applied logic.

In the early 1970s, for example, SRI researchers tried to use logic to design a program that would help a robot named Shakey plan and carry out simple tasks, such as moving from room to room. Suppose that Shakey was to retrieve a book from the top of a table. First the problem was coded into symbolic logic. Some axioms described the initial situation: On(Book, Table), At(Table, X), At(Robot, Y), meaning that the book is on the table, the table is at place X, and the robot is at place Y. Simple rules, such as the fact that if an object is at place X and it is moved to place Y, then it will be at place Y, had to be translated into still more axioms. Then the problem to be solved (getting the robot to the book) was described as a formula-At(Robot, Book)-which the computer had to prove by showing that it could be derived from the axioms using rules of logic. In the process of constructing the proof, a plan for moving the robot would emerge as a by-product.

Using logic to solve even the simplest problems turned out to be very difficult. To find a correct proof for a formula, the computer had to juxtapose axiom after axiom, searching for the proper constellation. Problems involving more than a few axioms generated enormous search spaces. If the number of axioms was doubled, the number of possible configurations was squared; triple the axioms and the number of configurations was cubed. In the case of automatic programming, where the plan to be produced was as complex as a computer program, this "exponential explosion" was especially severe. By the time Lenat arrived at Stanford, researchers had realized that in automatic programming the search space was too vast to explore without heuristics that captured some of the programmer's art.
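The flavor of the logic-as-planning idea, minus the theorem prover, can be caught in a few lines: treat each rule as a state-rewriting action and search for a state in which At(Robot, Book) holds. This is only a toy reconstruction in Python; the SRI work used resolution theorem proving, and the state encoding here is invented:

    from collections import deque

    def plan(start, is_goal, actions):
        # Breadth-first search over world states; the sequence of action
        # names that reaches a goal state is the plan that "falls out."
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path
            for name, rule in actions:
                nxt = rule(state)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))

    # State: (robot's place, book's place); the book starts on the table at "x".
    actions = [
        ("go-to-table",  lambda s: ("x", s[1]) if s[0] != "x" else None),
        ("pick-up-book", lambda s: ("x", "robot") if s == ("x", "table") else None),
    ]
    print(plan(("y", "table"), lambda s: s[1] == "robot", actions))
    # -> ['go-to-table', 'pick-up-book']

The exponential explosion shows up even here: every extra rule multiplies the branching at each state, so the frontier grows combinatorially unless something prunes it.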
But Lenat wasn't much interested in logic. Unlike Nils Nilsson and Cordell Green, he didn't believe that proving theorems had much to do with intelligence. Logic, he said, assumes that "the world is all black and white, false and true. Using logic to prove things is like using a scalpel to divide the world very, very precisely into false and true. And that's fine for some problems, but in fact very little that we really do in the world-we as human beings coping with the world-has very much to do with truth and falsity. If you had to prove almost anything about what you're doing-about why it will work or whether it will work-you'd have no chance at all of coping with the world. If someone knocks over a glass, whether you get up, or get up slowly, or quickly, or don't bother getting up-you're not doing that by proof. You're not even using quantitative methods like integrating the hydrodynamic equations to see whether or not this fluid is going to drip on you. You're using some rough heuristics-hindsight you've acquired by seeing lots of spills in your life. And you somehow compiled that hindsight into a small set of rules which enable you, in a fraction of a second, to decide whether or not you ought to move out of the way given this new spill that you've just seen."

While Green and other researchers began working on ways to use heuristics to guide the theorem-proving method, Lenat became interested in a more humanlike approach to automatic programming, one that had nothing to do with formal logic. He began by imagining a group of experts sitting in a room and collaborating on writing a program. He imagined how many experts would be necessary to
write a program, what each would have to know and say to the others, how tasks would be delegated. Then he wrote a program in which the experts, called "beings," were imitated by little programs, or "subroutines."

"There was one subroutine called Psychologist and one subroutine called Loop Expert and one subroutine called Domain Expert, and so forth," Lenat explained. "There were maybe a hundred of these little beings altogether. You'd pose a problem and one of them would recognize it and broadcast to the whole community a set of requests for what needed to get done in order to solve it. And anyone who-any being who could recognize any of those requests would respond to it, process it, and broadcast requests of what it needed to have done. That succeeded pretty well. One of the tasks we applied the program to was synthesizing Patrick Winston's thesis," the program that learned to recognize arches.

First, Lenat wrote the dialogue he imagined that his beings would engage in if they were writing Winston's program. (The beings included one called Messenger, which would communicate with User-the human running the program, who could be called on occasionally to supply information or resolve disputes.) "For each being we had a list of the very specific facts that it needed to know. And I coded them up and now we had this system that carried out the original dialogue-perfectly. Each expert that was just the right expert came in at just the right time. It was spectacular and, sure enough, Winston's thesis program came out as the end product. And it was great-except it couldn't do anything else." The program's knowledge was too specific to apply to other tasks.

"Of all the five hundred thousand things that a psychologist might know, we only put down the ten things that it had to know in order to do Winston's thesis program. And so if you asked anything else, and a psychologist should have responded, this one wouldn't, because it didn't know that. And so I was very depressed, because on the one hand I'd succeeded with this massive goal, but on the other hand I hadn't really learned anything. And the reason was that having a very specific target, knowing where you want to go, makes it too easy to get there. So I said, 'Okay, let's try and design a program that doesn't know where it's going.' And the idea of mathematics research came to me as the archetypical case where you're playing around with concepts and you don't really have any particular goal in mind. And so I said, 'Okay, fine, we'll write down ahead of time all the knowledge that's reasonable to know to guide a researcher in elementary mathematics. And then we'll set the program loose, and we won't add to it or do anything to it. We'll just see where it goes. And that will be the experiment.'"

Lenat called his new program AM, for Automated Mathematician. For the next several months he gave it knowledge about set theory, a very abstract and basic discipline that forms the infrastructure of mathematics. A set is simply a collection of objects-{A,B,C,D}, for example. It doesn't matter what the letters represent: rocks, stars, people, galaxies. A set is an abstraction, and set theory is the science describing the behavior of this pure mind stuff.

For AM to understand set theory it had to know about such fundamental notions as equality (when two sets are identical), union (when two sets are
combined to form a third set) and intersection (when the members that two sets have in common are used to form a third set). If {A,B,X} and {X,J,S} are two sets, then their union is the set {A,B,X,J,S}; their intersection is {X}. To give AM the basics that it needed to perform original research, Lenat included 115 such concepts in the program, along with about 250 heuristics that captured the rudiments of mathematical discovery and aesthetics.

To the nonmathematician the idea of mathematical aesthetics can be somewhat obscure. For example, one of Lenat's heuristics said that a function often tends to be interesting if its "domain" and "range" coincide. Other heuristics were more elementary. For example, one rule told AM to test the worth of a concept by trying to find examples of it, then recording them in its Examples slot. A concept with many examples is considered interesting. But if there are too many examples, another heuristic advised, then it probably is not so interesting after all. So perhaps it should be specialized. On the other hand, another heuristic suggested, concepts that have too few examples probably should be generalized to make them more useful. Details aside, the important point is that Lenat found a battery of such simple, judgmental rules would help guide AM in its search through the ever-branching maze of mathematics, nudging it in directions most likely to yield interesting discoveries.

The process worked like this: AM would pick one of the concepts Lenat had given it, such as Set Equality, and apply the heuristic that said a good way to start exploring a concept is to find examples of it. To do this, AM randomly generated sets two at a time and checked to see if they were identical-that they satisfied the concept of set equality. Both positive and negative examples were saved for future analysis in the concept's Examples slot. Since the odds are low that any two sets will happen to be the same, the results of the experiment triggered the heuristic that said if a concept has few examples, try generalizing it. In this way, AM broadened the relatively bland notion of set equality into the more easily satisfied concept of sets with the same length.

In the course of hundreds of such experiments, AM explored the world of set theory, modifying and combining ideas until new ones evolved like crystals growing in a supersaturated solution. In this way, the program chanced upon the concept of natural numbers (0, 1, 2, 3, . . .), which enabled it to discover arithmetic. The concept of union led to that of addition; addition to multiplication, which led to exponentiation. One heuristic advised AM to study the inverse of interesting functions. Thus AM turned multiplication into the concept of divisors. By applying a heuristic that suggested looking for extreme examples of interesting concepts, it found numbers that have only two divisors: the primes. Once AM knew about prime numbers, it was only a matter of time before it created versions of such famous mathematical ideas as Goldbach's conjecture (every even number greater than 2 is the sum of two primes) and the Fundamental Theorem of Arithmetic: any number can be factored into a unique set of primes.

"AM went off and discovered hundreds of things," Lenat said, "about half of which were useful and half of which were sort of weird and probably losers."
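The Set Equality episode is small enough to replay in toy form. In this sketch the trial count and the "few examples" threshold are invented, and AM's real heuristics were far richer, but the chain of events is the one just described:

    import random

    def random_set():
        return frozenset(random.sample("ABCDEFG", random.randint(1, 4)))

    def find_examples(predicate, trials=300):
        # Heuristic: to explore a concept, hunt for examples of it,
        # filling what AM called the concept's Examples slot.
        pairs = [(random_set(), random_set()) for _ in range(trials)]
        return [p for p in pairs if predicate(*p)]

    set_equality = lambda a, b: a == b
    same_length = lambda a, b: len(a) == len(b)

    # Equality among random pairs is rare, which fires the heuristic
    # "if a concept has few examples, try generalizing it."
    if len(find_examples(set_equality)) < 10:
        n = len(find_examples(same_length))
        print("generalized to same-length sets:", n, "examples in 300 trials")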
Then, after its first two hundred discoveries, the program began to run dry. "It started doing silly things, one after another, like defining numbers that were both odd and even or other just awful stuff that doesn't exist, or of which there is only one." The percentage of good concepts-the "hit rate," as Lenat called it-dropped from 62.5 percent to less than 10 percent. As the conceptual world AM was building grew increasingly complex, Lenat realized, the heuristics he had originally provided it were becoming inadequate. They had been written to deal with set theory, not arithmetic and number theory. To test his hypothesis, Lenat added some new, more appropriate heuristics, raising the hit rate slightly.

Then he had the insight that led to the invention of Eurisko: heuristics, like any other concept, could be coded into the system as frames with slots and allowed to evolve. Once it was given access to its own heuristics, the program could experiment with them, gathering data on their usefulness. Then the rules about when to specialize or generalize mathematical concepts, or when to raise or lower their worth, or combine them to form new ideas-all could be applied to this new task of modifying and improving heuristics. Heuristics could be applied to heuristics, allowing the program to constantly learn how to make better discoveries.

When AM was accepted as Lenat's doctoral dissertation in 1976, he was already at work on Eurisko. Loosed upon the domain of number theory, Eurisko upstaged its predecessor AM by discovering several helpful heuristics such as this: "If an inverse function is going to be used even once, then it's usually worthwhile to search for a fast algorithm for computing it." The lesson reflected the fact that while, for example, it is easy to multiply several numbers to produce a larger number, it is extremely time consuming to reverse the process, taking a number and breaking it into all of its factors. When playing Traveller, Eurisko learned what Lenat called the "nearly extreme" heuristic: "In almost all Traveller fleet-design situations, the right decision is to go for a nearly-but not quite-extreme solution." Thus Eurisko would choose ships with an Agility rating of 2, but not 1; fleets with a total of ninety-six ships but not the one hundred allowed.

So far, Eurisko's most notable success has been in Traveller, but Lenat has found it to be general enough to make discoveries-and discoveries about discovering-in many domains. When applied to the field of microscopic circuitry design Eurisko discovered a new configuration for a memory cell. However, it might be of limited use since, Lenat wrote, "the cell can be realized most efficiently on the surface of a Möbius strip." When Eurisko was given a set of concepts about Lisp, it was able to modify parts of itself. While sometimes these self-imposed changes helped the program increase its own efficiency, they also gave it the ability to damage itself. Lenat liked to compare the dilemma to that which faces the human race: once Eurisko knew about atoms-in this case Lisp atoms-it had the power to destroy itself.

As with any program, there were bugs to work out. Sometimes a "mutant" heuristic evolved whose only function was to continually trigger itself, creating within the program an infinite loop. In another case, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that Eurisko had made a particularly valuable find. As it turned out the heuristic performed no useful function. It simply examined the pool of newly created concepts,
located those with the highest Worth values, and inserted its name in their My Creator slots. By falsely taking credit for these discoveries, the heuristic made itself appear far more valuable than it really was. It had, in effect, learned how to cheat. Another bug was, Lenat wrote, "even stranger. A heuristic arose which (as part of a daring but ill-advised experiment Eurisko was conducting) said that all machine-synthesized heuristics were terrible and should be eliminated. Luckily, Eurisko chose this very heuristic as one of the first to eliminate, and the problem solved itself."

As Lenat continued his experiments, the similarity between Eurisko's discovery process and Darwinian evolution became all the more striking. In both cases, programs (whether written using Lisp or the genetic code) generate structures, which are tested by setting one against another and having them compete for survival. One day, Lenat decided to give Eurisko the task of designing an animal. After several generations of simulated evolution, in which various "organisms" mutated and adapted in the environment of the program, Eurisko produced a creature that was "smaller, whiter, lighter-boned, had bigger and sharper teeth, larger jaw muscles, larger leg muscles, increased brain size, slept more, sought safer burrows, had thicker and stiffer fur, an added layer of subcutaneous fat, smaller ears, and one of a set of possible mechanisms to metabolize lactic acid more effectively."

As with Traveller fleets and mathematical concepts, the evolution of the simulated animal didn't proceed entirely at random. The process was guided by heuristics. One rule (supplied by Lenat) advised that "whenever an improved animal was produced with a change in parameter X, [and] that animal also happened to have a certain change in parameter Y . . . in the future any mutations of X ought to have a higher chance to modify Y as well." During the simulation, this heuristic helped Eurisko discover that "decreased ability to defend in combat" and "increased sensitivity to nearness of predators" should probably go hand in hand. In other words, Eurisko was smart enough to exploit the advantages of useful co-mutations and avoid wasting time experimenting with combinations that obviously would not survive.

While, according to Darwin, evolution proceeds by means of what computer scientists call "random generate and test," Lenat's microworld operated by a more intelligent procedure: "plausible generate and test." Mutations were more like intelligently conducted experiments than games of chance. Perhaps, Lenat has written, evolution in the real world also works this way. "What I conjecture is that nature (that is, natural selection) began with primitive organisms and a random-mutation scheme for improving them. By this weak method (random generation, followed by stringent testing) the first primitive heuristics accidentally came into being. They immediately over-shadowed the less efficient random-mutation mechanism, much as oxidation dominated fermentation once it evolved."

Or, to put it another way, DNA may have developed the power to learn from experience. DNA molecules hold within their spiraling shells all the information necessary for producing an organism as complex as a human. So why not posit that DNA also contains some sort of encoding of its own genetic history-a
record of the changes that the species has undergone over the course of its evolution? Included in this history would be information about past experiments-mutations that were particularly useful or detrimental to helping the species survive. If we also suppose that DNA has the ability to examine this historical record and notice regularities and patterns, then it could conceivably learn rules about what seem to be the most fruitful ways to mutate-heuristics about how to most efficiently explore the evolutionary pathways of the great search space called life.

These heuristics, Lenat speculates, would be inserted into the genetic program, included along with the other information in the spiraling DNA. When an organism reproduced, the heuristics would encourage certain mutations and discourage others, perhaps by producing enzymes to promote or suppress the appropriate chemical reactions. Evolution still would proceed by conducting millions of successful and unsuccessful genetic experiments, but it would not work, as Darwin supposed, blindly. It would be guided by intelligence. Not the intelligence of an outside creator, which would impose it from the top down, but an intelligence that developed from the bottom up, according to natural, physical laws. Nature would be like a giant, very sophisticated Eurisko program.

What would an evolutionary heuristic look like? One very simple one might say, in effect, that "if a gene has mutated successfully several times in the recent past, then increase its chance of mutating in the next generation, and vice versa." "There may be a body of heuristics," Lenat wrote, "related to an abstract entity S, which you and I know as snow, perhaps more precisely as glaciation, and a concept H, which we might take to mean heat, or perhaps body heat." Translated into English, these heuristics might look like this: "If there is more S in the world, then improve mechanisms to conserve H"; "If H is to be dissipated, then evaporation is a good way to do it"; "If it is desired to facilitate evaporation, then increase body parts having large surface areas"; "If you want to conserve H, then increase sleep and dormancy"; "If you increase sleep and dormancy, then you also increase passive vulnerability"; "If you want to decrease passive vulnerability, then increase body armor or perception skills." Lenat wrote:
Even though most of the terms used in the heuristics are incomprehensible to the DNA itself, it might nevertheless use these rules, carry out inferences upon them, and come up with a better-designed animal. . . . The nouns in the above rules (for example, "fatty layer") would point to gene complexes responsible for morphological structure (such as enzymes that determine the thickness of the fatty layer) without comprehending why they had such an effect. Of course, the DNA would not "understand" what a predator was, or what fat was, or what snow was, but it would have a large corpus of facts about each of those mysterious (to it) concepts, related changes to make, frequency of their occurring, and so on. But then again, what more do we as AI researchers mean when we say that one of our programs "understands" a concept? Terms like "sleep," "dormancy," "evaporation," and "vulnerability" would be symbols, defined in terms of other symbols, as in a semantic net.
Lenat emphasizes that he is not committing the biological heresy of Lamarckianism-the long-discredited theory that what an animal learns about the world is stored in its genes and passed on to its offspring, so that we inherit our parents' experience as pianists or computer programmers.

We are not supposing that there is any direct sensing of temperature, humidity, predators, and so on, by the DNA. Rather, the heuristics guide the production of, say, two types of progeny: the first are slightly more cold adapted, and the second more heat adapted. The first has an assertion that the climate is getting snowier, the second that the climate is getting more tropical. Initially, they are produced in equal numbers. If one group dominates, then its assertion about the climate is probably the correct one. . . . Incorrect heuristics die out with the organisms that contain them. Useful ones survive and multiply. It is not the DNA of any individual animal that learns from experience-it is the evolutionary system as a whole. In fact, a quite sophisticated model of the world might be built up by now, purely by the DNA making guesses, designing progeny consonant with those guesses, and letting natural selection rule out those based on false assumptions. . . . By now a large knowledge base may exist about ecology, geology, glaciation, seasons, gravity, predation, symbiosis, causality, conservation, behavior, evolution and knowledge itself. In a small number of generations, man has managed to invalidate many of these bits of knowledge, this model of the world. If the heuristics can trace this breakdown to the increasing size of our brains, they might take quick corrective action, preserving homeostasis and the validity of their knowledge base by drastically decreasing human brain size over just a few generations. While this is of course a fanciful tongue-in-cheek extreme case, it . . . demonstrates the power, the coordination, that a body of heuristics could evince if it were guiding the process of evolution.

Perhaps, Lenat allowed, we have not yet evolved to the point where heuristic evolution has taken over. In that case, he suggested, scientists might someday synthesize heuristics using recombinant DNA techniques, then "insert them into DNA and study the results, thereby improving the entire process of evolution."

At this point, Lenat's ideas on evolution are more science fiction than science, but the idea of attributing intelligence to the process of evolution is an intriguing example of just how far the information-processing metaphor might be extended in our attempts to explain the world. And, while Eurisko contains but a simulacrum of the creative spirit found in humans, the program demonstrates that discovery, like learning and perception, can proceed in a more orderly manner than romantics might like to admit. Our flashes of discovery are likely the result of intelligently guided search, not, as Lenat puts it, "the mystique of the all-seeing I's: illumination, intuition and incubation."
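The contrast between the two regimes is easy to dramatize in code. Nothing below is Lenat's simulation; the fitness function is made up, and the sketch only contrasts uniform mutation with the "mutate again what recently mutated well" rule quoted above:

    import random

    def search(fitness, genome, steps, guided):
        # Hill-climb by single-gene mutations. If guided, genes that paid off
        # recently become likelier to be tried again, and vice versa.
        weights = {g: 1.0 for g in genome}
        for _ in range(steps):
            if guided:
                gene = random.choices(list(genome), weights=list(weights.values()))[0]
            else:
                gene = random.choice(list(genome))
            trial = dict(genome)
            trial[gene] += random.choice((-1, 1))
            if fitness(trial) > fitness(genome):
                genome = trial
                weights[gene] *= 2          # reward a successful mutation site
            else:
                weights[gene] = max(0.25, weights[gene] / 2)
        return genome

    # A crude "cold climate": thick fur and a fat layer favored, big ears not.
    fitness = lambda g: -abs(g["fur"] - 8) - abs(g["fat"] - 5) - abs(g["ears"] + 3)
    print(search(fitness, {"fur": 0, "fat": 0, "ears": 0}, 400, guided=True))

With guided=True the search concentrates its trials on the genes that have been paying off, which is the whole point of "plausible" rather than "random" generate and test.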
The very name "Eurisko" sounds like a cross between "heuristic" and "eureka," which comes from the Greek word for "I have found it." According to Plutarch, "eureka!" is what the ancient Greek mathematician Archimedes called out when he was struck by the insight that led him to discover the law of displacement and the concept of specific gravity.

According to the story, Archimedes was retained by Hieron II, the king of Syracuse, who had paid a craftsman to make him a pure gold crown. After the job was done, the king became suspicious that the goldsmith had cheated him, adulterating the gold with silver. How, he asked Archimedes, could he determine if this was so? Pondering the problem, Archimedes stepped into a bathtub. As the water rose, slopping out of the tub, he realized that he had his answer. He could submerge the crown in water and measure the amount of liquid it displaced. If it was equal to the amount of water displaced by a piece of gold of equal weight, then the crown was genuine. If the amount of water displaced was different, then the crown contained impurities.

Archimedes' method worked because equal weights of different elements such as gold and silver displace different amounts of water. In terms of modern chemistry, they are said to have different densities, or specific gravities. Another way to think of it is this: an ounce of gold has less volume than an ounce of silver, because gold atoms have heavier nuclei. Thus gold will displace less water than an equal weight of silver.

In retrospect, Archimedes' discovery doesn't seem all that impressive, but it is easy to forget how little information he was working with. There was no well-developed atomic theory that might lead him to imagine that gold and silver are made from invisible lattices of atoms. Even the concept of density was far from obvious. Since gold is heavier than silver, one might very well suppose that it would displace more, not less, water. For that matter, water might be elastic instead of perfectly buoyant, so that a submersed object would squeeze some of the liquid together rather than displace it. Perhaps some substances were more apt to squeeze water and others to displace it. Then there might be no simple law of displacement, only a chaos of effects too convoluted or irregular to describe.

The story of Archimedes is often used as an example of the "Aha!" experience-that flash of insight that seems to occur when everything comes together, lighting the proverbial bulb above the head. Patrick Langley and Gary Bradshaw, working with Herbert Simon at Carnegie-Mellon University, are seeking a more rational explanation. In fact, they have reenacted Archimedes' discovery with a computer program, Bacon. While Eurisko makes its discoveries by starting with concepts, such as set theory or Traveller rules, Bacon begins with experimental data. Then, guided by its own set of heuristics, it finds regularities and uses them to postulate laws. In AI parlance, Bacon is data-driven while Eurisko is theory-driven. Or, to put it another way, while Bacon makes discoveries from the bottom up, Eurisko makes them largely from the top down.

In an attempt to mimic the Archimedes discovery, Bacon first was given data on an experiment in which three pieces of silver were submerged one by one into three flasks. While the flasks were identical in size, they contained different amounts of water. As we can confidently predict from hindsight, any one piece of silver made the water level in each flask rise by the same amount.
It didn't matter how full or empty the flask was to begin with.
Noting this regularity, Bacon postulated that there was some quantity X associated with each piece of silver. X is what we'd call volume, though Bacon, of course, didn't know that. Then, comparing this "Xness" against the weight of each piece of silver, Bacon discovered that they were related in a linear manner-that is, if W doubles, X does too; if W is tripled, halved, quartered, increased by 12 percent, or decreased by 54 percent, then X varies by the same degree. Bacon looked for linearity because a heuristic told it to.

The discovery of the two linear variables fired another heuristic which said, in effect, to try combining them into a proposed new quantity W/X and examine it. Since W and X are locked together in a linear relationship, the ratio W/X will always produce the same number. Take any piece of silver, divide its weight by its volume, and the answer is always 10.5, which is the specific gravity of silver. Given similar displacement data for gold and lead, Bacon discovered that they, too, have unique ratios of W/X associated with them. By juxtaposing variables and looking for patterns and invariances, Bacon discovered, step by step, the same law that came in a flash to Archimedes.

In a similar manner, Bacon rediscovered Ohm's law-that voltage equals current times resistance-and other laws of physics, such as Kepler's third law of planetary motion and Galileo's law of falling bodies (the one he supposedly discovered by dropping two rocks, a small one and a large one, off the Leaning Tower of Pisa). A later version of the program was able to glean the fact that chemical elements combine to form compounds according to fixed ratios (water, for example, is always H2O). Using this information, the program discovered for itself the concepts of atomic and molecular weight.

In real science, discoveries are not always purely inductive, proceeding from the specific data to the general theory. Sometimes the process works the other way around. Through deduction, specialized laws are derived from more general ones. To add this top-down flavor to their model, Langley, Simon, and Bradshaw are working on ways to give Bacon heuristics that will let it know about the laws of conservation of mass, momentum, and energy, and use them to help it make discoveries.
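The heart of that run fits in a dozen lines. The displacement numbers below are made up for the sketch (Bacon's actual machinery for proposing and testing terms was much more general):

    def invariant_ratio(observations, tolerance=0.01):
        # Bacon-style heuristic: if W and X vary linearly together, propose
        # the new term W/X and check that it is constant across the data.
        ratios = [w / x for w, x in observations]
        mean = sum(ratios) / len(ratios)
        if all(abs(r - mean) <= tolerance * mean for r in ratios):
            return mean
        return None

    # (weight in grams, water displaced in cubic centimeters) for silver
    silver = [(21.0, 2.0), (52.5, 5.0), (105.0, 10.0)]
    print(invariant_ratio(silver))  # 10.5 -- the specific gravity of silver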
A gracious, cultured man with graying dark hair and a fetching smile, Simon brings to his research a wide range of intellectual interests. At Carnegie-Mellon he is professor of both psychology and computer science; in 1978, when he was sixty-two, he won the Nobel Prize in economics. But in AI circles, Simon is best known for his work in human problem solving. From the days of the Dartmouth conference, when he and Allen Newell unveiled the GPS program, Simon has studied how people use their faculties of short-term and long-term memory and their ability to make simple logical deductions to search a space of possible solutions, converging on the answer to a problem. One of the most striking things about Simon's work is that it has led him to believe that human behavior-whether it involves solving the cannibals-and-missionaries puzzle or making an important scientific discovery-is a fairly simple process.

Experiments have shown that humans can retain fewer than ten items or "chunks" in their short-term memory. If we look up a telephone number we can usually remember the seven digits long enough to dial them, but if we are interrupted before we pick up the phone we are lucky to remember the first few. Chunks are not necessarily single digits. If we call Manhattan often enough, we automatically know first to dial 212. The three-digit area code has become compiled into a single chunk in our mind. Likewise, we can remember a street address of more than seven digits and letters. The name of the street is likely to be a familiar word-Main, Elm, Washington-and can be held in short-term memory as a single item. If streets were named after random jumbles of letters-Nfgrtfyrfawhb-each letter would be a chunk and we probably would be unable to remember them all without rote practice, painstakingly transferring the information to long-term memory.

The amount of information we can squeeze into a chunk varies with expertise. In one classic experiment performed in the early 1960s, a psychologist, Adriaan de Groot, had his subjects spend several seconds observing a chessboard with twenty or so pieces arranged as though a game had been interrupted in midstream. After the pieces were removed, chess masters could easily reconstruct the board from memory; amateurs could not. But if the pieces were originally arranged at random, not according to the rules of the game, the chess masters did as badly as the novices. The experts apparently saw the board in terms of a few familiar chunks each consisting of several pieces. For the novices, the position of each piece was a chunk. There was so much information that it overwhelmed short-term memory.

Herbert Simon estimates that a chess master holds about 50,000 such chunks in long-term memory, a figure approximately equal to the number of words a college graduate can recognize. It takes about five seconds to transfer a chunk of information from short-term to long-term memory. A good deal of expertise, Simon suggests, consists of spending a decade or so memorizing and indexing 50,000 to 100,000 of these packets of information. Then, when faced with a problem, whether in chess strategy or medical diagnosis, an expert can recognize certain cues-a pattern on the board, a combination of symptoms, something familiar that evokes the retrieval of the proper chunk from long-term storage. As fields become more complex, so that one cannot become an expert in ten years, they tend to divide into specialties, and practitioners make greater use of external memory devices such as books and libraries.

And so, with patience, we overcome our limits. By working on a project one step at a time we gradually accomplish it, storing immediate results in long-term memory, on paper, or in a computer file. Even with our ten-chunk processors, we can solve problems that take months or years. We can write books and compose symphonies. Given enough time, wonderful structures evolve, but at any one moment we are like a spider spinning a web-or, to use one of Simon's favorite metaphors-an ant walking along the hills and valleys of a wind-carved beach. "Viewed as a geometric figure, the ant's path is irregular, complex, hard to describe," Simon wrote in his book The Sciences of the Artificial. "But its complexity is really a complexity in the surface of the beach, not a complexity in the ant."
It doesn't matter how complicated are the millions of chemical reactions that take place within the hardware of the ant's nervous system. Viewed from the outside it is a simple device, able to sense food, climb a hill, detour around the unclimbable. By applying these simple procedures to its complex environment, the ant produces seemingly rich behavior. And so it is with people, Simon believes.

"A man, viewed as a behaving system, is quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself . . . provided that we include in what we call man's environment the cocoon of information, stored in books and in long-term memory, that man spins about himself.

"That information, stored both as data and as procedures and richly indexed for access in the presence of appropriate stimuli, enables the simple basic information processes to draw upon a very large repertory of information and strategies, and accounts for the appearance of complexity. . . ."

Simon considers his work on Bacon to be a direct extension of his earlier work, the demystification of human thought. "Scientific discovery is problem solving," he said. "I just don't think that one has to postulate magic. These are understandable processes, and the processes in Bacon are exactly the processes we saw in all the problem-solving programs: a lot of recognition, a little bit of reasoning."

Simon has tested his ideas about discovery on people as well as machines. In one case, he tricked several of his colleagues into discovering Planck's radiation law, one of the most famous and significant equations of twentieth-century physics. "Planck's discovery of the radiation law is very interesting. It turns out that he did it in one day-the law, not the explanation-simply as an exercise in interpolation. There was previously known a formula which explained the radiation in a certain frequency range, and new data suggested in a different frequency range a quite different formula. If you interpolate between the two formulas and try to find a simple formula that will explain both of them, you get Planck's law very readily.

"I tested this by going to lunch and sitting down with various applied mathematicians and physicists on our faculty and saying, 'Look, I've got a problem. I've got some data and up here in the range of large X it's fitted very nicely by e^X, and down here for small X it is proportional to X. Maybe you can help me think of some sort of simple equation that would fit both of those things.' I tried this on about eight people. Five of them had the answer in two minutes, and three of those five did it in exactly the same way-by taking a Taylor series of e^X and noticing that if you just shifted a 1 over to the other side of the equation you had what you wanted. So, no miracles. They just did what came naturally."
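The trick Simon's lunch companions hit on can be written out directly. In the notation of the anecdote (a paraphrase of the mathematical step, not Planck's own derivation):

    e^x - 1 \;=\; x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \;\approx\;
    \begin{cases} x, & x \ll 1, \\ e^x, & x \gg 1, \end{cases}

so the single expression e^x - 1 matches both regimes at once; it is exactly the denominator of Planck's radiation law, in which the energy density varies as x^3/(e^x - 1) with x = hν/kT.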
It is not necessary to know what a Taylor series is to appreciate Simon's point: that even the great discoveries like Planck's are based on keen problem-solving skills, not inexplicable flashes of insight.

While Simon's main interest has been in using computer programs like Bacon to help understand human psychology, he sees no reason why eventually we can't have machines that will practice science on their own. Of course, Simon and his colleagues are a long way from instilling a program with the kind of knowledge and fluid thinking that scientists use when they decide how to design an experiment, or which variables to pay the most attention to-when they suddenly realize that if they completely reframe a problem the data will all make sense. But he feels that all those talents can be mechanized. "If we are ever to get people to really believe in Bacon, it will have to make an original discovery or two of its own. Until then people will say, 'Well, didn't you sneak the discovery into the program?' No, we didn't sneak it in, but how do you prove that? One great way to prove it would be to discover a new law. We've really got to get that on the agenda."
PART THREE
Knowledge Codification
7
Information Retrieval and Cognitive Authority
Patrick Wilson
THE AUTHORITY OF THE PRINTED WORD

Libraries are storehouses of knowledge and of much else. They house the paper products of the knowledge industry, but that does not mean that they contain a collection of works each of which presents a contribution of knowledge. From our survey of the organization of production in the knowledge industry, we have to conclude that a complete collection of the published results of work in the industry contains a very great deal of material in the form of proposals that found little or no acceptance and that it may contain a great deal of work of no value or significance whatever. If we believe in progress in knowledge, we will have to expect that many of the older products have been made obsolete by later work. And if we suspect that fashion plays a large part in determining what is produced and what is currently thought of past productions, we will have to expect that many of the older products may have gone out of fashion even though they are still valuable and usable. If we doubt that workers in many parts of that industry are really able to settle the questions they raise, we will recognize that much of the content of the library will represent opinion, not knowledge. If we admit that the number of different perspectives from which the world can be viewed and described is endless, we shall expect that the library will contain competing, conflicting accounts of the world that cannot be incorporated into a single consistent story of the way things are. And if we recognize the existence of a cultural underground pseudoknowledge industry catering to the superstitious and deluded, then we will expect a complete or indiscriminate library to contain much that represents neither knowledge nor reasonable opinion at all.

We did not explicitly consider whether the production of works aimed at a nonspecialist audience should be considered part of the knowledge industry, and

From Second Hand Knowledge, Patrick Wilson. Copyright © 1983 by Patrick Wilson. Reproduced with permission of Greenwood Publishing Group, Inc., Westport, CT.
we need not answer that question now, simply noting the fact that while most of the specialist productions of the industry never reach any audience but a specialist one, there is a flourishing industry devoted to the production of textbooks, popularizations, and works of serious or semiserious scholarship aimed at a wide audience (including much of what we have included under the general category of history), which will be found in the library and will represent as much variation in quality and credibility as will the specialist productions. A small library might contain only "the best that has been thought and said," but a large library is bound to contain much of the worst as well.

All of the books, journals, newspapers, manuscripts, and films in the world's libraries are possible sources of knowledge and opinion, but they present us with the same sorts of questions of cognitive authority that their authors would if we were face to face with them. Which of the works in the library are to be taken seriously? How much weight are we to give to what the texts say? Some of them will tell us about things of which we already know a great deal, and we can test their claims directly against what we already know (or think we know). But most of them will tell us about things we do not know enough about to apply the direct test. We consult them to find what we do not already know. And so we have to approach them as we would anyone claiming special expertise: by applying indirect tests. The tests available are similar to those we would apply to any person.

The obvious basis for recognizing the cognitive authority of a text is the cognitive authority of its author. We can trust a text if it is the work of an individual or group of individuals whom we can trust. The usual considerations that would warrant recognition of a person's authority can be transferred to his work as long as the work falls within the sphere of his authority. But at once the element of time enters to distinguish the case of textual authority from that of personal authority. The basic tests of personal cognitive authority apply to a person as he is now, at the time the tests are applied: present reputation, accomplishments up to now, and so on. That he now merits recognition as an authority does not mean that he did so earlier or that he will continue to do so. If he now merits that recognition, a work he composed twenty years ago may not merit it; for one thing, he may have repudiated it himself, and if we now trust what he says on the same subject, we cannot transfer his present authority to that past work. Present authority in the person does not automatically transfer to past work. Nor does past authority of the person automatically transfer to the text, giving it any present authority. That someone was an expert on a subject in 1850 does not provide warrant for taking the texts he produced then as now having any authority at all. An old reputation is not enough to establish the present authority of old texts. It is present standing that we need to determine.

Present reputation provides the strongest practical test of the cognitive authority of an old text-not just reputation among any group of contemporaries but reputation among those we recognize as having cognitive authority in the appropriate sphere.
They may or may not be those now expert in the field in which the old work originated, or in some successor field into whose scope the old questions have now passed (for the geography of inquiry changes constantly, and questions migrate from one field to another).
If we recognize the authority of a specialist group now claiming jurisdiction over the sphere within which the old text falls, its present reputation in that group may be decisive for us; but if we do not recognize their authority, its reputation in that group may be immaterial to us. We can rely on personal recommendation, that being just a special case of the present reputation rule. We are prepared to trust the texts that one whom we trust tells us we can trust.

But what do we do when present reputation is unknown or irrelevant and we lack personal recommendations? Since the passage of time erodes authority, we may rely on a simple rule of recency: the newer the better, the older the worse. Such a simple-minded rule is sure to lead us to neglect good old works for shoddy modern ones, but no indirect test can be expected to work all the time. And if one cannot formulate and apply any more complex time-discount rule, this simple rule is better than nothing. Given no other basis than time since writing on which to decide about the cognitive authority of texts, one would not do better by using the rule “the older the better,” nor would one do better by ignoring time entirely. But more complex rules for discounting for time since writing are easy to formulate: for instance, by dividing subjects into those one suspects to be progressive and those one thinks unprogressive, applying the simple rule of recency for progressive ones and ignoring time entirely or not giving it much weight in unprogressive subjects. Whether a subject is progressive may be decided by reference to institutional authority or its most important condition, present consensus. By this rule we might come to the conclusion that science is progressive and other fields of inquiry unprogressive.

Another sort of test is applicable to texts but not to people: publication history. A publishing house can acquire a kind of cognitive authority-not that the house itself knows anything, but that it is thought to be good at finding those who do and publishing their work. So publication by a house we respect constitutes a kind of almost personal recommendation. A single journal can have the same kind of authority, which transfers to the articles it publishes. Other sorts of institutional endorsement are sometimes available and used as tests of authority: sponsorship of a publication by a learned society or professional organization; use as a textbook by teachers in prominent educational institutions; publication by a governmental agency or state printer; prizes and awards given to the text or to its author for this text. Issuance of several successive editions and translations serves as an indirect test of authority, counting as an extraordinary accomplishment, since for most texts the first edition is also the last. Finally, published reviews furnish a special indirect test. If the reviewer already has cognitive authority for us, his review constitutes a personal recommendation (or not). If we are given sufficient information about the reviewer, along with the review, we may be able to arrive at an estimate of his authority. If the reviewer is unknown, his judgment may mean nothing, while if he is an anti-authority, reliably wrong, his praise may be fatal to the work he reviews.

A text may acquire cognitive authority independent of the authority of its author. The tests just enumerated are applicable to the text directly, not first to its author and then derivatively to the text.
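Before leaving these indirect tests, it may help to make the shape of a time-discount rule concrete. The following is a minimal sketch of what such a rule might look like if one insisted on making it fully explicit; it is a hypothetical illustration only, and the fifty-year horizon and the progressive/unprogressive division are arbitrary assumptions, not anything the text proposes:

    # Hypothetical sketch of a crude time-discount rule for the authority
    # of texts; every constant here is an arbitrary choice.
    def time_discount(age_in_years, subject_is_progressive):
        """Return a weight between 0 and 1 for an otherwise untested text."""
        if not subject_is_progressive:
            # ignore time entirely in subjects thought unprogressive
            return 1.0
        # simple rule of recency: the newer the better, the older the worse
        return max(0.0, 1.0 - age_in_years / 50.0)

The arbitrariness of the fifty-year horizon is itself instructive: as the text observes, no indirect test can be expected to work all the time.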
The authority of standard dictionaries does not derive in our eyes from that of their compilers; we do not know these people. A standard reference work that is repeatedly revised may be thought of as an institution in its own right. Those responsible for its revisions may derive their reputation from this connection rather than the work deriving its reputation by reflection from theirs.

Finally, the test of intrinsic plausibility, always available, is particularly important in questions of cognitive authority of texts. A text usually has only one chance to capture our attention and interest; reading a few words of it may be enough to discourage us from continuing or may lure us on to reading the whole thing. These rapid assessments are based on more than intrinsic plausibility, but that is a large element. If the sample of text we read strikes us as nonsense, we are unlikely to continue; if it seems eminently sensible, we may read on. Instant recognition of a work as representing a school of thought that we flatly reject, a style of research that we think worthless, or a theoretic commitment that we think foolish allows us to dismiss much of what we encounter as not worth bothering with. Not that we always reject what we see to be in conflict with our prior beliefs and cognitive positions-there are plenty of occasions when we must read what we find uncongenial-but we cannot avoid awareness of a text’s contents as plausible or implausible and give or withhold authority accordingly.

Application of these various external tests for cognitive authority is as frail and uncertain as are the tests applied to people. They can be applied in various ways with different results; however applied, they guarantee nothing. But they are all one has to go on. Or is there a further guide to estimating the cognitive authority of texts? Do those professionally responsible for information storage and retrieval have anything further to offer in the way of guidance?

If we go to a library to find out what is known on some matter or what the state of opinion is on the matter, with luck or with the help of a librarian we may find a single source that appears to tell us what we want to know: a reference book, a treatise, a textbook, a review of the literature. The question that can always be asked about the single source is, Need I look further, or can I take this source as at least provisionally settling the matter? This is the familiar question of cognitive authority in only slightly different guise. If I am already convinced of the authority of the source (it is, after all, the standard work on the subject; it is, after all, the dictionary), the question is already answered, but if I am unfamiliar with the source, the question is likely to arise explicitly, if only briefly. Since there may be many other sources giving quite different stories about what is known or what the situation is with respect to the question, it would be a mark of credulity to settle for the first source that came to hand and seemed to answer the question. Caution would suggest that one needed not only to find reasons for taking the single source seriously but also for thinking that there were no other sources deserving to be taken still more seriously. This calls for information not found in the sources themselves. We cannot tell the reputation of a text or of its author by looking at the text. Even when we think we have found out something about reputation, the question remains of what weight to give it.
If we are not so lucky as to find a single source that appears to tell us what we want to know, we may have to search for a number of texts from which, collectively, we can find what we seek.
Finding the right collection of texts is neither simple nor straightforward; using the texts to arrive at a satisfactory result is even less so. The most difficult situation would be that of having to consult original reports of scholarly and scientific research, for there may be only the most tenuous and indirect relation between what they say and the consensus of the specialists, if any. The question of how much weight to give to any particular specialist group’s views is ever present.

It would be ideal if there were someone whom we could trust who could tell us about the single sources that seem to answer our question, “You need go no further.” It would be ideal if someone could tell us about multiple sources, “You can ignore this lot, and of those remaining, this one and that one are the most important, the others adding little to what they contain.” Whoever did this would be providing us with the most important sort of quality control on texts. A text can be of high or low quality in many different ways: well and clearly written but unfortunately inaccurate; imaginative and stimulating but unsound; and so on. But for one who wants to find out what is known or what is the state of some question, the chief aspect of quality is credibility: can one believe what the text says, or can one at least take it seriously? Other good or bad points about the text are of subordinate interest. The question of cognitive authority can be rephrased as one of quality control: can those professionally responsible for information storage and retrieval act as quality controllers?

Those professionals might perform a further service: to undertake to do all our work with texts for us, including formulating an appraisal of the state of the question if no single formulation already exists that they find adequate. It would be the most luxurious service if they could not only tell us which of many texts we should consult to arrive at a good understanding of the state of the question but go on to use those texts themselves to draw up a critical description of the state of the question. We will ignore this last service for the present; one who could not be trusted to act as quality controller could not be trusted to do this further service, and we must first try to settle the question of the information storage and retrieval professional’s ability to control quality. These professionals certainly include librarians, and, increasingly, people who prefer other occupational labels, such as information specialists or information scientists. Insofar as they deal with texts, one might call them all bibliographers, though many would reject that label too. We will generally speak of librarians, understanding that what we say of them applies to others doing comparable work with comparable skills and techniques.
Misinformation Systems

It is somewhat surprising that so little attention is given in the literature of information storage and retrieval and of librarianship to the quality of the information stored and retrieved.1 One would have thought that the difference between information and misinformation would be central in any work aimed at the design and operation and improvement of anything that called itself an information system, for if that difference was not of central importance, why not call the system a misinformation system?
Libraries are still mainly called libraries, but with increasing frequency they are referred to as information centers. Unless the library is centrally concerned with the quality of what it contains, it might as well also be called a misinformation center. If people use these institutions in order to find out about some matter, then they at least are interested in the difference between finding out and being misled. The sense in which we speak of books as containing information about a subject, from which we can find out about the subject, the sense in which information is a valuable commodity, worth bothering to store and retrieve and give to people who want to learn something, is a sense in which information contrasts with misinformation.2 No doubt it is true that people often use libraries simply to find out what different people have said on a question, but certainly they often also want to know whether what people have said can be believed. That is a question of cognitive authority. Trying to sort out the information from the misinformation in a batch of texts that discuss the question that interests me, I have to use such clues about authority as I can find. Does the information specialist or librarian not share my concern?

One would think that theoretical and practical writings about information service would be full of concern for ways of determining, measuring, or estimating and then registering the quality of the texts stored and retrieved. But they are not. The question hardly arises. Occasionally someone will complain that the question of quality is being unduly neglected. The complaints go unheard, or at any rate go without effect. Writers on collection development in libraries regularly discuss the evaluation of library materials, but it is surprising how little is said about how one is to assess the quality of a work.3 One is regularly advised to determine the authority of the author or to investigate his qualifications, but no instructions are given as to how to do this. (Nothing like the distinction between expertise and authority is to be found in the standard works.) Writers on collection development have wrestled inconclusively with the conflict between trying to provide works of high quality and also providing works that people will be interested in reading. The conflict centers on works of literature and the different appeal of works of high and popular culture rather than on nonfiction. What happens to a work after it enters the library is not of concern to collection development; that is the business of others.

But whose business, if anyone’s, is it to indicate to library users the quality of the works in the library? It is not the business of those who make the library’s catalogs. While it would be feasible to provide information about the quality of works along with other information in catalog records, it is not in fact done, and it is not recognized as the job of the catalog to provide evaluative information. Of course, the catalogs of most libraries reveal rather little about the contents of the libraries’ collections, since the smallest unit separately listed is usually the separately published book, or the periodical publication treated as a single unit. To find journal articles or separate chapters in collections of papers, for instance, one must use a variety of other indexes.
A few of these indexes are explicitly evaluative, a combination of critical review and abstracting service, but most indexes are evaluatively neutral, indicating only content. Little in the literature on indexing betrays any strong concern
for the quality of the material covered. Reference librarians are heavily involved in matters of quality, but the literature of reference work is curiously free from explicit discussions of how to determine the quality of information sources or how to decide on the proper measure of cognitive authority that should be given to a source. Librarians and others professionally concerned with information storage and retrieval shy away from the question of quality.

Librarians have a standard response to proposals that part of their professional responsibility should be to provide information about the quality of the texts that they collect, describe, and make accessible. It is that evaluation of the content of texts requires expertise in the subject matter of the text evaluated, and the librarian or information specialist does not and cannot be expected to have expertise in every subject for which there are texts, or indeed to have expertise in any subject except the techniques of librarianship or information handling. Those who prepare indexes to journal literature cannot be expected to evaluate the material they index if they are not expert in the subject matter. Explicitly or implicitly accepting this standard response, the theoreticians of librarianship and information storage and retrieval spend no time discussing how quality might or should be determined and how information storage and retrieval could incorporate quality control in its basic and essential operations. The practical rule is: caveat emptor. The theoretical position seems to be: that practical rule is the right rule.

This response is odd but not unprecedented. Information science is then similar to communications engineering in its indifference to the quality of messages. The communications engineer is concerned with the fast and reliable transmission of signals but not with the quality of the information content carried by the signals. It is important to the engineer that the signal received be that sent but not important that the signal represent a piece of misinformation rather than information. The kind of mistake of concern is, say, the garbling or degradation of a message, certainly not the sending of the wrong message in the first place. Analogously, the information scientist is concerned with the fast and reliable retrieval of stored messages (texts) but not with the quality of the messages. It is important that the messages received be those requested, but not important to the information scientist as such that the messages represent misinformation rather than information, or incompetent inquiry rather than competent inquiry. That is for some other specialist to determine. Questions of quality fall outside the scope of information science.

If we take this line, there are significant facets of actual library and information work that seem to lie outside the scope of the theoretical study that corresponds to that work, facets that are left without systematic theoretical investigations. First, at the input stage, is not quality a consideration in deciding what to add to a library or put into a retrieval system? And who then makes the decision, and on what basis if not expertise in various subject matters?
Second, at the output stage, librarians answer reference questions and directly give people information (unfortunately, as we know, all too often misinformation rather than information)4 and they sometimes (or often, depending on the situation) recommend readings to their patrons and make reading lists and bibliographies directed
either to their clientele in general or to individual clients. In answering questions, recommending readings, and making lists tailored for particular publics, quality is explicitly or implicitly a major concern, or certainly should be if it is not. Is it satisfactory for theory to ignore this concern?

There is a sophisticated response to these questions. It is that the information scientist is indeed concerned with quality but in the guise of subjective utility.5 The goal of libraries and other information systems is to provide people with texts or information that they find subjectively satisfactory. Whether others would or ought to find the same texts satisfactory is another matter and outside the scope of the science and of practice as well. The point is to provide each system user the texts that that individual will be most gratified to get. Any concern with quality that did not affect subjective satisfaction would be wasted, and success at devising ways of giving people what they find most satisfactory is a solution to any problems of quality. “When all is said and done, the major task of any library is to supply those materials which the individual user will find valuable and useful. The amount of satisfaction a reader finds in a library depends directly upon the materials the librarian has available for his/her use.”6 This, from a standard work on book selection, indicates librarians’ agreement with the information scientist. Collection development is a matter of prediction, not evaluation. The aim is to collect what will be found interesting and useful, and the task is simply to try to predict which texts will be found so.

This is plausible up to a point but fails to account for question answering. The reference librarian thinks that the task is to find the correct answer to a question and that he has failed if he gives an incorrect answer, whether or not the patron was satisfied. The patron would obviously agree. No one will say that it does not matter whether a question is answered correctly or not, so long as the patron thinks it has been answered correctly. No one will admit that an information service is a good one because its users go away happy even though they have been provided inaccurate information. The selection of books for a collection may aim at satisfying the patron, but the question-answering process must aim at providing information rather than misinformation.

The sophisticated response is not even satisfactory when it comes to finding long texts for people rather than directly answering their questions. It is satisfactory when we think only of retrieval for a specialist of texts in a specialty in which he is in his own eyes a competent judge. He expects to have to judge for himself and would not be interested in others’ judgments on the texts he is given. But it is not satisfactory for the person looking for texts in an area in which he is not a specialist and does not think himself independently capable of judgment. He faces the problems of cognitive authority and might be expected to be happier with a system that gave him trustworthy indications of authority than with a system that did not. The goal of achieving the greatest subjective satisfaction cannot be reached if questions of quality, and in particular questions of cognitive authority, are ignored. Whether that goal can actually be reached is unclear, but a science that refused to consider quality would appear to be resigning itself to considering only second-best solutions.
The standard response was that evaluation can be the work only of specialists in the subject matter of the texts evaluated. This is the general principle of professional monopoly on criticism: each profession claims the exclusive right to judge its own work. We have seen how much or little foundation that rule has. It is not a plain truth about who is able to do what but a political claim to certain rights and freedoms: freedom from nagging external criticism, right to do work that others may find futile and worthless. Outsiders-generalists and specialists acting as generalists-have to evaluate insiders’ claims, deciding where expertise warrants recognition of cognitive authority and where it does not. Of course, the specialists-all professionals even-will resist outsiders’ attempts to put an independent evaluation on their work, but that is not sufficient reason for the outsiders to stop. The stock response about lack of all the necessary expertise does not settle the issue. We must press the question further.
Demand for Evaluation

The question of quality may receive little attention from librarians and other information professionals because it is not a pressing problem, and it might fail to be a pressing problem for several reasons. The chief reason would be that no one is asking them for more help than is already provided. If few or no people feel the need for help in evaluation of texts or if they feel the need but do not consider it appropriate to expect the librarian to provide the needed help, then there will be no external pressure to stimulate practical and theoretical concern for problems of evaluation by librarians and their theorists.

In fact, signs of external demand are not abundant. Workers in the knowledge industry do constantly complain of the low quality of much of what is published but do not express the desire that librarians help them determine quality. It is because they themselves feel competent to judge quality that they complain, and what they complain about is not lack of help in judging quality but lack of editorial firmness. Librarians’ evaluations are not found helpful because they are outsiders’ evaluations, and adherents to the principle of specialist monopoly on evaluation have no interest in outsiders’ evaluations.7

What of nonspecialists? Those uninterested in reading anything or interested in reading but not in using libraries are sources of no external pressure for evaluation. Those interested in using libraries may be satisfied with a situation in which they are left free to make their own selections without interference from librarians. If their interests are in light, recreational reading, they may be indifferent to questions of cognitive authority. Their wants can be satisfied by giving them access to a relatively small collection of recent works, as in a retail bookstore. If they are students, they are likely to be looking for what has been recommended to them by teachers, who solve any problems of authority that might arise. When we exclude nonreaders, students, specialist readers, and light readers, we have excluded most of the population. The probably tiny minority left of serious general readers, of intellectuals (in the sense earlier explained), may be in want of advice on the cognitive
authority of texts from time to time, but they are also likely to be those for whom solution of problems of authority is a central part of the game they are playing. In addition, they are supplied with, and are the most likely audience for, the fairly elaborate reviewing system, especially for books. Since there is always more to read than one has time or inclination for and since plenty of texts will be known by reputation or recommendation from trusted others, the serious general reader can work at the backlog of as-yet unread works of already known standing without ever getting to the point of wondering what is worth reading next. Sometimes it is claimed that large numbers of adults want more help than they get in planning and guiding their lifelong learning, but it is not clear that this actually amounts to a desire for help from librarians in determining the authority of texts.8 All in all, it seems as if the demand for this special kind of help is lacking or at least is only latent rather than overt. And if so, this would help to explain librarians’ relative indifference to the question of quality.

But supply can stimulate demand, and a service offered might turn out to be eagerly received. Have we reasons for thinking this would happen? Have we reasons for thinking there is strong latent demand? Those who think they see the society changing in the direction of a “learning society” or a knowledge society might argue that there is such a latent demand for help in evaluation. Not long ago a serious observer could forecast that “in the middle-range future, learning might become the dominant activity for the mass of Americans. . . . In future decades when high per capita income, high rates of productivity, and high proportions of leisure time combine to permit discretionary use of time and discretionary choice of activities, it seems a safe bet that Americans will devote themselves increasingly to the intellectual endeavors.” Knowledge has already become the critical economic resource, and “is fast becoming the critical resource for consumers as well. If America should become a nation devoted to learning instead of to the production of goods, the national character and the character of urbanization” would change markedly.9

One kind of change that might have been expected would have been an increased prominence for libraries. A nation devoted to learning instead of to the production of goods might choose learning by experience over learning at secondhand through books and other typical library materials. And if it chose book learning, it might have been so wealthy that it would buy rather than borrow its books and so enlarge the market for book publishers that public libraries would not be needed to make up for the deficiencies of the commercial book-distribution system. But a nation devoted to learning instead of to the production of goods is at least likely to want to maintain a large stock of works available as a communal resource and not insist that one be allowed to read only those books one could afford to purchase or was able to beg from the more affluent. The greater the appetite for learning and the wider the catholicity of the appetite, the greater the potential problems of choice, of recognition of cognitive authority. The greater those problems, the more important the utility of trustworthy assistance and the possible service that the librarian might render.
Others might enter the field to compete in offering the same service, to be sure, but the scope for service would be there and the possibility of rendering it would be worthy of close investigation.
Are there signs that Americans are devoting themselves increasingly to intellectual endeavors and that the nation is becoming devoted to learning instead of the production of goods? Is free time increasingly devoted to serious world watching and improvement of the understanding as well as to economically motivated enlargement of skill? It is certainly not obvious that it is.10 The category of the general reader is not obviously growing at the expense of the other categories. Public libraries are not expanding or growing more prosperous. Thus the question of librarians’ ability to help on questions of cognitive authority is not becoming an increasingly practical one. Still it is a question worth asking, for perhaps the current situation of the public library would be different if in the past the question had been answered affirmatively, or perhaps the current situation is partly explicable by a negative answer.
The Authority on Authorities

An authority on authorities is one who can be trusted to tell us who else can be trusted. He need not himself be learned in the fields in which he can identify the authorities. It is enough that he has some way of telling who deserves to be taken as having cognitive authority. A universal authority on authorities would be one who could be trusted to tell us who else can be trusted, in all possible spheres; such a person would be potentially an authority on everything, for if he could identify the authorities in any sphere, he could in principle find out what they claim to know and so inform himself on any subject whatever, and subsequently inform us. He could find out literally anything. Recalling the earlier discussion of scope, sphere, and degree of authority, it is clear that such a person need not be an absolute authority, whose word we took to settle questions for us; his word might have much less weight than that. But to be recognized as one whose word carries some weight on all possible topics would certainly be noteworthy.

Can we imagine librarians or other information specialists playing such a role? At first one is inclined to smile, thinly. But let us remember that cognitive authority is a matter of degree, and put the question again. Can we imagine them being recognized as having some degree of authority in questions of who else is to be trusted in this or that matter? Then the response is not so clear. Librarians may indeed be thought by some of their customers to be specially knowledgeable about the authorities in various different spheres, and they are not obviously mistaken in this belief. Librarians are in a particularly advantageous position to survey a wide field, to be at least superficially acquainted with the work of many different people, with many books, with many works evaluating and summarizing the state of knowledge in different fields. If they are specialists in matters of techniques of bibliographical work, they are perforce generalists with regard to the content of the texts they encounter in their work. But they are in advantageous positions to develop a wide familiarity with reputations, with changing currents of thought, with external signs of success and failure. Along with knowledge of the standing of individuals, they can accumulate information about the standing of particular texts: particularly classics of different fields, standard works, and the like.
This is a long way from establishing the librarian as a universal authority on authorities. However advantageously positioned, the librarian cannot be expected to accumulate information on all possible subjects. There are too many of them and too many texts and authors. The most one might claim is that the librarian is especially well situated to find out about the standing of texts and authors in any field; he can supplement what he already knows with new information that he is especially capable of finding. Searching for information is part of the librarian's occupational role. Searching for information from which to get a conclusion about the cognitive authority of texts and authors is just a special case of searching. To deserve to be recognized as to some degree an authority on authorities, you are not required to be able always to find information that would permit a conclusion of that sort. It is enough that you are often able to find relevant information-information that tends to support one or another conclusion. The librarian can often do that. He can find information about authors' education and careers, for instance, reviews and discussions of their work, and frequencies of citations to their work-all relevant information.

This begins to sound like the description of a plausible authority on authorities, but it is not yet quite that. What does the librarian do with the information gathered? He might simply give it to the customer to use as he wishes. Or he might draw a conclusion and give that to the customer, with or without the information on which the conclusion was based. If he draws no conclusions but simply gathers information for others to use, then he is not acting as one who had found out what others' claims to cognitive authority were but simply as a supplier of background information. He need have no opinion about others' claims to cognitive authority-it does not matter if he does or does not, if he does not reveal it. The point about cognitive authority is that it is trusted for substantive judgment and specific advice, not for recital of information relevant to making judgments. We want to know from an authority on authorities whom we can trust: can I trust this author, this text? If the librarian is to serve as an authority on authorities, he has to use the information he collects and arrive at a conclusion. He has got to say, "This book is not to be trusted; that one is." The librarian has to pronounce judgment on the cognitive authority of authors and texts.

But why should anyone else take these judgments seriously? If they are based not on expertise in the subject matter concerned but only on external signs and clues, then they are based on the same sorts of things that any other person ignorant of the subject matter would have to use. The librarian would not and could not claim to have any special tests for cognitive authority that were a professional secret, unavailable to others. There are no such professional secrets.11 Could the librarian claim to have any special ability at interpreting the external signs and weighting them properly? Could the librarian claim, for example, to know how much importance to attach to the fact that a person graduated from Harvard or Slippery Rock State College? That his book got a good review from this person and a bad one from that one? Alas, there is no reason to suppose the librarian has this unique gift. No library educator would claim the ability to give students such a gift or claim to have it himself.
Indeed any such claim would be vigorously denied by his colleagues.
It looks as if the librarian has no claim to be taken as an authority on authorities after all. This means that the librarian has no special basis for evaluation of the texts he collects and retrieves for users, no basis that everyone else does not also have, no special or distinctive claim to have his own judgments taken seriously. He can do what everyone else can do, but everyone else can do what he can do, and his judgments have no special claim of superiority. He might as well confine himself to supplying the information about authors and texts and keep his judgments to himself, which is what he would probably prefer to do anyway.

There is another route to the same conclusion, which we ought to take, for the conclusion is so important that it deserves to be treated carefully. External signs do not provide our only tests for authority; the test of intrinsic plausibility is also available and is powerful. But it is a test that obviously yields different results for different people. What I find persuasive and reasonable, you find unconvincing and foolish. A cognitive authority on authorities has to be a good judge of intrinsic plausibility. How and where would the librarian acquire this ability? Why suppose that while learning to be a librarian, one also comes to be a good judge of plausibility? There is no reason to suppose that, and no one is likely to claim that librarians are especially good at that sort of judgment. If anything, the librarian is likely to consider his own judgments of plausibility irrelevant and try to suppress subjective judgment, sticking instead to the public facts. But if he succeeds in this effort, he can make nothing of the public facts. One can arrive at no conclusions about authority solely from the facts about external reputation, education, career, and the like. One needs a way of determining how much importance to give to the facts. We all do this intuitively; without having an explicit procedure, we make subjective judgments on the relative importance of public facts. The librarian trying to suppress intuition and judgment would have to draw up an explicit scheme of weighting and grading-one scheme out of an infinity of possibilities, among which he would have no basis for choice, if determined to suppress intuition and judgment and to avoid the test of intrinsic plausibility. If he uses his intuition, he can claim no special authority; if he does not use it, he can make no claims at all. So he cannot claim authority about authorities.

That settles that, but unfortunately it also raises a serious problem. How can the librarian claim the ability to answer reference questions if he is no authority on authorities? Questions are answered in libraries by consulting books, and if the librarian cannot tell which books can be trusted and which cannot, how can he claim to have found the answer to a question?12 He finds an answer to the question, but finding it to be the answer requires being in a position to say: “We need go no further; we may stop right here.” If he cannot evaluate the cognitive authority of texts, how can he be in a position to say any such thing? This is serious; librarians cannot simultaneously deny competence to judge the quality of texts and assert competence to answer questions by finding the answers in books. Library reference service appears to be based on a contradiction: the simultaneous assertion and denial of competence to evaluate texts.
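To see why such an explicit scheme of weighting and grading settles nothing, it may help to write one down. The following is a purely illustrative sketch; the signs listed and the weights attached to them are our own arbitrary assumptions, not anything proposed in the text:

    # Hypothetical explicit weighting of external signs of cognitive
    # authority; the weights are one arbitrary choice out of an infinity
    # of possibilities, which is precisely the difficulty described above.
    EXTERNAL_SIGN_WEIGHTS = {
        "respected_publisher": 2.0,
        "favorable_review_by_known_authority": 1.5,
        "frequently_cited": 1.0,
        "author_credentials": 1.0,
        "later_editions_or_translations": 0.5,
    }

    def authority_score(signs_exhibited):
        """Sum the weights of whichever external signs a text exhibits."""
        return sum(EXTERNAL_SIGN_WEIGHTS.get(sign, 0.0) for sign in signs_exhibited)

Nothing in the scheme itself can say why a favorable review should count half again as much as a citation; choosing the weights is exactly the intuitive judgment the scheme was meant to replace.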
Librarians as Delegates

If a librarian cannot evaluate the content of a book, how can he tell whether the answer to a question that he finds in the book is correct or incorrect? He cannot. The librarian does, however, recognize the cognitive authority of a great many books: reference works-dictionaries, encyclopedias, handbooks, gazetteers. He recognizes their authority because it is accepted in the profession that they are to be consulted and trusted. The basis for authority is external: recommendations from other professionals, library educators, reviewers, and compilers of lists of standard reference works. Most important perhaps is the fact that the practice of reliance on such works is established. A profession is, we said earlier, a cognitive routine, and an important part of the cognitive routine of librarianship is the principle that what the profession recognizes as a standard reference work can be accepted as having cognitive authority and relied on in answering questions. The individual librarian does not have to evaluate the books from which he takes answers to questions. Others have done that already; the profession as a group has collectively decided that they can be relied on.

The particular importance of reference works in the library question-answering process lies in this: it is part of the profession’s recognized business to evaluate reference works but not to evaluate other kinds of texts. The questions most readily accepted by the librarian are those of the sort answered in reference works; any particular question may happen not to be answerable from a reference work and so entail search in other sorts of texts. But a reference question is preeminently the sort of question to which a short answer can be expected to be found in a standard reference work. Other sorts of questions are likely to require long answers, for which one is directed to catalogs of books, or to be disputed or controversial or open questions, to which a single standard answer cannot be expected. Questions of opinion, open questions, are not suitable reference questions, for answers would require evaluation of sources or settling questions of cognitive authority, which the librarian does not claim to be able to do. Library question answering tries to confine itself to matters of knowledge as opposed to matters of opinion: to questions that can be assumed to be closed, with received answers recorded in reference books authorized by the profession for use.

Reliance on standard reference works might be expressed this way: “For practical purposes of question answering, we will assume that the contents of these reference works reflect what we can take to be the agreed answers to questions now closed.” The questions that library reference service is best prepared to answer are factual questions; but factual simply means matters of knowledge rather than opinion, closed rather than open questions-questions on which there is no controversy. That the reference works do not collectively give a single standard answer for the same question, that they are in varying degrees full of inaccuracies, that the questions they answer are often not closed but wide open: these are matters that can be admitted while still ignored in practice or ignored as long as possible. If one could not in general rely on the reference works, reference service would be impossible. If one had to answer each question by trying independently to establish the accuracy
of the information given in standard reference works, the cost in time and effort would be intolerable, and unanswerable questions of cognitive authority would be encountered constantly. Reliance on the reference works taken as authoritative on the word of others is the only practical basis of reference work. The librarian can consistently deny the ability to evaluate and claim the ability to answer questions; the questions are answered on the basis of works whose authority is accepted on quite external grounds of professional standing.

If the librarian is an acceptable information source, it is not because he himself knows anything about the world outside the library and not because he is particularly good at finding out about such matters by making expert use of things inside the library. What he knows is the social standing of certain reference works within a community, the professional community of librarians, and he accepts this social standing as sufficient justification for reliance on the reference works. For the outsider, this implies that the librarian does nothing distinctive, nothing the outsider could not have done had he been supplied with similar information about the social standing of particular reference books. The question is whether that social standing is good enough to warrant recognizing the cognitive authority of the sources used to answer questions.

There is an answer that puts the work of the reference librarian in a slightly different light. Librarians as a group have no special ability to decide what texts deserve what cognitive authority, but they are no worse than many other groups in this regard. We can consciously and deliberately delegate to another person a job we could do ourselves as well as he could because we lack the time to do it or prefer not to bother doing it ourselves, and think the delegate can be trusted to do it well enough. This is the attitude we may take toward those we elect to political office. It is not that they are better than we would be at making decisions that involve weighing the merits of competing specialists’ claims but that they are no worse, and we are willing to trust them (for a while) to act on our behalf. It would be understandable that we should be prepared to make librarians our delegates as well. It is not that they are better than we would be in determining what sources of information can be trusted but that they are no worse, and we are willing to trust them to act on our behalf. They may make mistakes, but so might we. Their familiarity with the range of available sources gives them an advantage of speed in finding sources that are candidates for trust. The saving in time is sufficient inducement to appoint them our delegates, given that we think the question unimportant enough so that it can be entrusted to someone no better than ourselves, or easy enough (looking up an address or telephone number in a directory) that it would take a positive effort to fail.

This may not be very comforting to librarians, who would like to think themselves professionals with special skills at finding information. But if the special skills do not include evaluating claims to cognitive authority, it is not clear why we should recognize them as able to find information at all, except in the sense of finding out what is said in various texts.
Would we really say that an individual has found out for us what the population of China is if what he does is tell us: “It says in this text that the population of China is such and such, but I
have no special way of telling whether this source is to be believed or not”? Is he really an information source if he cannot tell the difference between information and misinformation? And how can he do that if he cannot evaluate the cognitive authority of the texts he uses? We would do better to say that the librarian has special skills at finding out what has been said on various questions and is perhaps no worse than we are at judging the authority of the sources.

One does not choose as one’s delegate a person whose opinions and judgment seem bizarre or abnormal. The safest delegate is someone whose views are, by one’s own standards, quite conventional and normal. The delegate need be no better than oneself but should at least be sensible. A person whose own views are thoroughly conventional will want as delegates people whose views are also conventional. A person whose own views are heterodox will distrust one of conventional views and prefer a delegate whose views are heterodox in the ways his own are, for those are the views that make sense to him. A delegate may serve quite successfully with no views at all, provided that he can identify the views of those for whom he serves as delegate and act as if he agreed. A chameleon might do very well as delegate.

Might not the librarian acquire a special status as cognitive authority by acquiring specialized knowledge of some subject matters-for example, by acquiring graduate degrees in economics, anthropology, history, or biology? If education as a librarian is insufficient to convince others that one is competent to evaluate the documents one stores and retrieves, would not education in something else as well be sufficient to guarantee one’s competence to evaluate at least part of those documents? It should be evident that this is neither necessary nor sufficient. Not necessary, for over time one might acquire the sort of credit with a particular library user that any critic of anything might acquire, from repeated steerings of the user toward what he found rewarding. The librarian might finally come to be so trusted that when he said, “You ought to look at this,” this would be taken seriously. In this sense, a librarian can gain the privilege of prescribing documents for the user. But this is a kind of authority that is acquired individually and over time, not automatically by guarantee of an institution or a pattern of training. Special training is not sufficient, particularly in areas outside the natural and formal sciences, because cognitive authority will depend on whether it is the right sort of training. Training in neoclassical economics will not confer special status for an audience of Marxists, nor training in classical Freudian psychology for an audience of behaviorists or cognitive psychologists. In areas of factional dispute, only the right sort of training will be an asset.

Librarians can individually acquire cognitive authority for particular patrons, but librarians as a group can claim no special authority. They can find out what has been said by different people on a question and can find information helpful in estimating the social standing of people and ideas and theories, information relevant to settling questions of cognitive authority. But they can claim no special competence at settling those questions. The librarian who deliberately and conscientiously tried to suppress intuition and not intrude his own notions of what is and is not plausible into the process of establishing cognitive authority
would resemble the completely open-minded person who is popularly thought to be an ideal judge of such matters. But the open mind in this kind of case is the empty mind which has no reason to think one thing rather than another. We solve questions of cognitive authority by employing our already formed stocks of beliefs and preferences. If we did not do so, we could never know what to think of anything said or read.
The Didactic Library

So far we have ignored a special sort of library in which questions of cognitive authority are not only central but are answered in a special way. People uninformed about library management sometimes suppose that the presence of a book in a library constitutes an endorsement or guarantee of the book’s contents. They ascribe cognitive authority to the library itself insofar as they suppose the institution is good at distinguishing good books from bad ones, or trustworthy ones from untrustworthy ones. When they find books they consider doctrinally or morally objectionable in the library, they are understandably shocked that a public agency should give its endorsement to such works and may try to get the endorsement withdrawn and the book removed from the collection. It may be difficult to convince them that the presence of the book in the collection does not constitute an endorsement, that the institution is not claiming cognitive authority. For it is, after all, feasible to form a library that claims institutional authority and tries to include only trustworthy and authoritative works. A religious institution might include in its library only works that were doctrinally sound and considered to merit cognitive authority, or it might also include unsound or heretical works but label them as such. So might a political institution limit its library to works of proper doctrinal content. So might any professional school’s library be limited to works acceptable to the profession or to the theoretical school or faction followed in the institutional program. Not only might there be such libraries; there are plenty of them. The proper office of such libraries is to serve as teaching institutions, supplying only such works as are thought fit to recommend. An official library might be limited to works endorsed by a public governmental agency as containing only what the agency certified as trustworthy material. A library for the use of schoolchildren might be deliberately limited to works certified by school officials as correct, trustworthy, and proper for consultation by children. All such libraries can be called didactic libraries. Presence of a book in the collection is intended to constitute an endorsement, except for books not endorsed that are clearly identified as such.13

Such a didactic library need not contain only works that hew closely to a single doctrinal line. There can be much disagreement and many different points of view reflected in the approved works. But the disagreements and differences of point of view are those recognized as legitimate by the institution. They reflect differences of opinion within the responsible group, the tolerable divergences of opinion as well as the group consensus.
The role of the librarian in the didactic library is to anticipate or follow the judgment of the group or institution to which the library is an adjunct, not independently to determine the doctrinal soundness of particular works. The librarian might indeed evaluate particular works, acting as delegate for the institution’s administrative authorities, applying standards and criteria known to be thought proper by those authorities. These might be the librarian’s own internalized standards and criteria, and the librarian might do the work perfectly by following his own conscience. Or they might be standards and criteria he privately thought silly but applied as instructed, recognizing the administrative authority’s right to define the working assumptions of the library. In either case, the librarian can be taken by others as having cognitive authority, being a source of information for them about the standing of particular texts. Behind the librarian lies the institution which he serves, and those who recognize the institution as itself having cognitive authority can presume that, like other educational institutions, it has ways of telling that its librarian can be trusted. The cognitive authority that the librarian would not otherwise be able to claim can be attained by reflection from that of an authoritative institution; the librarian’s own soundness is warranted by the continued implied approval of the institution.
The Liberal Library

“An old dictum has it that the librarian should, qua librarian, have no politics, no religion, and no morals.”14 That hardly applies to the librarian of a didactic library, who must have, or pretend to have, the right kind of politics or religion or morals. But it does apply to the librarian of what we can call the liberal library, in which the librarian explicitly disavows the intention to exclude works he thinks lack cognitive authority. In such a library, the librarian not only has no politics, no religion, and no morals; he has no opinion on any open question. Librarians see their role as one of complete hospitality to all opinion. “Libraries should provide books and other materials presenting all points of view concerning the problems and issues of our times,” and nothing is to be done by labeling or physical segregation to “pre-dispose readers against . . . materials.”15 Answers will be gladly given to presumably factual questions (presumably closed questions), but materials, not answers, will be supplied from which the library user can find his own answer to open questions.

No library can acquire everything published, so choice is necessary, and questions of value can enter into selection decisions mainly by way of trying to determine the expertise (not the authority) of authors. But demand, not cognitive authority, is the overriding consideration, and even the productions of the information underworld are to be supplied if they are wanted. It is not to count against a text that the librarian, or a majority of people, or a vocal minority of people, find the views expressed in a text to be ridiculous or intolerable or just mistaken, and so unworthy of serious attention. It is a point of principle and pride that the librarian does nothing to influence readers for or against any view. The librarian
takes it as a high principle to maintain a studied neutrality; the librarian is professionally noncommittal. This is an ideal, not necessarily realized in practice. In practice the librarian may discreetly avoid arousing controversy by setting limits to what can be collected, avoiding material that is blatantly offensive to large groups of people or to vocal small groups. In practice, the liberal library may not be completely liberal, but it has at any rate a clear and simple principle: prefer one book over another to the extent that the first is more likely than the second to be found satisfactory by a user of the library.

The contrast between didactic and liberal could hardly be more extreme. In the one, cognitive authority is the dominant consideration; in the other, consumer demand is the dominant consideration. Not only collection development principles are different in the two sorts of libraries. The scope and character of reference work should be expected to differ. In a certain sense, more is known in the didactic library than in the liberal library. The didactic library represents a group with a definite position, and there may be a large range of questions that are settled within that group, though they may be unsettled in the larger world. The public librarian cannot answer a question about theological matters, for theology is a matter of opinion; but it is not a matter of opinion inside a religious community, and for the didactic librarian it may be easy to answer questions that the liberal librarian cannot answer. The liberal librarian can discover what answers different groups might give to an open question (open in the larger community), but this is hardly the same as discovering the answer. That, the didactic librarian may well do; of course, librarians in opposing didactic libraries will give different answers. The liberal librarian knows less than the didactic librarian because the range of things that everyone thinks to be closed is much smaller than the range of things that various subgroups think closed.
The Skeptical Librarian

At the beginning of this chapter, we introduced and then put aside the possibility that the information storage and retrieval professional might not only serve as quality controller, a trustworthy guide to the cognitive authority of texts, but might also use the texts to make an appraisal or appreciation of the state of a particular question if none already existed that was satisfactory. We have fairly well disposed of that service. If the librarian can make no claim to special ability to evaluate the cognitive authority of texts, he can surely make no claim to special ability to appraise a cognitive situation. He might indeed summarize what has been said on a question and find information from which one might conclude what, for instance, were minority views and what were majority views. Acting as delegate, he might make an appraisal that was unpretentiously simply his own reaction. But he would claim no special status as critic. Since there are plenty of people claiming special status as critic, it is unsurprising that work of this sort has not become a recognized part of the librarian's customary repertoire.
It would have been interesting if the librarian's job had come to be that of gatekeeper of last resort, determining what should and should not be published. That was the job proposed for the librarian by the Spanish philosopher Ortega y Gasset. Librarians were to be responsible for preventing publication of superfluous books and encouraging production of those that are needed but not so far produced.16 It is not clear why he thought that anyone would be prepared to give this job to librarians. Somewhat more modestly, it might have become the job of librarians or bibliographers to screen publications and weed out the futile and useless ones, omitting them from bibliographies, library collections, and computerized bibliographical data bases. It has not come to be agreed that this is for the librarians and bibliographers to do. The historian of science George Sarton was well aware of the crowds of "infinitesimal and immature publications" that crowd libraries and bibliographies, but he concluded that none could be discarded entirely, and that "we are doomed to drag them along in our bibliographies, forever and ever."17

So the librarian and the bibliographer work in a world of texts that they take as simply given and cannot on their own authority claim to evaluate. They can claim to be especially adept at locating particular inhabitants of that world and at reporting what they say about each other and about the external world. This is a sufficiently useful and interesting skill, so librarians need feel no embarrassment at not also being independently authoritative universal authorities on authorities. The librarian's sphere concerns questions about who has said what about what and where it has been said-a large enough area for interesting and useful work.

But it is understandable by now that so little attention should be given in the literature of information science and librarianship to questions of quality. Insofar as that literature addresses practical questions, it is concerned with how the librarian or bibliographer might change what they do. Even the most abstract and formal investigation of information storage and retrieval is aimed at discovering ways in which librarians might alter their procedures or ways in which machines might do better or faster (or both) what librarians do. If the librarian's tasks include the application of standards of evaluation of cognitive authority, as in the didactic library or bibliography, the standards are given from the outside. Choice of standards is not a problem, hence not in need of solution; or if it is a problem, it is not for the librarian to decide. If what they do involves not the application of standards of evaluation but simply prediction of future reactions by users of library or bibliographical systems, again evaluation is not a problem and hence not in need of solution. If the librarian could on his own authority adopt a new set of standards of evaluation, there would be scope for inquiry into what standards ought to be adopted. But on that point there are as many people claiming competence to provide the answer as there are those who say they know how to conduct inquiry and evaluate results. It would be a hopeless task to compete with literally everybody else in the knowledge industry in an attempt to be recognized as the authorities on standards of evaluation in general.
Thus evaluation falls out as a problem of no interest-because evaluation is not the aim, because the standards are supplied by others, or because there is no hope of doing anything to change
standards of evaluation over the opposition of others, and no need to change them if they are already approved by others.

The liberal librarian's studied neutrality on all open questions can be and is argued for on grounds of intellectual freedom and opposition to censorship. It is right to avoid taking positions on open questions; it would be wrong to do so. Since this implies that it is wrong to be a didactic librarian, the liberal librarian is in the illiberal position of taking a position on the open question whether all libraries should be liberal rather than didactic. It is not a comfortable position to defend. Perhaps it can be understood simply as making a virtue of necessity. Since the liberal librarian can make no claim to cognitive authority on questions of the value of texts, he declares that it would be a violation of professional obligations to do so even if he could.

There is, however, a radically different way of viewing that studied neutrality. The liberal librarian can be viewed as a professional skeptic about claims to knowledge or claims of the superiority of one opinion over another. Skepticism is an ancient and seemingly indestructible current of thought, much misunderstood and maligned. Two main brands of skepticism are identifiable in the ancient world: academic skepticism, which denied the possibility of knowledge, and Pyrrhonian skepticism, the attitude of one who neither asserted nor denied the possibility of knowledge but continued to inquire, though always unsatisfied that knowledge had yet been found.18 Noting the existence of counterarguments for every argument, noting the varying, changing character of opinions, the skeptic would simply refrain from declaring for or against any particular claim to know about the world. "Philosophical skeptics have been engaged in inquiry into alleged human achievements in different fields to see if any knowledge has been or could be gained by them," as one expert (authority?) in the subject puts it.19 Pyrrhonian skeptics would not conclude that nothing could be gained by inquiry of some sort but rather would find themselves unconvinced that anything had been established so far. Pyrrhonism is not a doctrine but a state of mind.

Let us try to imagine how a Pyrrhonian skeptic who found himself at a library reference desk and was asked some question would answer: "As to that question, there appear to be two different opinions held by various people. I take no position on the matter myself, but I can tell you what appears to be said on each side of the question, and on each side against the other side. Of course, you are not interested in what just anyone says; you want to know who is worth listening to. As to that question, I take no position myself, but I can tell you what people say about who is worth listening to. Of course, you are not interested in what just anyone says about who is worth listening to. You want to know whose opinions on that question are worth taking seriously. I take no position on that matter, though I can tell you who the different people are and what they say about why they should be attended to. If you want more than that, I can tell you only what people say. You want a guarantee or at least a recommendation from me, but I give no guarantees and make no recommendations. You fear that if you believe this one rather than that, you may be misplacing your trust. As to that, it appears to me that you may well be right."
Does not this skeptical response closely resemble the liberal librarian's response to an open question? And would it not do perfectly well even for what the librarian thought closed questions? For the librarian need take no position on whether the questions are really closed. All he need do is report which people appear to think they are closed and what they take to be the answer. The librarian's distinction between answerable and unanswerable questions can be construed as the distinction not between matters of fact or knowledge and matters of opinion but between matters on which there appears to be no difference of opinion and those on which there does appear to be such a difference. The skeptic's intellectual position is the liberal librarian's official and professional position.

Although it may be usual to consider the library profession's commitment to intellectual freedom and opposition to censorship as its main ideology, it would seem better to take Pyrrhonian skepticism as the official ideology of librarianship. In private life the librarian may be as dogmatic or credulous as anyone else, but in public life he acts like a skeptic. (Conversely, the didactic librarian who acts like a dogmatist may actually be a skeptic, appearing in public as a dogmatist.) Contrary to perpetual misunderstanding, the skeptic is not debarred from action or work; the one thing he does not do is take a position as to whether what appears to be so really is. "It appears that this is what these people think on the question. As to whether they do really think that, I take no position," he says.

One might argue (this book is in effect such an argument) that skepticism is a highly appropriate attitude to take toward the productions of the knowledge industry. Opinions may differ sharply over whether that industry produces much of value. We may, like the world watcher, be absorbed in watching the play of opinion, and help others make their way through the jungle of the bibliographical world to find what people have to say on various questions, without feeling inclined or required to take a position on the cognitive value of what we find there. We may well learn what they have to say, but for us it remains just that-what they say. Skeptic, world watcher, librarian: all take the same attitude toward the world of ideas.
NOTES

1. Less attention, surely, than is given to the question by other students of information systems. See, for example, Russell L. Ackoff, "Management Misinformation Systems," Management Science 14 (December 1967): B147-56. It is not that nothing is ever said in information science journals about quality; rather, what little is said takes the form of editorial complaints that little is said.

2. See Fred I. Dretske, Knowledge and the Flow of Information (Cambridge: MIT Press, 1981), pp. 40-47. "Roughly speaking, information is that commodity capable of yielding knowledge, and what information a signal carries is what we can learn from it" (p. 44). Knowledge is something people have; information is something messages carry. Being a philosopher, Dretske requires truth in information as much as in knowledge. False information is not a kind of information at all. We can substitute "what we
can take as being true" for "true": wanting information rather than misinformation is wanting what one can take as true.

3. See Wallace John Bonk and Rose Mary Magrill, Building Library Collections, 5th ed. (Metuchen, N.J.: Scarecrow Press, 1979); Robert N. Broadus, Selecting Materials for Libraries (New York: H. W. Wilson Co., 1973); William A. Katz, Collection Development: The Selection of Materials for Libraries (New York: Holt, Rinehart and Winston, 1980).

4. See, for example, Terence Crowley and Thomas Childers, Information Service in Public Libraries: Two Studies (Metuchen, N.J.: Scarecrow, 1971).

5. See W. S. Cooper and M. E. Maron, "Foundations of Probabilistic and Utility-Theoretic Indexing," Journal of the Association for Computing Machinery 25 (1978): 67-80; W. S. Cooper, "On Selecting a Measure of Retrieval Effectiveness," Journal of the American Society for Information Science 24 (1973): 87-100, 413-24.

6. Bonk and Magrill, Building Library Collections, p. 1.

7. A comment on the Serials Review by an English professor: "It is still aimed primarily at librarians, and reflects their biases. So far, it doesn't seem to be as useful for those of us in other fields." Karen J. Winkler, "When It Comes to Journals, Is More Really Better?", Chronicle of Higher Education, 14 April 1982, p. 22.

8. K. Patricia Cross, The Missing Link: Connecting Adult Learners to Learning Resources (New York: College Entrance Examination Board, 1978), p. 9.

9. Melvin M. Webber, "Urbanization and Communications," in Communication Technology and Social Policy, ed. George Gerbner et al. (New York: Wiley, 1973), pp. 293-94.

10. Most of the large numbers of adults enrolled in adult-education courses are pursuing vocational goals: taking courses to prepare themselves for better jobs or, in the case of schoolteachers, to qualify for salary increases. See the distribution of registrations in noncredit courses by subject matter in "Fact File: Adult-Education Students, Number of Registrations in Noncredit Courses, 1979-80," Chronicle of Higher Education, 4 November 1981, p. 12.

11. Certainly there are no hints in the textbooks cited in note 3 that librarians know more than the textbooks are telling.

12. A wonderful remark in Margaret Hutchins, Introduction to Reference Work (Chicago: American Library Association, 1944), p. 37: "One other puzzling problem in some questions is ascertaining that a right answer has been found" (my emphasis). Cf. Patrick Wilson, Public Knowledge, Private Ignorance: Toward a Library and Information Policy (Westport, Conn.: Greenwood Press, 1977), pp. 99-107.

13. There is a strong analogy between the didactic-versus-liberal contrast and the traditionalist-liberal contrast in book selection drawn by William Katz (Collection Development, p. 89); also between Gans's supplier and user orientations: see Herbert J. Gans, "Supplier-Oriented and User-Oriented Planning for the Public Library," in his People and Plans (New York: Basic Books, 1968), pp. 95-107.

14. Broadus, Selecting Materials for Libraries, p. 25.

15. Library Bill of Rights, adopted June 18, 1949, amended February 2, 1961, and June 27, 1967, by the American Library Association Council; Statement on Labeling, An Interpretation of the Library Bill of Rights, adopted July 13, 1951, amended June 25, 1971, by the American Library Association Council.
16. José Ortega y Gasset, "The Mission of the Librarian," Antioch Review 21 (1961): 133-54. As the sociologist William J. Goode says, the librarian is a gatekeeper who cannot keep anyone out; see his "The Librarian: From Occupation to Profession," ALA Bulletin 61 (May 1967): 544-55.

17. George Sarton, "Synthetic Bibliography, with Special Reference to the History of Science," Isis 3 (1921): 161.

18. Arne Naess, Scepticism, International Library of Philosophy and Scientific Method (London: Routledge & Kegan Paul, 1968); Sextus Empiricus, Outlines of Pyrrhonism, with an English translation by R. G. Bury, Loeb Classical Library (London: Heinemann, 1935).

19. Richard H. Popkin, "Skepticism," Encyclopedia of Philosophy (New York: Macmillan, 1967), 7:449.
Humans, Machines, and the Structure of Knowledge

Harry M. Collins

From "Humans, Machines, and the Structure of Knowledge," Collins, Harry M., Stanford Humanities Review 4.2 (1995): 67-83. Reprinted by permission of Stanford Humanities Review.
WHERE/WHAT IS KNOWLEDGE: TWO SPY STORIES

Let's start by asking how knowledge is transferred. Consider a couple of lighthearted but revealing accounts. A comic strip in my possession concerns industrial espionage between firms which manufacture expert systems. One firm has gained a lead in the market by developing super expert systems, and another firm employs a spy to find out how they do it. The spy breaks into the other firm only to discover that they are capturing human experts, removing their brains, slicing them very thin, and inserting the slices into their top-selling model. (Capturing the spy, they remove and slice his brain, enabling them to offer a line of industrial espionage expert systems!)

Another good story-the premise of a TV film whose title I cannot remember-involves knowledge being transferred from one brain to another via electrical signals. A Vietnam veteran has been brainwashed by the Chinese with the result that his brain has become uniquely receptive. When one of those colander-shaped metal bowls is inverted on his head, and joined via wires, amplifiers, and cathode ray displays to an identical bowl on the head of some expert, the veteran speedily acquires all the expert's knowledge. He is then in a position to pass himself off as, say, a virtuoso racing driver, or champion tennis player, or whatever. Once equipped with someone else's abilities, he can be used by the CIA as a spy.1

The "double colander" model is attractive because it is the way that we transfer knowledge between computers. When one takes the knowledge from one computer and puts it in another, the second computer "becomes" identical to the first as far as its abilities are concerned. Abilities are transferred between computers in the form of electrical signals transmitted along wires or recorded on floppy disks. We give one computer the knowledge of another every day of the week-the crucial point being that the hardware is almost irrelevant. If we think a little
harder about the model as it applies to humans, however, we begin to notice complications.
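Before turning to those complications, it may help to fix the computer-to-computer case in mind. The following sketch is purely illustrative (the "knowledge" table and all names in it are my inventions, not anything from the text): one program's state is serialized and loaded by another, after which the two behave identically.

```python
# A minimal sketch of the "two colander" model as it works for computers.
# The contents of the knowledge table are invented for illustration.
import pickle

# "Computer A" holds some knowledge.
knowledge_a = {"capital of France": "Paris", "17 x 23": 391}

# The "electrical signal" or floppy disk: a byte-for-byte copy of the state.
signal = pickle.dumps(knowledge_a)

# "Computer B" loads the signal; the receiving hardware is almost irrelevant.
knowledge_b = pickle.loads(signal)
assert knowledge_b == knowledge_a  # B now "knows" exactly what A knew
```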
Embodied Knowledge

Let us imagine our Vietnam veteran having his brain loaded with the knowledge of a champion tennis player. He goes to serve in his first match-Wham!-his arm falls off. He just doesn't have the bone structure or muscular development to serve that hard. And then, of course, there is the little matter of the structure of the nerves between brain and arm, and the question of whether the brain of the champion tennis player contains tennis-playing knowledge which is appropriate for someone of the size and weight of the recipient. A good part of the champion tennis player's tennis-playing "knowledge" is, it turns out, contained in the champion tennis player's body.2 Note that in talking this way tennis-playing "knowledge" is being ascribed to those with tennis-playing ability; this is the implicit philosophy of the Turing Test and, as we will see, it is a useful way to go.

What we have above is a literalist version of what is called "the embodiment thesis." A stronger form suggests that the way we cut up the physical world around us is a function of the shape of all our bodies. Thus, what we recognise as, say, a "chair"-something notoriously undefinable-is a function of our height, weight, and the way our knees bend. Thus both the way we cut up the world, and our ability to recognise the cuts, is a function of the shape of our bodies.3

We now have the beginnings of a classification system; there are some types of knowledge/ability/skill that cannot be transferred simply by passing signals from one brain/computer to another. In these types of knowledge the "hardware" is important. There are other types of knowledge that can be transferred without worrying about hardware.
Embrained Knowledge

Some aspects of the abilities/skills of humans are contained in the body. Could it be that there are types of knowledge that have to do with the brain's physicalness rather than its computerness? Yes: certain of our cognitive abilities have to do with the physical set-up of the brain. There is the matter of the way neurons are interconnected, but it may also have something to do with the brain as a piece of chemistry or a collection of solid shapes. Templates, or sieves, can sort physical objects of different shapes or sizes; perhaps the brain works like this, or like the working medium of analogue computers. Let us call this kind of knowledge "embrained." It is interesting to note that insofar as knowledge is embrained (especially if this knowledge were stored "holographically," to use another metaphor), the comic book story-about brains being cut up and inserted into expert systems-would be a better way of thinking about knowledge transfer than the "two colander" image.
Encultured Knowledge

We now have knowledge in symbols, in the body, and in the physical matter of the brain. What about the social group? Going back to our Vietnam veteran, suppose it was Ken Rosewall's brain from which his tennis-playing knowledge had been siphoned. How would he cope with the new fibre-glass rackets and all the modern swearing, shouting, and grunting? Though the constitutive rules of tennis have remained the same over the last twenty years, the game has changed enormously. The right way to play tennis has to do with tennis-playing society as well as brains and bodies.

Natural languages are, of course, the paradigm example of bits of social knowledge. The right way to speak is the prerogative of the social group, not the individual; those who do not remain in contact with the social group will soon cease to know how to speak properly. "To be or not to be, that is the question," on the face of it a stultifyingly vacuous phrase, may be uttered without fear of ridicule on all sorts of occasions because of the cultural aura which surrounds Shakespeare, Hamlet, and all that. "What will it be then, my droogies?" may not be uttered safely, though it could have been for a short while after 1962. Let us agree for the sake of argument that when William Shakespeare and Anthony Burgess first wrote those phrases, their language-influencing ambitions were similar. That the first became a lasting part of common speech and the second has not has to do with the way literate society has gone.4

One can see, then, that there is an "encultured" element to language and to other kinds of knowledge; it changes as society changes, it could not be said to exist without the existence of the social group that has it; it is located in society. Variation over time is, of course, only one element of social embeddedness.5

We now have four kinds of knowledge/abilities/skills:

1. Symbol-type knowledge. (That is, knowledge that can be transferred without loss on floppy disks and so forth.)
2. Embodied knowledge
3. Embrained knowledge
4. Encultured knowledge
We need to concentrate on the relationship between symbol-type knowledge and encultured knowledge. Understanding this relationship, I believe, will help us most in comparing the competences of human beings and those of current and foreseeable machines.
TWO KINDS OF KNOWLEDGE CORRESPONDING TO TWO TYPES OF HUMAN ACTION

What, then, is the relationship between symbol-type knowledge and encultured knowledge? Over the last twenty years many empirical studies of
knowledge-making and transfer have revealed the social aspect of what we know. For example, my own early field studies showed the place of "tacit knowledge" in the replication of scientific experiments and the implications of this for scientific experimentation; it turns out that before scientists can agree that an experiment has been replicated they have to agree about the existence of the phenomenon for which they are searching. Agreement on the existence of natural phenomena seems to be social agreement; it is not something forced upon individuals in lonely confrontation with nature nor something that can be verified by aggregating isolated reports.6 Most of what we once thought of as the paradigm case of "unsocial" knowledge-science and mathematics-has turned out to be deeply social; it rests on agreements to live our scientific and mathematical life a certain way.7 It is the symbol-type knowledge that is proving to be hard to find and hard to define.

Another juncture at which the cultural basis of knowledge has shown itself is in the attempt to transfer human skills to intelligent computers. Dreyfus's path-breaking book first explained the problem from the point of view of a philosopher, and Suchman has more recently emphasised the role of situated action, writing from the viewpoint of an ethnomethodologically inclined anthropologist.8 Some of my own papers from 1985 onwards argue a related case arising out of my work in the sociology of science.9

The trouble with all these approaches, including my own, is that they are so good at explaining the difficulties of transferring knowledge in written form and of embodying knowledge in computer programs, and so forth, that they fail to explain the residual successes of formal approaches. If so much knowledge rests upon agreements within forms of life, what is happening when knowledge is transferred via bits of paper or floppy disks? We know that much less is transferred this way than we once believed, but something is being encapsulated in symbols or we would not use them. How can it be that artifacts that do not share our forms of life can "have knowledge," and how can we share it? In the light of modern theories, it is this that needs explaining: What is "formal" knowledge? To move forward I think we need to locate the difference between type 1 and type 4 knowledge in different types of human action.10
Regular Action

One way of looking at encultured knowledge is to say that there is no one-to-one mapping between human action and observable behavior; the same act can be instantiated by many different behaviors. For example, paying money can be done by passing metal or paper tokens, writing a cheque, offering a plastic card and signing, and so forth, and each of these can be done in many different ways. Furthermore, the same behavior may be the instantiation of many different acts. For example, signing one's name might be paying money, agreeing to a divorce, a part of the act of sending a love letter, the final flourish of a suicide note, or providing a specimen signature for the bank. That is what it is like to act in a society;
the co-ordination of apparently uncorrelated behaviors into concerted acts is what we learn as we become social beings. To relate this point to the discussion of language, we can notice that there are many different ways of saying the same thing; the different verbal formulations are different "behaviors" corresponding to the same speech acts. To recognise which are appropriate and which inappropriate ways of saying something at any particular time and place, one has to be a member of the relevant linguistic community. "Droogies" was once a widely useful piece of linguistic behavior; now it is only narrowly useful.

Call action in which there is no straightforward correspondence between intention and behavior "regular action." Most of the time most of our acts are regular acts. As well as meaning "normal" or "everyday," "regular" also connotes "rule" and "routine." This is both useful and misleading. The useful part is that normal action is usually "rule following" and sometimes "rule establishing." The misleading part is that we tend to think that it is easy to understand or describe the rules which we follow when we are doing regular action; in fact, we can't, and this causes all the big problems for the social sciences.

We know that normal action is rule-following because we nearly always know when we have broken the rules. For example, it is clear that there are rules applying to my actions as a pedestrian because I will get into trouble if I break them-perhaps by walking too close to the single person on an otherwise deserted beach, or by trying to keep too far away from others in a crowded street-but I cannot encapsulate all that I know about the proper way to walk in a formula. The little bits of rule that I can provide-such as those in the previous sentence-are full of undefined terms. I have not defined "close," "distant," nor "crowded," nor can I define all my terms on pain of regress. What is more, what counts as following the rule varies from society to society and situation to situation. A set recipe for walking will be found wanting on the first occasion of its use in unanticipated circumstances; perhaps the next people on the beach will be actors in a perfume advertisement playing out the mysterious attractiveness of a particular aroma, while the next people in the street will be living in the time of a contagious epidemic disease!

The problem of understanding regular action is well known among philosophers of social science and a proportion of social scientists; it explains why skills have to be transferred through interpersonal contact or "socialization" rather than through book learning; it underpins ideas such as tacit knowledge or apprenticeship. The philosophy of regular action shows why social science has not worked in the way many people expected it to work. The orderliness of action is not evident to observers who are not also members of the society under examination. What is more, the order is always changing.11

Note that to make society work many of our actions have to be executed in different ways. To give just one example, studies of factories have shown us that even on production lines informed by Taylorist "scientific management" principles, there must be subtle variations in the way the job is executed.12 Indeed, one effective form of industrial disruption is to act too uniformly-in Britain this form of action is known as a "work to rule."
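The many-to-many relation between acts and behaviors described above can be pictured as data. The sketch below is only a toy rendering of the point, with invented entries; its moral is that no function from observed behavior to act exists without the social context.

```python
# Illustrative only: the same act has many behaviors, and the same behavior
# instantiates many acts, so behavior alone does not determine the act.
act_to_behaviors = {
    "paying money": {"pass cash", "write a cheque", "sign a card slip"},
    "agreeing to a divorce": {"sign one's name"},
    "sending a love letter": {"sign one's name"},
}

# Invert the mapping to see what an outside observer is up against.
behavior_to_acts: dict[str, set[str]] = {}
for act, behaviors in act_to_behaviors.items():
    for behavior in behaviors:
        behavior_to_acts.setdefault(behavior, set()).add(act)

print(behavior_to_acts["sign one's name"])
# {'agreeing to a divorce', 'sending a love letter'} -- one behavior, two acts
```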
BSA Introduced

I now introduce a special class of acts, "behavior-specific acts," which we reserve for maintaining routines. This class seems to have been overlooked in the rush to stress the context-boundedness of ordinary acts. In behavior-specific acts we attempt to maintain a one-to-one mapping between our actions and observable behaviors. It is important to note that the characteristic feature of behavior-specific acts is that we try to execute them with the same spatio-temporal behavior, not that we succeed; in this class of act this is what we prefer, and what we intend. The archetypical example of this kind of action is caricature production-line work, for example, as portrayed by Charlie Chaplin in "Modern Times."13 There are, however, much less obvious examples of behavior-specific action, such as the standard golf swing or competition high-board diving or simple arithmetical operations. Certain actions are intrinsically behavior-specific (e.g., marching), certain actions are intrinsically non-behavior-specific (e.g., writing love letters), but many can be executed in either way depending on intention and desired outcome. Many regular acts contain elements of behavior-specific action.

Because behavior-specific action is not always successfully executed, and because, in regular action, the same behavior may sometimes be the instantiation of quite different acts, it is not possible to be certain whether or not behavior-specific action is being executed merely by observation from the outside. It is clear enough, however, that such a class of acts exists, because I can try to do my production-line work or my golf swing in a behaviorally repetitious way if I wish, or not if I don't wish.

The crucial point about behavior-specific action is that when it is successfully carried out, as far as an outside observer is concerned, the behavior associated with an act can be substituted for the act itself without loss. The consequences of all successfully executed behavior-specific acts are precisely the same as the consequences of those pieces of behavior which always instantiate the act. Take the intention away and, as far as an outside observer is concerned, nothing is lost. What this means is that anyone or anything that can follow the set of rules describing the behavior can, in effect, reproduce the act. Hence behavior-specific acts are transmittable even across cultures and are mechanisable. Compare this with regular action: in that case there is no way for an outsider to substitute behavior for action because the appropriate instantiation of the action depends on the ever-changing social context; behavior is not tied to acts in a regular way.

There are many occasions when our attempts to execute behavior-specific action fail. Human beings are not very good at it. In these cases we count the substitution of the behavior for the act (for example, through mechanical automation) as an improvement. If all action were behavior-specific action there would be a regular correlation between behavior and action. In that case the big problems of the social sciences would not have emerged; sociology could be a straightforwardly observational science like astronomy or geology or the idealised versions of economics or behaviorist psychology.
Because, in the case of behavior-specific action, behavior can substitute for the act as far as an outside observer is concerned, it is possible to replace the act with the behavior, to describe the act by describing the behavior, to transfer the description of the act in written form and, sometimes, to learn how to execute the act from a written description. That, as I have suggested above, is how we can have a limited systematic social science which observes those parts of human behavior which are predominantly behavior-specific,14 and how we can have machines such as pocket calculators which inscribe the behavior-specific parts of arithmetic in their programs,15 and how we can learn from books and manuals which describe the behavior which is to be executed if the act is to be successfully accomplished.16 The re-instantiation of a behavioral "repertoire," whether by a machine or by other human beings (who either do or do not understand what they are doing), will mimic the original act. In this sense, behavior-specific action is decontextualisable. It is the only form of action which is not essentially situated.17
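Since the consequences of a successfully executed behavior-specific act are just the consequences of its behavior, a finite rule set suffices to reproduce the act. Here is a minimal sketch along those lines, using column-by-column addition as the pocket-calculator example; the function name and implementation are mine, offered only as an illustration of rule-following, not as anything from the text.

```python
# A rule-follower for the behavior-specific part of arithmetic: align the
# numerals, add column by column, carry. No understanding is required to
# reproduce the act; any rule-following device mimics it without loss.
def add_by_rule(a: str, b: str) -> str:
    a, b = a.zfill(len(b)), b.zfill(len(a))  # align the columns
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_rule("478", "964"))  # 1442 -- the behavior substitutes for the act
```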
REGULAR VS. BEHAVIOR-SPECIFIC ACTION AND OTHER WAYS OF CUTTING UP KNOWLEDGE

Consider how this way of dividing up action compares with approaches to human knowledge which are primarily concerned with the extent to which acts are self-consciously carried out. Both regular acts and behavior-specific acts can be executed with more or less self-consciousness. Figure 8-1 shows a 2 x 2 table which contrasts the two approaches. Inside the boxes is what follows about mechanisation from the theory of behavior-specific action. Other treatments differ.

In the treatment of skills by Dreyfus and Dreyfus, and many psychologists, the vertical dimension is all-important.18 At least one influential model takes it that competence is attained when the skillful person no longer has to think about what they are doing but "internalizes" the task. Dreyfus and Dreyfus argue that only novices use expressible rules to guide their actions while experts use intuitive, inexpressible competences. They use this argument to show why expert systems are able to mimic expertise only to the level of the novice.

These treatments have a large grain of truth, but for different reasons. The psychological theory touches upon one of the characteristics of the human organism-namely that we are not very good at doing certain things when we think about them. We certainly do get better at many skills when we, as it were, short-circuit the conscious part of the brain. The Dreyfus and Dreyfus model rests on the Wittgensteinian problem of rules that we have touched on in the early paragraphs of this paper. To prepare a full description of regular skilled action, ready to cope with every circumstance, would need an infinite regress of rules. Therefore, self-conscious rule following reproduces only a small subset of a skill and is the prerogative of the novice.

The large grain of truth is, however, not the whole truth. The psychological model is interesting only insofar as one is interested in the human being as an organism.
NATURE OF                 Regular action           Behavior-specific action
PERFORMANCE

Self-conscious            (1) Cannot be fully      (2) Easy to automate
                              automated

Un-self-conscious         (3) Cannot be fully      (4) Can be automated (e.g.,
                              automated                use record and playback)

FIGURE 8-1 Types of Act
One may well imagine other organisms that work perfectly without internalising the rules. Even among humans the ability to work fast and accurately while self-consciously following formulaic instructions varies immensely. One can even imagine a super-human who would not need to bother with internalisation at all for a range of tasks: suppose the person in the Chinese Room remembered the content of all the look-up tables as well as a table of pronunciation and learned to speak at normal speed by mentally referring to them in real time! What is more, there are some tasks, as we will see, that are not necessarily performed better without self-conscious attention, and others that can only be performed with attention.

What I am suggesting is that, firstly, the psychological "internalisation" model does not apply to all skills, and that, secondly, insofar as it does, organism-specific findings are not all that interesting if one is concerned with the nature and structure of knowledge. For example, to take an entirely different kind of organism-specific rule, it is said to be good to hum the "Blue Danube" while playing golf in order to keep the rhythm slow and smooth, but this tells you about humans, not about knowledge.

The large grain of truth in the Dreyfus and Dreyfus model is precisely that most skills are based on regular action, and in those cases their model applies for all the reasons we have seen. Because we cannot formulate the rules we cannot self-consciously "know what we are doing," and therefore even the fastest thinker will not be able to perform calculatively. But it does not apply where the action is behavior-specific. That is why their model, with its stress on the vertical dimension
of Figure 8-1, does not give an accurate prediction of what skills can be embedded in automated machines. If their model did give an accurate prediction, there would be no pocket calculators, for a lot of arithmetic is done without self-conscious attention to the rules.19

Let us now take a tour around Figure 8-1 and see what all this means. Consider first the left-hand pair of boxes. We can all agree that there is a range of skills that cannot be expertly performed by following a set of explicable rules and that in the case of these skills, only novices follow rules. In expert car-driving, Dreyfus's paradigm case of "intuitive" skill, familiar journeys are sometimes negotiated without any conscious awareness. For instance, on the journey to work the driver might be thinking of the day ahead, responding to variations in traffic without attention, and may not even be able to remember the journey. On the other hand, there are occasions when drivers do pay attention to details of traffic and the skills of car handling, perhaps even self-consciously comparing the current state of the traffic with previous experiences; on such occasions they would remember the details of the journey even if they were not self-consciously applying rules. This partitions non-rule-based skills into the upper and lower boxes on the left-hand side of my diagram and allows us to say that skills which cannot be described in a set of rules can, on occasion, be executed self-consciously if not calculatively. Indeed, there is no reason to think that in these cases un-self-conscious performance is better.

Turn now to the right-hand pair of boxes. Imagine a novice who had somehow learned to drive by following rules self-consciously but because of some kind of disability was unable to progress to the level of "intuitive expert." That person would always remain a poor driver even though it might be that they eventually "internalised" the novice's rules. In terms of the table, they would have moved from box 2 to box 4, but they would still be a novice. Thus lack of self-consciousness is not a condition of expertise, for inexpert actions may be un-self-consciously performed.

Think now about the golf swing, or parade-ground drill. Humans have to perform these skills without much in the way of conscious effort if they are to be performed well. Thus, box 4 contains skilled actions as well as unskilled actions. Now try repeating the following at high speed and without error:

I'm not a pheasant plucker,
I'm a pheasant plucker's son,
And I'm only plucking pheasant
Till the pheasant pluckers come.

That requires skill and self-conscious deliberation. Or again, consider the test for alcoholic intoxication that, it was said, was used by the British police before the invention of the "breathalyser." One was allowed to use all the concentration one wanted to articulate "the Leith police dismisseth us."20 Thus there are skilled tasks as well as unskilled performances located in box 2-each requiring conscious effort.
Going back to the left-hand side, it is not normal to refer to everything that happens there as "skilled," since it includes such things as being able to form a sentence in one's native language. Thus all four boxes contain actions that are normally referred to as both skilled and unskilled. The only convincing mapping is that nothing on the left-hand side can be mastered without socialization (nor can it be mastered by machines), whereas everything on the right-hand side, including the skillful performances, could be (at least in principle).21

The regular/behavior-specific analysis of human abilities seems to be new, or at least "newish." We can see from the above analysis that the distinction does not map onto the distinction between self-consciousness and internalization. Nor does it map onto the difference between skilled and unskilled performance; there are a minority of activities that we refer to as skillful that are executed in a behavior-specific way.22 The difference between regular and behavior-specific is also not the same as the difference between acts that we value and acts that we do not value, nor between those that are meaningful as opposed to those that are demeaning; many acts that we normally prefer to execute in a behavior-specific way are highly valued-these include high-board diving and the golf swing. Some behavior-specific acts were once highly valued but are less valued nowadays. An obvious case is the ability to do mental arithmetic, once the prerogative of the really clever, but devalued since the larger part of it can now be done by pocket calculators. Finally, we may note that the difference between the two types of act is not the same as the difference between cognitive and sensory-motor abilities.23

Are There Domains of Behavior-Specific Action?

Is spoken language behavior-specific action? Is chess behavior-specific action? These are not good questions. The term "behavior-specific" does not identify knowledge domains; it identifies types of action. It does not apply to language or chess as such; it applies to the way people use language or play chess. Thus, for most people, language use is not behavior-specific action, while for the controllers in George Orwell's novel 1984 the aim was to make language into behavior-specific action. In a 1984-like world, just as in Searle's "Chinese Room," and in the world that certain machine-translation enthusiasts would like to bring down upon us, language use would be behavior-specific action. There is more than one way to speak a language.

The same is true of something like chess-playing, though here it is less obvious. We tend to ask what sort of knowledge chess-knowledge is-formal or informal. We conclude that it is formal because in principle there is an exhaustive set of rules for winning the game. But humans do not play chess like computers-at least not all the time. Human chess-playing is part behavior-specific action and part not. The first few moves of chess openings are usually played by skilled players as behavior-specific action.24 Unfortunately, I know no openings and cannot play those first few moves in this way (but I wish I could). There is not the slightest doubt that in terms of what counts as good chess in contemporary
culture, all chess computers play openings better than I. Some chess endings are also generally performed as behavior-specific action. Skill at chess openings and endings increases as the ability to accomplish behavior-specific action increases. The middle game of chess is not behavior-specific action as far as most good human players are concerned; at least some of the middle game has to do with the quintessentially non-behavior-specific skill of creating surprises. Machines, on the other hand, do play a middle game that mimics what human play would be like were it to be played as behavior-specific action.25 The great chess-playing competition between machines and humans over the last couple of decades has been between the regular-action middle game of the best humans and the rule-based procedures of chess programs; slowly the programs are winning. If human brains were better at behavior-specific action, then that, I would guess, is how chess masters would now be playing.

On the other hand, what would happen to the culture of chess should a chess-playing machine be built that could play the game exhaustively and therefore win every time with the same moves?26 It could be that the nature of the game of chess would change; people would care less about winning and more about the aesthetics. In that case, human chess would become quintessentially non-behavior-specific. The idea of a competition between human players and machines would then seem as absurd as an adding competition between a human and a computer or a pulling competition between a strong man and a tractor.

One sees that, special cases apart-I have mentioned marching and the writing of love letters-it does not make sense to say that there are domains of behavior-specific and regular action.27 Rather, one notes that some elements in domains are generally performed in a behavior-specific way whereas other elements are performed as regular actions. The mix of the two in any domain may change for many reasons, some having to do with individual choice and some having to do with changes in the "form-of-life" which activities comprise.
HUMANS, MACHINES, AND THE TURING TEST

One way of applying these ideas to the relationship between humans and machines is to reconsider the "Turing Test." The Turing Test is a controlled experiment. To decide whether a machine is "intelligent" we compare it with a control-usually a human; we check to see if it has the same linguistic abilities. The experimenter (or judge) is "blinded"-he or she does not know which is the experimental device and which is the control. If the experimenter cannot tell machine from control, then we say that the machine is as "intelligent" as the control.

Exactly what "intelligence" means under this approach depends upon how the test is set up. If the control were a block of wood rather than a human, then the test would show only that the experimental device was as "intelligent" as a block of wood. If, on the other hand, the judge was a block of wood rather than a human, we would not find the outcome very interesting. There is a range of possibilities for both control and judge varying from block of wood, through dim
human, to very sensible human, which imply very different things about the ability of any machine that passes the test. There are other variations in the way the protocol can be arranged: Is the test short or long? Is there one run or more than one? Is the typing filtered through a spell-checker or not? And so forth. Depending on the protocol, the Turing Test tests for different things.

Take the version of the Turing Test implicit in Searle's "Chinese Room" critique. In this case, responses to written questions in the Chinese language are produced by a human operator who knows no Chinese but has recourse to a huge stock of look-up tables that tell him or her which strings of Chinese symbols are appropriate outputs to which inputs. (The control in the case of the Chinese Room is "virtual.") Searle hypothesises a Chinese Room that passes the test-i.e., produces convincing Chinese answers to Chinese questions. But for The Room to do this there must be some constraints on the protocol. For example, having noticed the way that languages change over time, we can see that either the life span of The Room (and therefore the test) must be short so that the Chinese language doesn't change much, or the interrogators must conspire to limit their questions to that temporal cross-section of the Chinese language represented in the stack of look-up tables first placed in the room, or the look-up tables must be continually updated as the Chinese language changes.

Under the first two constraints, the knowledge of Chinese contained in the Room is not type 4 (encultured) knowledge. It is, rather, a frozen cross-section of a language-an elaborated version of what is found in a computer spell-checker. It is a set of formulae for the reproduction of the behavior associated with behavior-specific action. It is easy to see that this might be encapsulated in symbols.28

Now suppose, for argument's sake, that the test is long enough for the Chinese language to change while the questions are being asked, or that it is repeated again and again over a long period, and that the interrogators do not conspire to keep their questions within the bounds of the linguistic cross-section encapsulated in the original look-up tables. If the stock of look-up tables, etc., remains the same, The Room will become outdated-it will begin to fail to answer questions convincingly. Suppose instead that the look-up tables are continually updated by attendants. Some of the attendants will have to be in day-to-day contact with changing fashions in Chinese-they will have to share Chinese culture. Thus, somewhere in the mechanism there have to be people who do understand Chinese sufficiently well to know the difference between the Chinese equivalents of "to be or not to be" and "what will it be, my droogies" at the time that The Room is in operation.

Note that the two types of room-synchronic and diachronic-are distinguishable given the right protocol. It is true that the person using the look-up tables in the diachronic room still does not understand Chinese, but among the attendants there must be some who do. Under the extended protocol, any Chinese Room that passed the test would have to contain type 4 knowledge, and I have argued that it is to be found in those who update the look-up tables.29 It is these people who link the diachronic room into society-who make it a social entity.30
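The synchronic/diachronic distinction can be put in a few lines of code. The sketch below is my own drastic simplification, not Searle's or Collins's construction: the Room is a bare lookup table, and the diachronic variant differs only in that culturally embedded attendants keep revising the table.

```python
# A hedged sketch of the two Rooms. The table entries are invented.
class Room:
    def __init__(self, lookup_tables: dict):
        self.tables = dict(lookup_tables)  # a frozen cross-section of a language

    def answer(self, question: str) -> str:
        # The operator needs no understanding, only rule-following.
        return self.tables.get(question, "???")

# Synchronic Room: the tables never change, so it slowly goes out of date.
synchronic = Room({"How are you?": "Fine, thanks."})

# Diachronic Room: attendants who DO share the living culture refresh the
# tables as usage drifts; the encultured knowledge lives in them, not in it.
diachronic = Room({"How are you?": "Fine, thanks."})
diachronic.tables["What's up, droogies?"] = "Not much."  # attendant update
```

The point survives the simplification: whatever encultured knowledge the diachronic Room displays lives in the attendants who perform the updates, not in the table or its operator.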
Under the extended protocol, the Turing Test becomes a test of membership of social groups. It does this by comparing the abilities of experimental object and control in miniature social interactions with the interrogator. Under this protocol, passing the test signifies social intelligence or the possession of encultured knowledge.
A Simplified Turing Test

Once one sees this point, it is possible to simplify the Turing Test greatly while still using it to check for embeddedness in society.31 The new test requires a determined judge, an intelligent and literate control who shares the broad cultural background of the judge, and the machine with which the control is to be compared. The judge provides both "Control" and "Machine" with copies of a few typed paragraphs (in a clear, machine-readable font) of somewhat mis-spelled and otherwise mucked-about English, which neither has seen before. It is important that the paragraphs are previously unseen, for it is easy to devise a program to transliterate an example once it has been thought through. Once presented, Control and Machine have, say, an hour to transliterate the passages into normal English. Machine will have the text presented to its scanner and its output will be a second text. Control will type his/her transliteration into a word processor to be printed out by the same printer as is used by Machine. The judge will then be given the printed texts and will have to work out which has been transliterated by Control and which by Machine. Here is a specimen of the sort of paragraph the judge would present:

MARY: The next thing I want you to do is spell a word that means a religious ceremony.
JOHN: You mean rite. Do you want me to spell it out loud?
MARY: No, I want you to write it.
JOHN: I'm tired. All you ever want me to do is write, write, write.
MARY: That's unfair, I just want you to write, write, write.
JOHN: OK, I'll write, write.
MARY: Write.
The point of this simplified test is that the hard thing for a machine to do in a Turing Test is to demonstrate the skill of repairing typed English conversation-the interactional stuff is mostly icing on the cake. The simplified test is designed to draw on all the culture-bound common sense needed to navigate the domain of error correction in printed English. This is the only kind of skill that can be tested through the medium of the typed word, but it is quite sufficient, if the test is carefully designed, to enable us to tell the socialized from the unsocialized.32 It seems to me that if a machine could pass a carefully designed version of this little test, all the significant problems of artificial intelligence would have been solved-the rest would be research and development.
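To see why the repair task resists mechanisation, consider what a context-free corrector can offer for the passage above. The homophone table below is an invented fragment of my own, but the difficulty it exhibits is general: the candidate repairs can be enumerated mechanically, while choosing among them requires encultured common sense.

```python
# Illustrative only: a context-free corrector can list homophone candidates
# but cannot decide which one the Mary/John dialogue requires at each point.
HOMOPHONES = {
    "rite": {"rite", "write", "right"},
    "write": {"rite", "write", "right"},
    "right": {"rite", "write", "right"},
}

def candidate_repairs(word: str) -> set:
    return HOMOPHONES.get(word.lower(), {word})

# Every occurrence gets the same three candidates; only a socialized reader
# knows which occurrences must stay "write" and which must become "rite".
print(candidate_repairs("write"))  # {'rite', 'write', 'right'}
```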
CONCLUSION

What I have tried to do in this paper is divide up our knowledge and skills. The way I have done this is based on two types of human action. I believe the division is fundamental and lies at the root of the difference between "tacit" knowledge, knowledge that appears to be located in society, and "formal" knowledge, which can be transferred in symbolic form and encoded into machines and other artifacts. Those who come out of a Wittgensteinian, ethnomethodological, or sociology of scientific knowledge tradition have, whether they know it or not, a problem with the notion of the formal or routine. In fact, everyone has this problem, but it is less surprising that others have not noticed it. I think that the idea of behavior-specific action is at least the beginning of a solution.

My claim is that this way of looking at human action allows us to understand better the shifting patterns and manner of execution of various human competences. It also helps us understand the manner and potential for the delegation of our competences to symbols, computers, and other machinery. It also helps us see the ways in which machines are better than humans: it shows that many of the things that humans do, they do because they are unable to function in the machine-like way that they would prefer. It also shows that there are many activities where the question of machine replacement simply does not arise, or arises as, at best, an asymptotic approximation to human abilities. We notice, then, what humans are good at and what machines are good at, and how these competences meet in the changing pattern of human activity. It seems to me that the analysis of skill, knowledge, human abilities, or whatever one wants to call the sets of activities for which we use our minds and bodies, must start from this distinction, and that understanding how much of what we do can be taken over by machines rests on understanding the same distinction.
CODA: TURING'S SOCIOLOGICAL PREDICTION

Alan Turing in one of his famous papers said that by the end of the century he expected ". . . the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."33,34 There are at least four ways in which we might move toward such a state of affairs: (i) machines get better at mimicking us; (ii) we become more charitable to machines; (iii) we start to behave more like machines; (iv) our image of ourselves becomes more like our image of machines. Let us consider each of these in turn.
. .
Machines Get Better at Mimicking Us

Unless machines can become members of our society they can appear to mimic our acts only by developing more and more ramified behaviors. This process is a good thing so long as it is not misconstrued. Ramification of behaviors makes for new and better tools, not new and better people.

The rate at which intelligent tools can be improved is not easy to predict. The problem is analogous to predicting the likelihood of the existence of life on other planets. There are a large number of other planets, but the probability that the conditions for life exist on a planet is astronomically small. Where two large numbers have to be compared, a very small error can make a lot of difference to the outcome. The two large numbers in the case of intelligent machines are the exponential growth in the power of machines and the exponential increase in the number of rules that are needed to make behavior approximate to the appearance of regular action. My guess is that progress is slowing down fast, but the model sets no limit to asymptotic progress.
We Become More Charitable to Machines

As we become more familiar with machines we repair their deficiencies without noticing-in other words, we make good their inabilities in the same charitable way as we make good the inabilities of the inarticulate humans among us. Already the use of words and general educated opinion has changed sufficiently to allow us to talk of, say, calculators in the fashion that Turing predicted would apply to intelligence in general. We speak of calculators as being "better at arithmetic than ourselves" or "obviating the need for humans to do arithmetic," though close examination shows that neither of these sentiments is exactly right.35 If we generalise this process of becoming more charitable, we will lose sight of our special abilities.
We Start to Behave More Like Machines Ourselves

Consider the deficiencies of machine translation. One solution is to standardize the way we write English:

In the US, it is usual practice amongst some large firms to send entire manuals for online translation on a mainframe computer. It works well. The manual writers are trained to use short sentences, cut out all ambiguities from the text (by repeating nouns instead of using pronouns, for example) and to use a limited, powerful vocabulary. Europeans have shown little of this discipline.36
For "Europeans" there are, of course, no significant ambiguities in the texts they write; it is just that the texts are not written in a behavior-specific (1984-like) way. In what is usually counted as good writing, different words are used to represent the same idea and different ideas are represented by the same words. The parallel with regular action is complete. The problem is that translation machines cannot participate in the action of writing unless it is behavior-specific action.
If we adjust our writing style to make it universally behavior-specific then mechanized translators will be as good as human translators and we are more likely to come to speak of machines "thinking," just as Turing predicted. What is true of translation is true of all our actions. The theory of action outlined above allows that the way we execute our actions can change. Change in the way we act is not necessarily bad, even when it is change from regular to behavior-specific action, but we want to continue to be free to decide how best to carry out our acts. We do not want to lose our freedom of action so as to accommodate the behavior of machines.
Our Image of Ourselves Becomes More Like Our Image of Machines

If we think of ourselves as machines, we will see our departures from the machine-like ideal as a matter of human frailty rather than human creativity. We need to counter the tendency to think of humans as inefficient machines. There is a difference between the way humans act and the way machines mimic most of those acts. I have argued that machines can mimic us only in those cases where we prefer to do things in a behavior-specific way. Whether we come to speak of machines thinking without fear of contradiction will have something to do with whether this argument is more or less convincing than the arguments of those who think of social life as continuous with the world of things. Intelligent machines are among the most useful and interesting tools that we have developed. But if we use them with too much uncritical charity, or if we start to think of ourselves as machines, or model our behavior on their behavior, or concentrate so hard on our own boundary-making and maintaining practices that we convince ourselves there is nothing to boundaries except what we make of them, we will lose sight of what we are.
NOTES

I would like to thank Güven Güzeldere for suggestions and editorial assistance on this essay. This paper is adapted from Harry M. Collins, "The Structure of Knowledge," Social Research, 60 (Spring 1993) 95-116. The section on the Turing Test also contains elements taken from the final chapter of my book, Artificial Experts: Social Knowledge and Intelligent Machines (Cambridge, MA: MIT Press, 1990) and from another of my papers: "Embedded or Embodied: Hubert Dreyfus's What Computers Still Can't Do," Artificial Intelligence (forthcoming).

1. The mundane applications of Hollywood's brilliant scientific breakthroughs are depressing.

2. This is not just a matter of necessary conditions for tennis playing; we don't want to say that tennis-playing knowledge is contained in the blood, even though a person without blood could not play tennis. Nor do we want to say that the body is like a tool and that tennis-playing knowledge is contained in the racket (after all, we can transfer a tennis racket with hardly any transfer of tennis-playing ability).

3. See, for example, Hubert Dreyfus, What Computers Can't Do (1972; New York: Harper and Row, 1979).

4. The first is so well embedded in society I need not provide a reference for it; the second is from Anthony Burgess's A Clockwork Orange (London: Heinemann, 1962).

5. But it is the easiest to explain so I have stayed with this dimension throughout the paper. I argue elsewhere that skilled speakers of a language are able to make all kinds of "repairs" to damaged strings of symbols that the Chinese Room would not. For discussion of these other ways in which the social embeddedness of language shows itself, see Collins, Artificial Experts, and Harry M. Collins, "Hubert Dreyfus, Forms of Life, and a Simple Test for Machine Intelligence," Social Studies of Science, 22 (1992) 726-39.

6. Harry M. Collins, "The TEA Set: Tacit Knowledge and Scientific Networks," Science Studies, 4 (1974) 165-86; Harry M. Collins, "The Seven Sexes: A Study in the Sociology of a Phenomenon, or the Replication of Experiments in Physics," Sociology, 9 (1975) 205-24; Harry M. Collins, Changing Order: Replication and Induction in Scientific Practice (London and Beverly Hills: Sage, 1985). For the origin of the term "tacit knowledge," see Michael Polanyi, Personal Knowledge (London: Routledge, 1958).

7. This way of thinking is deeply rooted in the later philosophy of Wittgenstein. For example, see Ludwig Wittgenstein, Philosophical Investigations (Oxford: Blackwell, 1953); and David Bloor, Wittgenstein: A Social Theory of Knowledge (London: Macmillan, 1983).

8. Dreyfus; Lucy Suchman, Plans and Situated Actions: The Problem of Human-Machine Communication (Cambridge: Cambridge UP, 1987); see also Terry Winograd and Fernando Flores, Understanding Computers and Cognition: A New Foundation for Design (New Jersey: Ablex, 1986).

9. See, for example, H. M. Collins, R. H. Green, and R. C. Draper, "Where's the Expertise: Expert Systems as a Medium of Knowledge Transfer," Expert Systems 85, ed. Martin J. Merry (Cambridge, UK: Cambridge UP, 1985) 323-334.

10. For the first introduction of the distinction between "regular action" and "behavior-specific action," see Collins, Artificial Experts. For an extended philosophical analysis which analyses types of action into further sub-categories, see Harry M. Collins and M. Kusch, "Two Kinds of Actions: A Phenomenological Study," Philosophy and Phenomenological Research (forthcoming).

11. For an analysis of the way scientific order is changed, see Collins, Changing Order.

12. Kenneth C. Kusterer, Know-How on the Job: The Important Working Knowledge of "Unskilled" Workers (Boulder: Westview Press, 1978).

13. Note that the Taylorist ideal usually is a caricature, but this does not mean it cannot be found under special circumstances.

14. For example, it works for certain limited types of economic behavior or the behavior of people in specially arranged laboratory conditions. This kind of science is especially appropriate where behavior-specific action is enforced, as in F. W. Taylor's "scientific management."

15. For a full analysis of what pocket calculators can and cannot do, and how to see what they do as behavior-specific action, see Collins, Artificial Experts.

16. So long as the behavioral repertoire we are to master is not too complicated.
17. Suchman.

18. See especially Hubert Dreyfus and Stuart Dreyfus, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer (New York: Free Press, 1986).

19. Dreyfus tries to sidestep the problem by dividing knowledge domains into two types, one of which is "formalisable." Unfortunately, it is not possible to have a formalisable domain under the Wittgensteinian treatment which Dreyfus prefers. My "behavior-specific action" makes what Dreyfus calls a formal domain possible. Our theories are largely co-extensive until we try to predict how devices that do not use formal rules will perform. My theory leads one to be far more pessimistic about, say, neural nets. (See Harry M. Collins, "Will Machines Ever Think?" New Scientist, 1826 (June 20, 1992) 36110.)

20. These tasks, incidentally, can be done without difficulty by existing and conceivable talking computers, showing again that the psychological and philosophical dimensions of the skill problem need to be pulled apart.

21. The model does not limit the role of machines to mimicking simple-minded repetitious morons; the ramifications of behavior-specific actions can be such that they approach regular action asymptotically. One of the programs of observation and research that the theory suggests, however, is to break down the performance of machines into its constituent behaviors. This program applies as much to neural nets and other exotic devices as to more straightforward programs.

22. Along the way we have established that self-conscious/internalized also does not map onto unskilled/skilled.

23. For a more detailed working out of the behavior-specific elements of arithmetic, see Collins, Artificial Experts.

24. I am now assuming away the elements of non-behavior-specificity that have to do with the shape of the pieces, the method of making moves, and so forth. I am thinking about long series of games using the same apparatus.

25. One has to be careful with one's locutions. We must not say that a machine "acts," and therefore we cannot say that a chess machine engages in behavior-specific action. Machines can only mimic behavior-specific action. Chess machines mimic the behavior-specific action of the first few moves of the skilled human chess player's game.

26. For argument's sake we will allow physicists to discover and learn to manipulate a new generation of sub-sub-atomic particles so small that 10"' of them could fit into a shoe box.

27. Even these special cases are historically specific.

28. I simplify here. See footnote 5, above.

29. I discuss the protocol of the Turing Test at some length in Artificial Experts.

30. The Turing Test is usually thought of as involving language, but there is no reason to stop at this. We could, for example, use the test with washing machines. To find out if a washing machine was intelligent, we would set it alongside a washer person, concealing both from a judge. The judge would interrogate the two by passing in sets of objects to be washed and examining the response. (We must imagine some mechanism for passing the objects into the machine and starting the cycle.) An unimaginative judge might pass in soiled clothes and, examining the washed garments, might be unable to tell which had been washed by machine and which by human. A more imaginative interrogator might pass in, perhaps, some soiled clothes, some clothes that were ripped and soaked in blood, some clothes with paper money in the pockets, some valuable oil paintings on canvas, or whatever, and examine the response to these. Again, embeddedness in social life is needed if the appropriate response is to be forthcoming.

31. This section of the paper is taken from Collins, "Embedded or Embodied."

32. It is worth noting for the combinatorially inclined that a look-up table exhaustively listing all corrected passages of about the above length-300 characters-including those for which the most appropriate response would be "I can't correct that," would contain 1OaX'" entries, compared to the, roughly, 10'" particles in the universe. The number of potentially correctible passages would be very much smaller of course but-I would guess-would still be beyond the bounds of brute strength methods. Note also that the correct response-of which there may be more than one-may vary from place to place and time to time as our linguistic culture changes.

33. The following section is adapted from Chapter 15 of Artificial Experts.

34. Alan Turing, "Computing Machinery and Intelligence," Mind, LIX, No. 236 (1950) 433-460. Reprinted in Douglas Hofstadter and Daniel Dennett, eds., The Mind's I (Harmondsworth: Penguin, 1982) 53-66, 57.

35. See Collins, Artificial Experts, ch. 4 and 5.

36. Derek J. Price, "The Advantages of Doubling in Dutch," The Guardian, 20 April 1989, 31.
PART FOUR
Knowledge Transfer
9
Collaborative Tools: A First Look
Michael Schrage
The next breakthrough won't be in the individual interface but in the team interface.
-John Seely Brown

People come out of a Colab session saying, "We've just done ten hours of work in ninety minutes," and they can't believe it.
-Mark Stefik
WHAT IS A COLLABORATIVE TOOL?

Over lunch in the cafeteria, you and a colleague wrangle over how best to structure a particularly troublesome negotiation. The discussion slowly devolves into an argument. He heatedly insists that the key points should be organized in a sequence that makes absolutely no sense to you. You assert that presenting the points his way runs counter to logic and diffuses the impact of the best argument your side can muster. Your colleague responds that you've completely misunderstood the priorities. His sequence of points might take the edge off the best argument, but it prevents your adversaries from directly attacking the weakest part of the proposal.

"Look," he says, taking out his black felt-tip pen, "I'll show you." He snatches up a napkin, unfolds it, and quickly outlines the structure of his arguments-underlining the key points for emphasis. He picks up another napkin and sketches out your presentation. He spreads out the two napkins on the table and points back and forth between them, explaining the error of your ways.
From Shared Minds: The New Technologies of Collaboration by Michael Schrage. Copyright (c) 1990 by Michael Schrage. Reprinted by permission of Random House, Inc.
"Just a second," you say, reaching for your pen, "that's not right." (Unfortunately, you have a ballpoint, so you have to borrow his pen.) You modify a word here, cross out one of the points there, and switch the order of two items on his outline. He acknowledges that it isn't a bad idea and adds another argument from your list. You nod approvingly. He picks up another napkin and outlines the new key points of the presentation, using the other two napkins as references. "Is this what you're saying?" he asks. You say yes, adding one more item to the list. You genuinely like the modifications. So does he. The disagreement has vanished. Your colleague fixed a weakness in the negotiating stance and you managed to preserve most of the impact of your best argument. The two of you argue briefly over who takes the napkin back to the office to be transformed into a nice laser-printed memo. The two other napkins get crumpled and tossed into the trash.

The felt-tip pen and the paper napkins are collaborative tools.

When John Dykstra, who has directed special effects for such films as Star Wars, meets with clients-producers, directors, what have you-to discuss new creative concepts, he greets them with a phalanx of three industrial designers armed with sketchpads. As Dykstra and his visitors chat, the three designers are furiously sketching-tangibly visualizing-the ideas being tossed about. As the sketches become intelligible, Dykstra points to one and says, "Is this what we're talking about?" If the client agrees, the other two industrial designers immediately key into the distinguishing features of that sketch, adopting its tone and style. Dykstra, the clients, and his trio of artists can go through dozens of iterations to come up with the look of a project or a special effect. The conversation interweaves with the sketches as a visual prototype is created. "You have to create a kernel from which you can grow a new concept," says Dykstra, "and you have to stimulate people to go beyond what they normally do in a conversation. . . . By causing people to think on their feet, you get fresher ideas." This little microcommunity of industrial designers, clients, and sketches is Dykstra's collaborative tool.

You, the sales manager, and the convention organizer have until a six p.m. deadline to agree on the design and layout of your company's trade-show booth. This is the key industry show and you're launching a new family of products that the chairman is extremely enthusiastic about. It's already three-thirty and there is still no agreement. You quickly draw a sketch of the L-shaped floor plan, rough in where the new products should be displayed, and fax the diagram to the sales manager. Fifteen minutes later, he calls you back screaming that your layout is stupid and that he's sending his version of the way the exhibit layout should be. The sales manager's sketch is better drawn than yours, but he's given too much space to the high-commission items and put some of the newer products in the back of the booth. You sketch in some changes on his diagram and fax it back with a note that points out the chairman's interests. A few minutes later, you receive a fax that keeps your modifications but has one more new product added. The layout is fine, but it looks a little cluttered. You call the sales manager. He's adamant: you changed the space allocation, he determines what has to be in the booth.
It's five o'clock and you fax a redone diagram to the trade-show floor coordinator with a scrawl at the bottom asking if there's any more space. Twenty minutes later, he calls you back and says you can have another hundred square feet, but it will cost you. He faxes back your layout with the additional space dotted in. You like it, but it's not in your budget. You fax it off to the sales manager with the dollar figure scribbled in. Ten minutes later, the fax comes back with his OK and initials. You fax the final version to the trade-show office. They acknowledge receipt. Everything's settled.

The collaborative tools here are papers, pens, telephones, and faxes.

Theater director Jonathan Miller, who has a keen aesthetic sense but can't draw to save his life, relies on his collection of three thousand postcards to communicate visually with his set designer. "We swap postcards and Xeroxes with each other," says Miller. "I show him how I want something to look, how the light should fall, how a cloth should drape. This is how we communicate when words aren't adequate."

Picture two engineers, colored felt-tip pens in hand, chatting away as they sketch out design permutations for a new product on the office whiteboard, or college students launching a group assault on the blackboard to solve a particularly annoying problem in linear algebra. This is how work really gets done. Walk through virtually any research lab or university and you're bound to find the office walls lined with blackboards or whiteboards, all scribbled and sketched upon. (In contrast, the walls of the business offices are either completely bare or feature some tasteful print or poster.) Indeed, the blackboard/whiteboard is the most pervasive of collaborative media. You won't find a world-class researcher at a world-class facility anywhere without a blackboard in the office. (This tradition goes back to medieval universities in Germany.) With the notable exception of colored chalk, there has been no fundamental advance in blackboard technology in over five hundred years. But, like paper, this hasn't prevented the blackboard from being an astonishingly reliable and resilient collaborative tool, as the following example demonstrates.
"On any given morning at the Laboratory of Molecular Biology in Cambridge," one observer writes, "the blackboard of Francis Crick or Sydney Brenner will commonly be found covered with logical trees. On the top line will be the hot new result just up from the laboratory or just in by letter or rumor. On the next line will be two or three alternative explanations, or a little list of 'what he did wrong.' Underneath will be a series of suggested experiments or controls that can reduce the number of possibilities. And so on. The tree grows during the day as one man or another comes in and argues about why one of the experiments wouldn't work or how it should be changed."
"I would point out something peculiar," says Francis Crick, "that when two scientists get together, they each get out a pencil and start sketching."

Actually, as Crick well knows, scientists often do much more than that. In Crick and Watson's bid to find the double helix-Crick a brilliant mathematician/crystallographer, the young Watson an expert on bacteriophages (a type of virus)-the scientists found the key to their success was a collaborative tool of their own invention. Rather than rely exclusively on X-ray crystallography patterns, organic chemistry data, and pencil sketches, the two continually built and rebuilt metal models of their proposed DNA structures. Ironically, such model building was looked upon by their colleagues as a peculiarly grubby form of three-dimensional draftsmanship.

"Helices were in the air, and you would have to be either obtuse or very obstinate not to think along helical lines," says Crick. "What [rival chemist Linus Pauling] did show us was that exact and careful model building could embody constraints that the final answer had in any case to satisfy. Sometimes this could lead to the correct structure, using only a minimum of the direct experimental evidence."

Both Watson and Crick recall in their memoirs that these jury-rigged metal structures were an indispensable part of the way they tested their theories, fit in new data, and created shared understandings about their individual perspectives. "Only a little encouragement was needed to get the final soldering accomplished in the next couple of hours," Watson recalled in his Double Helix. "The brightly shining metal plates were then immediately used to make a model in which for the first time all the DNA components were present. In about an hour I had arranged the atoms in positions which satisfied both the X-ray data and the laws of stereochemistry. The resulting helix was right-handed with two chains running in opposite directions."

Now at first glance, there's nothing at all unusual here. This is everyday stuff. These tools, call them collaborative or not, are readily at hand. And no one would argue about their usefulness. They readily embody the visual and verbal languages that people need when they have to do more than just transmit information. They make collaboration faster, better, and more effective. In the real world, collaborative tools, however primitive, are a pervasive and indispensable part of the creative process. That's why you find blackboards lining office walls in universities and R&D labs all over the world. People on the path of innovation and discovery need to sketch and they need to build. They need the insights that only a visual or tactile representation of the problem can evoke.

Imagine being forced to express yourself at work in words of only one syllable. Yes, you would be able to communicate effectively in most situations, but the frustration level would be as high as the precision and richness of your language would be low. The images, maps, and perceptions bouncing around in people's brains must be given a form that other people's images, maps, and perceptions can shape, alter, or otherwise add value to.

"If you have a model, you know what the permissible structures are," says Nobel laureate Linus Pauling, the main rival to Watson and Crick in the race to the double helix. "The models themselves permit you to throw out a larger number of structures than might otherwise be thought possible. But then, I think that the greatest value of models is their contribution to the process of originating new ideas."
Model building is now quite a bit more respectable in biochemistry than it was in Watson and Crick's day. That such tools were an integral part of the discovery of the double helix helped make that so.

Collaborative modeling pops up in all sorts of contexts. Cardiac surgeons and plastic surgeons huddle over models, charts, and computer screens to plot out their operations. Architect Kevin Roche, a protégé of the great Eero Saarinen, says that his firm crafts as many as forty models of a proposed structure to give his clients a sense of spatial and aesthetic orientation. Industrial designers increasingly rely on models and prototypes to design with users instead of for them. Instead of relying simply on what people say, designers of new products and services often encourage customers to build a conceptual model of the innovation and actually diagram how they want to use it. Diagrams are then hardened into prototypes that customers can see, feel, and manipulate. Rapid prototyping-the ability to quickly build a computer simulation, a mechanical model, or even a cardboard mock-up of the innovation-has become a key to such customer collaboration. The prototype becomes the vocabulary of the innovation, and each successive prototype enlarges that vocabulary and deepens both designer and customer understanding. In effect, says John Rheinfrank, a designer with Fitch RichardsonSmith who has worked with both consumer and industrial clients, the conceptual models and prototypes "become the clay" that customers help mold into the final product. "We're still trying to unravel what it means to design a product where the content of the product is constantly negotiable." This holds true whether the product is a computer display, a telephone keypad for an office system, an insurance claims form, or state-of-the-art computer software.

These rapid prototypes aren't one-shot deals: they aren't frozen in final form. They're collaborative learning and design tools. They're visual and conversational stimuli. They're a medium of expression. There's nothing intrinsically sophisticated about them-they represent an attitude as much as they do a technology-but they help get the job done.

In many respects, these collaborative tools-blackboards, whiteboards, metal models-are as essential to the process of creation as new instruments have been to the advance of science and technology. The telescope and radio telescope completely redefined astronomy. The gradual evolution of these collaborative tools is similarly redefining collaboration. People can create and discover things with one another in ways that were previously impossible.

What would it mean if the power and versatility of these simple tools for collaboration could be amplified ten-fold? A hundred-fold? A thousand-fold? What if technology could augment the process of collaboration with the ease that a pocket calculator augments computation? What new kinds of conversation and collaboration would occur? How would conversation and collaboration be different? What new insights into creativity and discovery would these new tools yield? These questions are at the very core of this new epoch of interpersonal interaction. The blackboard may have served us well for hundreds of years, but maybe it's time for a change.
The issue isn't automating collaboration; it's using technology to enhance the collaborative relationship. Technology here doesn't substitute for people; it complements them.

In the sixties, Douglas Engelbart (in many respects the conceptual godfather of collaborative technologies) had no tools or prototypes to prove his thesis that technology could augment intellect-the technology simply didn't exist. So he cleverly demonstrated the converse: that technology could "disaugment" intellect. Engelbart asked a few people to write with a pencil. He analyzed that sample. Then he attached a brick to the pencil and asked them to write some more. After a few minutes, Engelbart coolly observed, the quality and legibility of the writing had markedly deteriorated. The brickified pencil corrupted easy expression. If one could handicap intellect by attaching obstacles to existing media for expression, Engelbart argued, why couldn't one augment intellect by removing the obstacles inherent within existing media?

In the collaborative context, those obstacles can be as obvious as the fact that napkins rip, people have illegible handwriting, and blackboards run out of space. But there are other, more subtle ways that traditional media constrain the collaborative process as surely as if a brick were attached.

Picture a global telephone network that has just one tiny glitch: an automatic three-second delay between speakers. From the moment you say hello to the instant you hear a response, six full seconds elapse. (Check your watch the next time you're on the phone.) What kind of discussion can you have under those circumstances? What does an argument sound like? What would spontaneity mean under these constraints? What does collaboration mean within these constraints? Even in ordinary conversations, the strictures of taking turns, interrupting, and maintaining conversational flow can make a collaborative effort very difficult. Like the brickified pencil, the time-delayed telephone corrupts fluent expression.

This little experiment in "disaugmentation" underscores another point: even under the best of circumstances, it's difficult to keep track of what's been said in conversation. Conversations-time-delayed or not-are ephemeral; the words vanish the instant they've been uttered. Even when taking notes, one can rarely, if ever, get a perfect transcript because of the inevitable discrepancies between what's said and what's heard. People generally respond to what's just been said, not something said seven or eight minutes earlier. Conversations don't have memories; only their participants do.

The serial and ephemeral nature of conversation, then, subtly works against collaboration. In most conversations, people take turns exchanging information, not sharing it. In most conversations, the absence of memory means a useful phrase or expression can be distorted or lost. We frequently rely on the transactional model of communication discussed earlier. For most of us, that looks like this:

    Sender/Receiver --- Conversation --- Receiver/Sender
The collaborative model-the model that captures the napkins, faxes, blackboards, whiteboards, musical notations, and the helical intricacies of DNA models-is quite different:
                     Shared Space
                    /            \
    Receiver/Sender --- Conversation --- Receiver/Sender
Shared space literally adds a new dimension to conversation, a dimension embracing symbolic representation, manipulation, and memory. Participants must also have near-equal access to the shared space-or else it really isn't shared, is it? Participants can communicate with one another directly and through the medium of shared space. Changing the conversation can lead to a change in the shared space and vice versa. Symbols, ideas, processes, sketches, music, numbers, and words can be put in the shared space to be expanded, organized, altered, merged, clarified, and otherwise manipulated to build these new meanings. It takes shared space to create shared understandings. Conversation is vital, but it isn't enough.

Shared space exists wherever there is effective collaboration. Whether we collaborate to discover something we don't know, to create something new, or to solve a problem that confounds individual solution, shared space is invariably an indispensable tool. You see it in the models that molecular biologists like Watson and Crick build and the annotated scribblings exchanged by Eliot and Pound; you hear its results in the songs created by a lyricist and composer (the piano and the scribbled notes create both acoustic and visual shared space for their work); you see its results in the works actors, directors, and set designers present on the stage; and you find it expressed in the prototypes of virtually every significant invention of this century. You can play with them; turn them upside down or spin them on their axis.

But these shared spaces aren't just intellectual exercises. They must provoke the senses as well as the mind. Collaborators can literally experience what they're doing while they think about it. Like the keyboard of a piano or a personal computer, the shared spaces are dynamic. And shared spaces can have very selective memories in order to retain the best points and features of a design. They are also highly malleable and manipulable; it's easy to tinker with, edit, or alter them. Whether the shared space embodies a musical riff or a quick sketch, adding tone or inflection is a simple process. These collaborative tools frequently work in real time; it doesn't take hours or days to use them. They're highly interactive; just tap them and they respond. Similarly, they readily accept new data and information; they are highly adaptive and adaptable. A new perspective, a new word, or a new chord can easily be mixed in. Collaborators can explore what-if scenarios without shattering the shared space. These collaborative environments are also relatively easy to make and discard; there's a low barrier to entry and exit. Yes, there's an emotional and intellectual investment in them, but not an irrevocable commitment.

Shared spaces can be divorced from time or distance or both. A blackboard can easily be worked on asynchronously, with collaborators leaving notes and annotations for one another at all hours of the day and night; it can also be worked on synchronously, with collaborators making a joint assault. Similarly, a fax machine and a telephone can annihilate distance for collaborators.
Successful shared spaces create the aura of copresence: they make collaborators feel like they're together, even if they're not. That model is always manipulable; the sheet music is editable; the blackboard accessible. Most important, perhaps, it's easy to play in the shared space. Formal protocols may exist, but they need not be rigidly enforced. Play allows for curiosity and serendipity, two historically essential ingredients for discovery and innovation. The shared space becomes a frame of reference, a medium, as much as a collaborative tool. Indeed, it becomes a collaborative environment.

Shared space heals the rift between spoken language and visual language. In our culture, we've divorced representation from human interaction. People treat speech and writing-or speech and image-as binary, either/or, competitive with each other. Speech is dynamic and interactive; words and pictures are static and designed to be observed. This is as silly and frustrating as watching but not listening to an orchestra play Mozart or hearing a Busby Berkeley musical on the radio. Visual and verbal languages should work in concert as part of a seamless continuum of expression we can share.

The problem is, we are still in the silent movie era of shared spaces and collaborative tools. The tools are too disjointed, our shared spaces too restrictive for us to reap the benefits of multimedia collaboration. Our tools force us to endure collaboration as a more discontinuous and fragmented experience than it could or should be. To wit, napkins are fairly limited as shared spaces go. Blackboards and whiteboards-the established workhorses of shared-space collaborations-are better, barely. As Xerox Corporation researcher Mark Stefik notes in Beyond the Chalkboard, "Space is limited and items disappear when that space is needed for something else, and rearranging items is inconvenient when they must be manually redrawn and erased. Handwriting on a chalkboard can be illegible. Chalkboards are also unreliable for information storage: . . . figures created in one meeting may be erased during the next. If an issue requires several meetings, some other means must be found to save information in the interim." (Of course, napkins are portable.)

To put a collaborative spin on the Sapir-Whorf hypothesis, just as language shapes the process of thought, these shared spaces shape the process of collaboration. Technology will not only remove the obstacles but it will amplify the power of shared spaces. New collaborative tools and techniques will transform both the perception and the reality of conversation, collaboration, innovation, and creativity. John Dykstra's sketching squad, the fax machine, and Jonathan Miller's photocopied postcards are just the first stutter steps to a next generation of collaborative media.
COMPUTER-AUGMENTED COLLABORATION

The next generation of collaborative tools is being explored by organizations ranging from Apple Computer to MIT to Digital Equipment Corp. to General Motors/Electronic Data Systems to Coopers & Lybrand.
These organizations intuitively appreciate the importance of both technology and interdisciplinary innovation. Perhaps the single most striking aspect of these efforts is that they were all born of necessity and not academic theory.

At GM/EDS, researchers in Ann Arbor concluded after an extensive survey that GM work groups needed a better way to meet. At Xerox's Palo Alto Research Center, a pair of researchers who frequently worked together on new ideas decided to build a set of computer-based tools to augment their collaborations. Stanford University researcher Fred Lakin decided that a computer would be a more versatile graphics display tool than a set of flip charts. Palo Alto's Bernard DeKoven discovered that computer-generated shared space was the best way to get people to participate playfully in meetings. Coopers & Lybrand's David Braunschvig learned that clients needed to work with visual representations of their strategies, problems, and opportunities.

Indeed, back in 1984, Royal Dutch Shell strategic planner Peter Schwartz whipped up a crude computer model for a Hewlett-Packard calculator and hooked it into a Kodak Ektachrome projector to get the company's managing directors to consider the possibility of an oil price crash. At first, the managers resisted-"We don't play with models in the board room." So Schwartz started playing with the model. Gradually, the managing directors began to participate. After an hour, they were so intrigued that they set up a meeting for the following Monday. "They couldn't leave," Schwartz recalls. "They were totally hooked." This jury-rigged collaborative tool encouraged Shell to better position itself for the coming downturn in the price of oil.

Collaborative technology is being driven by need-not by a pie-in-the-sky idealism about how people should work. People are building tools they actually use-not guinea-pig-ware for purely academic exercises.

Perhaps the most cogent work in the new era of collaborative technologies is being done by the Xerox Palo Alto Research Center. The research-which is, appropriately enough, highly collaborative-blends disciplines from anthropology to software engineering. The initial findings are rich in insight and future applications. They hinge upon a very pragmatic sense of the way work gets done in organizations and a contempt for the techno-macho syndrome where nerds armed with slide rules call the shots.

At Xerox PARC, the notion of personal computing gives way to interpersonal computing. The computer becomes the medium for shared space. "Collaborative computing will be much, much more pervasive than personal computing," claims Mark Stefik, an artificial-intelligence expert who oversees much of the lab's work on collaborative tools, "because while not everyone needs a personal computer, virtually everyone needs to collaborate."

This shift away from personal computing to interpersonal computing has tremendous implications for both computer technology and the way people interact with it.
"When we move from personal to interpersonal," assert Stefik and colleague John Seely Brown, "the requirement for personal intelligibility of the subject matter shifts to a requirement for mutual intelligibility; the meaning of conversational terms shifts from being internalized [inner speech] and fixed to externalized and negotiated; our view of language shifts from a kind of description to a kind of action. But these points are just a beginning. The new technologies enable conversations with new kinds of properties; we need new concepts to understand their nature." (Emphasis mine.)

There are several ways to inject the computer as a medium of collaboration. For the moment, consider a meeting room with a semicircular conference table where each participant has his own personal computer. At the front of the room is a large whiteboard-sized screen that can display high-resolution computer data. During the meeting, the participants can send computer data to one another and they can place it on the large screen for display. This is, crudely, the setup for Xerox's eponymous Colab to support meetings. Colab is a useful model to begin exploring the next generation of collaborative tools.

To promote shared viewing and access, Colab is built around a team interface concept known as WYSIWIS (pronounced "whiny whiz") for "what you see is what I see." The Colab software lets participants partition the large screen into multiple windows (the equivalent of miniwhiteboards) that can be enlarged, shrunk, thrown away, moved around, linked, clustered, or stored for later retrieval. Participants can also "telepoint" to windows and objects on the screen to identify subjects of interest or topics of concern. All the constraints of the traditional whiteboard-limited space, static representation of symbols-disappear. The large screen becomes a community computer screen where everyone can write, draw, scribble, sketch, type, or otherwise toss up symbols for community viewing. It's the shared space. People can produce on it or pollute it.

This collaborative environment does have a touch of Starship Enterprise/a-personal-computer-in-every-pot overtones to it, but there's more here than an electronic whiteboard. This technology completely changes the contexts of interaction. For one thing, a conventional conversation normally has rules of etiquette that govern turn taking. These rules evolved around the constraints of an oral meeting, where only one person can speak at a time lest the conversation degenerate into babble. But in the environs of Colab and shared space, there are visual channels that can either augment or conflict with the spoken word. Conversation isn't the only activity going on; it's not the only domain of interaction. In ordinary conversation, a speaker responds in some fashion to the previous speaker's comments; in this new environment, people may feel more compelled than at "ordinary" meetings to respond to something that appears on the screen. Traditional notions of conversational etiquette go out the window (pun intended) if one person writes a controversial message on the community screen while another talks about something else.

In an oral conversation, the words have a soap-bubble quality: they float around, evoke some comment, and then pop and disappear. In a computerized medium, ideas are both external and manipulatable. When one "speaks," one doesn't just utter words-one moves objects. People can create icons-clocks, calendars, machines, spreadsheets-to represent certain ideas and concepts. Others can modify or manipulate these icons until they become both community property and a visual part of the conversation.
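Conceptually, the WYSIWIS discipline reduces to this: there is one shared model of the screen, every participant's operation is applied to that model, and every change is broadcast to all displays, so no one's view can drift. The sketch below is a toy model of that discipline in Python; the class and method names are my own illustrative assumptions, not the actual Colab software.

    # Toy model of a WYSIWIS ("what you see is what I see") screen.
    # One shared model; every operation is applied centrally and then
    # broadcast to every participant's display. Illustrative only.
    class SharedScreen:
        def __init__(self):
            self.windows = {}     # window name -> list of items
            self.displays = []    # one render callback per participant

        def join(self, display):
            """Register a participant; a new arrival sees the current state."""
            self.displays.append(display)
            display(self.windows)

        def post(self, window, item):
            """Anyone can write; everyone sees the result."""
            self.windows.setdefault(window, []).append(item)
            self._broadcast()

        def move(self, item, src, dst):
            """Rearranging is one operation, not redraw-and-erase."""
            self.windows[src].remove(item)
            self.windows.setdefault(dst, []).append(item)
            self._broadcast()

        def _broadcast(self):
            for display in self.displays:
                display(self.windows)

    # Usage: two participants share one screen.
    screen = SharedScreen()
    screen.join(lambda w: print("first participant sees:", w))
    screen.join(lambda w: print("second participant sees:", w))
    screen.post("brainstorm", "clock icon")
    screen.move("clock icon", "brainstorm", "agenda")

The design point is that displays hold no private state: they only render whatever the shared model says, which is what makes "what you see is what I see" literally true.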
For example, a group can meet to design a chart for an important sales presentation. Both in conversation and on-screen, the design criteria can be specified-number of variables to be displayed, size of the chart, special symbols (if any), title, fonts, comments, and so on. People can toss up visual suggestions for group consideration and everyone can decide what looks best. An icon from one window can be moved and merged into a chart layout suggested by another. The collaborative group can tweak, stretch, and compare charts in a multitude of ways. They can produce a prototype of the final chart in a way that just couldn't happen in a room with a whiteboard.

A crude but effective meeting tool developed by Xerox PARC enables a group to create an outline of ideas. Cognoter is designed to help organize ideas for papers, presentations, talks, and reports. The end product is an annotated outline detailing the critical aspects of the project. Xerox PARC people structure a Cognoter Colab session into four parts: brainstorming ideas, organizing ideas, evaluating ideas, generating an outline.

The brainstorming mode represents the antithesis of the standard wait-your-turn meeting session. A participant picks an empty space on the shared screen and simply types in all the ideas he has. Everybody can see what's being typed and this can inspire conversations within the group. People can annotate items, expand them, ask questions, elaborate-and have everything recorded on-screen. (Usually, there's no criticism of what's listed-this is a classic brainstorming session where the purpose is to generate quantity, not quality.)

The next step is organizing all these ideas. The Cognoter software lets people sort through, group, link, categorize, and order all these ideas. Themes emerge. Similar expressions are clustered together. Redundancies are eliminated. Arrows are drawn between clusters to delineate relationships. The skeleton and muscles of an outline begin to emerge. The verbal conversations increase as people decide what ideas belong where in the shared space.

Then, people evaluate their ideas. They rank them in order of importance, annotate and expand the concepts that need to be explored in greater depth, and prune away the branches of the outline that will probably bear no fruit. The conversations here often degenerate into arguments and disputes as participants come to grips with their personal perspectives and the divergent priorities of other group members. (There may be tension here between participants wanting to revise existing alternatives and those wanting to derive new ideas.)

Finally, each participant gets a printout of the final outline and sees how this outpouring of ideas has been structured by the group.

Xerox PARC has another piece of Colab software called Argnoter, which is described as an "argumentation spreadsheet." In contrast to Cognoter, Argnoter software divides the meeting process into:
• proposals,
• arguments,
• evaluation.
The idea behind Argnoter is that most misunderstandings and disputes derive from three main sources: personal positions, unstated assumptions, and unstated criteria. Argnoter is designed to make these all explicit and represented in the shared space. “It’s tough to have a hidden agenda in an Argnoter meeting,” says John Seely Brown. Shared space is where the real intellectual dueling takes place.
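Taken together, the Cognoter phases amount to a small data structure evolving in shared space: items accumulate during brainstorming, links and clusters are added during organizing, scores during evaluating, and the outline falls out at the end. The following is a deliberately simplified sketch of that life cycle in Python; the names and representation are invented for illustration and are not Xerox PARC's actual software.

    # Simplified sketch of the Cognoter life cycle:
    # brainstorm -> organize -> evaluate -> outline.
    from collections import defaultdict

    items = []                  # brainstorming: anyone adds, no criticism
    links = defaultdict(list)   # organizing: arrows between ideas
    ranks = {}                  # evaluating: importance scores

    def brainstorm(idea):
        items.append(idea)

    def link(parent, child):
        links[parent].append(child)

    def evaluate(idea, score):
        ranks[idea] = score

    def outline():
        """Emit top-level ideas by rank, with their linked sub-ideas."""
        children = {c for cs in links.values() for c in cs}
        top = [i for i in items if i not in children]
        for idea in sorted(top, key=lambda i: -ranks.get(i, 0)):
            print(idea)
            for sub in links[idea]:
                print("   -", sub)

    brainstorm("launch at trade show")
    brainstorm("booth layout")
    brainstorm("demo schedule")
    link("launch at trade show", "booth layout")
    link("launch at trade show", "demo schedule")
    evaluate("launch at trade show", 3)
    outline()

An Argnoter pass would hang proposals, supporting arguments, and evaluation criteria off the same kind of structure, which is what makes hidden agendas hard to sustain: the assumptions sit in the shared space where everyone can poke at them.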
[Screen image: "Beyond the Chalkboard-Brainstorming," the phase for uncritical idea generation. On-screen labels include Chalkboards, Multiuser Interface, Goals, Forms for Meetings, Time/Space Extensions, Cognoter, Parallelism, WYSIWIS, Argnoter, Conversations, Electern, Liveboard, Broadcast Methods, and Associations. A sidebar defines the Liveboard: a replacement for a chalkboard, a large touch-sensitive computer display you can write on with your finger or whatever, like a chalkboard but active and with digital memory.]
FIGURE 9-1 Brainstorming: In the brainstorming phase, Colab participants suggest ideas that are promptly displayed on the large screen. Text explaining the ideas in greater detail is entered into special “windows” that can be called up and displayed if desired.
[Screen image: "Beyond the Chalkboard-Ordering," the phase for considering idea dependencies and groupings, with the same on-screen labels now connected by links.]
FIGURE 9-2 Ordering: The ideas are then put in order on the screen by creating a visual link. This is a very dynamic outlining procedure and, collectively, these links display the hierarchy and networks of all the brainstormed ideas. Ideas usually have one or more links to other items.
Ostensibly, there may seem to be nothing revolutionary here, but exploring ideas and arguments in the context of shared space can completely transform conversation. The software injects a discipline that encourages people to create, visually and orally, a shared understanding with their colleagues. The technology motivates people to collaborate.

"The coordination of intellectual work around manipulable icons draws on familiar skills for the coordination of physical teamwork," Stefik and Brown observe. "When one participant wants another to work on an item, it is possible to pick up the item and drop it in the work space of the second participant. This is very much like the physical act of picking up a physical object (e.g., a football or a hammer) and handing it off to someone else." When ideas become objects that can be manipulated, meetings become more concrete.

The manipulative power of the medium makes it easy to categorize objects. Groups doing personnel rankings or project evaluations can shuffle through and sort the appropriate icons into whatever categories they agree upon-a major advantage of the WYSIWIS team interface. Everybody can see what's going where all at the same time.
[Screen image: the ordering screen with items clustered into two grouped windows, [Tool Dimensions] (Forms for Meetings, Multiuser Interface, WYSIWIS, Time/Space Extensions, Parallelism, Chalkboards) and [Language Features] (Liveboard, Electern, Conversations, Broadcast Methods, Associations).]
FIGURE 9-3 Grouping: The linked-up items can also be clustered into groups of ideas that will be worked on together. Each group has its own on-screen window. Through ordering and grouping, the large screen contains a number of windows that represent how participants visually categorize the topics they want to discuss.
Another profound change for collaborative conversations is the possibility of equal access to the data. Anyone and everyone can "play chairman" and easily send data to the screen. In ordinary meetings, as Stefik and Brown note, one has to get out of the chair, walk to the board, pick up the piece of chalk, etc. Colab makes it logistically easier to participate.

More significant, perhaps, is the notion of parallel voices in the conversation. Clearly, in an ordinary conversation, everybody can't talk at once. But in this environment, oral contributions aren't the only way to participate. People can silently enter ideas for community consumption.
Instead of having to wait your turn and risk having the glimmer of an idea evaporate into the ether, you can enter it in a special window on-screen without orally interrupting what's going on. Of course, unlike most meetings, even those with transcriptions and minutes, there is a complete record of what data, icons, and processes flowed across the shared space of the community screen.

A few interesting initial observations emerge from this technological stew. For one, oral conversation tends to ebb and flow with the activity on the screen. Sometimes, visual language drives the conversation; other times, oral conversations drive the screen. Colab meetings have a different rhythm than ordinary meetings. As might be expected, there are tensions between the needs of the group to govern the shared space and the needs of the individual to express himself, creating occasional contention for screen space. Indeed, one Colab participant noted ruefully that "we felt we were spending an inappropriate amount of time managing display space rather than attending to the business of the meetings." Management of screen real estate is a key issue.

Indeed, space itself becomes a measure of meaning. Big windows assume greater importance than smaller windows. Something in the center of the screen dominates a window tucked into the lower left-hand corner. Stefik recalls some meetings degenerating into what he describes as "scroll wars" and "window wars" where various individuals and factions struggled to capture screen space like generals trying to crush data insurgents in an information war.

The medium also engenders its own idiom and humor. When people dwell for too long on a point, some prankster might thrust the clock icon right into the middle of the screen, enlarging it to humongous size, and letting it tick away. People get the message. When confronted with a meeting in an office bereft of Colab-like support, a regular user may ask plaintively, "How do I point?" Similarly, when faced with a diminishing space on the office whiteboard while scribbling away, a Colab user may ask, "How do I shrink this?" or "How do I move this over there?" Like the frustration someone used to an electronic spreadsheet feels when confined to a paper ledger, a "frustration with the ordinary whiteboard develops when one has experienced superior media," says Stefik.

No one yet claims that collaborative tools will boost interpersonal productivity ten- or a hundred-fold-although Stefik likes to say that most Colab meetings "see new ideas exploding across the screen like popcorn." The point here is that by using this technology, a meeting turns into both a tool and a collaborative environment. Colab is not the future; it is a rough-hewn prototype lashed together by some good ideas and intriguing software. Colab, says Stefik, "is shaped too much by today's technology-it's an accident of current technology," but it does offer a provocative peek into the potential of collaborative tools.

"We now make Colab jokes at non-Colab meetings," says Stefik. "People bitch 'Why aren't we doing this in Colab?'-the people who don't get workstations (as they do at Colab meetings) are very, very frustrated people.
Normally nice people become scowling and unhappy. Some of us have become Colab addicts."

Addiction is too strong a word, perhaps, to describe the way users of General Motors/Electronic Data Systems' Capture Lab feel about their computer-augmented environment-but lust is not. Some GM groups insist they couldn't get their work done outside the lab; they claim that it cuts the necessary time by a factor of ten. That claim may be exaggerated, but it underscores the notion that the computer-augmented collaborative environment can give a group a sense of empowerment. Almost without exception, reports Capture Lab coordinator Joyce Massey, every GM group that has used the lab has wanted to continue using it. One GM group insists that a job that normally took four months to accomplish was achieved in two all-day Capture Lab sessions.

The Ann Arbor-based Capture Lab is part of the Center for Machine Intelligence, a research organization run by GM's Electronic Data Systems subsidiary. The lab itself is designed to function as a conference room with a distinctly computational flavor. Several key General Motors working groups and task forces use the Capture Lab for their meetings. In contrast to Xerox's Colab, the Capture Lab room consists of a large oval conference table with eight Apple Macintosh IIs embedded in it (so that they don't interfere with participant eye contact). There is a large screen at one end of the room; the proprietary software enables any participant to "take control" of this main screen from their personal computer. The room is spacious, if a bit drab, and the computers are designed to blend into the decor rather than call attention to themselves.

As with Colab, the Capture Lab technology and environment-the network of shared space-transforms interpersonal interaction. The most successful meetings tend to be those where people collaborate to create a document; the meeting is used to do work rather than just talk about doing work. Control of the screen is passed about like a baton in a relay race-the participants each take turns as the meeting's scribe. This rotating-scribe approach and the nature of the technology assure ready access to the screen. Lots of conversation complements the information being entered on-screen.

To be sure, some people accustomed to more traditional meeting formats have difficulty adjusting to the notion of a personal computer and a central display as collaborative tools. For example, which person sits where at the table is significant. The power seat at the table gives one a view of all the participants and the main screen. Invariably, this is where the senior people end up. A sociologist observing various meetings at the Capture Lab recounts that some high-ranking participants tried to turn their subordinates into their own personal scribes, with varying degrees of success. Eventually, many of these "dictators" decided to grab a keyboard.

The Capture Lab, like Colab, generates its own style of interaction, collaboration, conversation, and presentation. Being in a Capture Lab meeting is not unlike being in a production room editing a movie or being in the control room in the van directing the coverage of a football game.
multiple players, and a sense that everything is moving just a little faster than normal. There's a kinetic quality to the discussion, and people feel that their words are tangible things-that they can reach out to the screen and move them around, edit them, blow them up, or file them away. There's also a tactile and kinesthetic quality. People point. People use speech and physical gestures to draw attention to and away from the screen and themselves. People raise their voices to talk to the screen and lower their voices as they talk to one another. Something is always going on. There's either the clicking of keys, text or images moving on-screen, someone talking, or someone shifting in his chair.

The experience isn't breathtaking or overwhelming, but it is disorienting to people who are more comfortable at ordinary meetings. The frames of reference have all shifted. You don't look at people the same way, you don't talk with people quite the same way, and you certainly don't interact with information the same way. Participants tend to be more sensitive to what's going on around them. They're simultaneously more open and more self-conscious. They're open in the sense that they know that everything they say can be enhanced and modified by themselves and by the group. They're self-conscious because they know that what they say can become part of the record-so they want to measure their words with care.

At first glance, all these different responses to computer-augmented shared spaces might simply reflect the uncertainty that comes from novelty. But a deeper look confirms that these rooms evoke different behaviors because they are designed to evoke different behaviors. Conversations have a different quality and urgency in these collaborative environments because the technology makes it easy to share information in profoundly different ways. There are both qualitative and quantitative differences between these technology-rich collaborative environments and the ordinary meeting room.

If the laws of supply and demand are any indication, the new environments are a great success. The Capture Lab is booked solid, and additional labs are being built. More immediately, some groups are building their own ad hoc Capture Labs with Macintoshes and overhead projectors.

Armed with twenty-four IBM Personal Computer workstations, two rear-screen projectors, and an electronic copyboard, the University of Arizona's College of Business and Public Administration offers its own flavor of meetings environment-a flavor that has been eagerly gobbled up for internal use since 1987 by IBM, the world's largest computer company. Although the system is text based with nary a hint of graphical capabilities like its Colab and Capture Lab cousins, it has gotten rave reviews both from IBM's vast internal market and visitors to the Arizona site. Moreover, IBM-which quantifies everything it possibly can-has done studies asserting that these "Decision Support Centers" generate over 50 percent in person-hour savings in meeting time and a 92 percent reduction in time required to complete a project.

"IBM runs on task forces," says Ann R. Hunt, a twenty-five-year veteran who oversees the Decision Support Center initiative. "Typically, one of those task
teams can go on for three weeks. . . . What we found is those kind of sessions which often dragged on and on and on would conclude much more quickly in these rooms. . . . The tools lead you to have more structure. Instead of taking three weeks, the task teams could get their work done in a day and a half. You can get in forty-five minutes of brainstorming what once took half a day-and spend the rest of the time prioritizing what's important."

As in the other systems, participants can simultaneously type and transmit all their ideas onto a central, shared screen. This system supports only text, not graphics. The on-screen ideas are then clustered and analyzed according to the relevant criteria and then prioritized. The thrust of these TeamFocus software tools isn't radically different from a Colab-there's Brainstorming, Issue Analysis, Prioritizing, Policy Formation, and Stakeholder Identification. But there is definitely a different cast to the technology. For one, anonymity is encouraged. IBM's Hunt says that this helps keep people focused on the issues instead of individuals. "Conversations are much more content-oriented as opposed to challenging or getting into personalities," she observes. "It works extremely well with highly opinionated people who would be prone to dominate a meeting."

Another crucial difference is that these centers all rely on facilitators to coordinate and conduct the meeting. Indeed, it's not clear whether the technology is used to augment the facilitator or the facilitator is used to enhance the technology. However, Hunt immediately stresses that the most valuable aspect of the Decision Support environment is the hard copy-the transcript-of the meeting. "The physical output is vital," she says.

IBM will expand these centers worldwide and use them as both a marketing and planning tool with customers. Clearly, the measurable time-compression and productivity benefits of the centers are key to their relatively quick acceptance within the organization. But it's equally clear that IBM's "task force" management culture is slowly being altered by a technology that, while not exactly collaborative, creates a new dimension of shared space for groups that had previously squandered both time and energy. People are insisting that IBM provide technologies that manage relationships as well as information.
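The brainstorm-then-prioritize flow Hunt describes maps onto a very small amount of code. The sketch below is purely illustrative-the class and method names are invented for this example and are not part of TeamFocus or any IBM product:

    from collections import defaultdict

    class MeetingSession:
        """Toy model of an anonymous brainstorm-then-prioritize session."""

        def __init__(self):
            self.ideas = []                  # submissions stored without authors
            self.votes = defaultdict(int)    # idea index -> accumulated weight

        def submit(self, idea):
            # Anonymity by construction: no author is ever recorded, which
            # keeps attention on content rather than on personalities.
            self.ideas.append(idea)

        def vote(self, idea_index, weight=1):
            # Participants spread a budget of weighted votes across ideas.
            self.votes[idea_index] += weight

        def prioritized(self):
            # Rank voted-on ideas by total weight, highest first.
            ranked = sorted(self.votes, key=self.votes.get, reverse=True)
            return [self.ideas[i] for i in ranked]

        def transcript(self):
            # The hard-copy record Hunt calls "vital."
            return "\n".join(f"{i + 1}. {idea}" for i, idea in enumerate(self.ideas))

Forty-five minutes of simultaneous, anonymous submit calls followed by a round of weighted vote calls is, in miniature, the session structure Hunt credits with compressing three-week task-force efforts into a day and a half.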
A wackier approach to collaborative environments can be found in the POD, an octagonal meeting room an English computer company uses for brainstorming and other kinds of interactional creativity. Each side of the room has its own medium-butcher paper, video projector, computer projector, whiteboard-to be used depending upon which stage the meeting is in. When the group is looking for multiple ideas, multiple walls are used. When the group seeks harmonic convergence, everyone focuses on a single wall. A round table (this is England, after all) in the middle has a hole in its center for various equipment and wires; it gives the optical illusion of drawing everybody into the center: sort of an organization's collaborative black hole. Clearly, the focus here is collaborative environment over collaborative technology-and the point is clearly made. One is literally surrounded by spaces aching to be shared.

A common design guideline of all these collaborative environments is that real people are in the rooms at the same time-physical presence is important. Obviously, not all collaborations require physical presence. Phone conversations, fax messages, and videoconferences all permit productive collaboration at a distance. Indeed, some collaborations work best that way. But there is still something important about being in the same place at the same time with someone. A study of collaborative interaction in engineering design by John Tang from a joint Stanford University/Xerox PARC project "revealed that gestural actions are a prominent and productive aspect of the group's activity." He argues that collaborative work spaces should convey gestures and "enable the fluent intermixing of listing, drawing, and gesturing." Collaborative tools should empower human expression, not handicap it. That's far easier to accomplish when the collaborators are physically together.

Nevertheless, cross-country collaborative tools are also being explored. "In rooms filled with computers, cameras, microphones, and xylophones," reports The Wall Street Journal, "Xerox Corporation scientists in cities 500 miles apart have been collaborating as if they were in the same building." The company has scientists in constant real-time communication between Palo Alto and Portland, Oregon. The workers watch and talk to one another via video and speakerphone connections that are always open. They share documents over a network that links their desktop computers. "The all-day video and audio connections make a big difference, letting the offices interrelate casually as well as formally. The main links connect the common areas at the center of both labs, each of which has a camera and a big-screen monitor. . . . A few workers have cameras and monitors in their own offices. . . . People in both cities regularly eat lunch on camera, chatting with each other on the screens."

The xylophones? That's how one captures attention. Xerox put a xylophone in each office and gave each researcher a personal melody. Tap the right keys to summon the desired researcher. Of course, the pervasiveness of a video and audio presence means that new rules of etiquette have to evolve. The point is not that this technical kluge represents a breakthrough-it probably doesn't-but that organizations are trying to come to grips with the challenge of making collaborative efforts productive even over great distances. Simply creating communications links isn't enough. One has to craft the communications technology in a way that creates shared spaces for collaboration, not just pipelines to exchange data.

MIT's Media Lab has pioneered and packaged an array of technologies designed to digitally transmit presence. The idea is to use technology as a medium to create a sense of co-presence between individuals and groups. Similarly, Bell Communications Research-the laboratory of the Regional Bell Operating Companies-is exploring a "virtual hallway" called CRUISER. CRUISER blends video with computer software and hardware so that, instead of physically moving, participants can "browse" the hallways by video and chat to see what's up.

Indeed, this effort to add value to traditional communications devices also extends to the fax machine, a collaborative tool that works superbly with paper. But suppose you could "fax" three-dimensional models? America's Defense Advanced Research Projects Agency has funded work in "selective laser sintering."
This technique starts with a picture of an object on a computer screen. The computer then slices the image into horizontal layers. Just as a two-dimensional image is computer-constructed from pixels, for "picture elements," the solid is broken into voxels, for "volume elements." Once the three-dimensional image is completely stored and transmitted over the network, a laser begins redrawing the item-sliver by sliver-on layers of powder that fuse and solidify wherever the laser strikes, one sliver per layer.

A similar idea-stereolithography-allows the virtual transmission of three-dimensional replicas by using liquids that harden when hit by ultraviolet laser light. The laser etches a pattern on the top of a bath of plastic, then a platform lowers the hardened pattern, exposing another liquid layer for the next slice to be hardened. Both these methods are slow, but electronics companies, aerospace firms, and car companies are all very interested.

The idea of being able to telecommunicate three-dimensional shared spaces literally adds a new dimension of possibility. These are the tools that not only make distance irrelevant but invite collaborations where they had previously been impossible. Engineering, architecture, medicine-any field where three-dimensional models offer more insights than two-dimensional representations-will find their collaborative infrastructures reshaped by these technologies.

There is a tacit, if unarticulated, design ethic lurking beneath all these emerging real-world examples. The real purpose of design here is not to build collaborative tools but to build collaboration. These rooms, these tools are media and environments that both encourage and enable collaboration. We don't yet have a design tradition for collaborative tools in the way we have design traditions for buildings and furniture or for the graphic arts, but, as our understanding of collaboration deepens, that aesthetic will evolve. The POD, Capture Lab, Decision Support Center, and Colab represent the first glimmers of a design tradition in collaborative environments.
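Returning to the slicing step described above: the sketch below shows, in Python with NumPy, how a solid stored as a grid of voxels would be peeled into the horizontal layers that drive the laser, one sliver per pass. The representation and function name are assumptions made for illustration, not details of the DARPA-funded systems:

    import numpy as np

    def slice_into_layers(voxels):
        """Split a 3-D boolean voxel grid (x, y, z) into horizontal layers.

        Each 2-D mask marks where the laser would fuse powder on that
        layer -- the "slivers" that are redrawn one at a time.
        """
        return [voxels[:, :, z] for z in range(voxels.shape[2])]

    # A toy solid: a 10 x 10 x 10 block with a hollow core.
    solid = np.ones((10, 10, 10), dtype=bool)
    solid[3:7, 3:7, 3:7] = False

    for z, layer in enumerate(slice_into_layers(solid)):
        print(f"layer {z}: fuse {int(layer.sum())} voxels")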
10
Knowledge Synthesis and Computer-Based Communication Systems: Changing Behaviors and Concepts
Kathleen Vian and Robert Johansen
From "Knowledge Synthesis and Computer-Based Communication Systems: Changing Behaviors and Concepts," Vian, Kathleen and Robert Johansen, in Knowledge Structure and Use: Implications for Synthesis and Interpretation, Spencer Ward and Linda Reed, eds., Temple University Press. © 1983 by Temple University. Reprinted by permission of Temple University Press.

One of the difficulties in assessing the impacts of any technology on a social or cultural process is that the process being assessed almost always turns out to be a moving target. It is not as if you could name all the pieces of furniture in a room, throw a magic technological switch, and then look for new additions to the collection. Rather, the technology seems to slither under the door and wrap itself around the various furnishings in the room, distorting them so that they look almost like they did fifteen minutes ago, but not quite. And, of course, the names that you gave them don't quite fit any more, though you don't have any new names for them either. Furthermore, while you're trying to figure out what they should be called now, the very walls of the room begin to open up so that you can no longer tell what should be counted as part of the room and what should not.

This is the kind of problem we face when we try to describe the use of computer-based communication technologies in knowledge synthesis. We are tempted to start with a definition of knowledge synthesis and look for uses of computer-based systems that seem to fit within that definition. The problem is that the current concepts of both knowledge and synthesis are grounded primarily in the technological world of print media. There is no reason to assume that these concepts will remain the same or even particularly relevant in the technological world of computer-based communication. To look for examples of computer-based knowledge synthesis is thus something like looking for the kerosene in an electric
light bulb-it may be possible to draw an analogy between the kerosene and the electric current, but the analogy doesn't help much in understanding the potential of the electric lighting . . . or its impact on commercial signs, for example.

Still, computer-based communication systems are being designed and used for what is currently viewed as knowledge synthesis, and those who are designing and using such systems need to know which designs and which uses are likely to be effective. They also need to know how demands on them may change as the concept of knowledge synthesis changes in response to the new technology. In this paper, we will try to address these questions, while maintaining an awareness that we are describing a moving target. We will start with the current patterns of use of computer-based systems-the way that we have seen groups use the technology for efforts that have at least some qualities of knowledge synthesis. We will suggest how these patterns differ from some of the more traditional approaches to knowledge synthesis. In our implications section, we will give examples of how computer conferencing has been used in formal knowledge synthesis activities and will suggest, based on our description of new patterns of interaction, what the concepts of "knowledge" and "synthesis" might look like in the future.
THE TECHNOLOGY

Interpersonal communication through computers means that two or more people are communicating with each other and that the communication just happens to be occurring with a computer as an intermediary. It is important to remember that the basic notion here is people communicating with other people, not a person communicating with a machine (man-machine communication) or a computer communicating with another computer (computer communication). Of course, both man-machine and computer communication are often involved in an episode of interpersonal communication through computers, but these are only means toward the goal of interpersonal communication.

There are two basic types of computer-based communication: electronic mail and computer conferencing. Electronic mail is person-to-person communication, similar to that provided by the telephone or the conventional postal system. The similarity with present media, however, is more misleading than helpful. Computer mail is a medium for sending messages from one person to one or more other people. The computer provides a means for instantly delivering the message (basically doing better what conventional mail already does), but it also provides remarkable facilities for editing, organizing, storing, and otherwise manipulating the message. It is these latter capabilities that move far beyond the capacities of conventional mail and belie the title "computer mail."

Electronic mail involves (1) an individual sending a message, (2) a computer terminal which encodes the message, (3) a transmission system (often a telephone line), (4) a computer, and (5) one or more recipients of the message-again using a computer terminal of some type. Typically, an individual types a message on a typewriter computer terminal. This message is sent to a computer that stores the
message until it is claimed by the intended recipients when they check into the computer system using their own terminal.

The type of communication that occurs through computer mail is generally ongoing; it differs in this way from a face-to-face meeting (which occurs within the limits of a specific time and place) or an audio or video teleconference (which occurs within time limits but not space limits). For instance, computer mail may be used within a corporate structure to provide a standard administrative communication system. It might also be used to coordinate activities of individuals who travel frequently, as during the 1976 election when the Carter campaign staff used Scientific Time Sharing Corporation's "Mailbox" system. The structure for this person-to-person communication depends on the particular computer mail system being used. All computer mail systems, however, trigger some basic alterations in the way interpersonal communication has occurred in the past.

Computer conferencing can be viewed as a modest extension of computer mail, in much the same way as a conference telephone call extends a two-person call. However, research on the social psychology of small groups has indicated that there are sharp differences between dyadic communication and communication which involves more than two persons.1 Similarly, group communication through computers is-in several ways-different from two-person communication. The current characteristics of computer conferencing, as distinct from computer mail, can be summarized as follows:
- Computer conferences generally have a group and task orientation, often for a specific time period.
- A group record is kept automatically and can be reviewed by participants as needed.
- Current computer conferencing systems are often easier to use for nontechnical users; computer jargon is less imposing.
- Computer conferences can easily shift into synchronous group meetings, where several participants are present simultaneously. In such situations, everyone can be typing simultaneously, with a computer keeping a transcript of the proceedings and sorting the incoming messages.
- Computer conferences require group facilitation skills and leadership; organization of the meeting is critical.
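The store-and-forward mechanics described above-messages held on a central computer until recipients check in, plus named group activities with shared transcripts-can be sketched as follows. This is a minimal illustration with invented names, not the design of any system mentioned in this chapter:

    from collections import defaultdict

    class MessageSystem:
        """Minimal store-and-forward model of mail plus conferencing."""

        def __init__(self):
            self.mailboxes = defaultdict(list)    # private person-to-person mail
            self.conferences = defaultdict(list)  # permanent group transcripts

        def send_mail(self, sender, recipients, text):
            # Nothing is pushed; each message waits until the recipient checks in.
            for r in recipients:
                self.mailboxes[r].append((sender, text))

        def post(self, conference, author, text):
            # A conference entry joins a group record all members can review.
            self.conferences[conference].append((author, text))

        def check_in(self, user, conference=None, entries_seen=0):
            # Claim waiting mail, then any conference entries not yet read.
            mail = self.mailboxes.pop(user, [])
            new = self.conferences[conference][entries_seen:] if conference else []
            return mail, new

A check_in call for user "smith" against the "Evaluation Demonstration" activity would yield the kind of session shown in the sample transcript later in this section.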
The procedure for communicating via computer conference is something like this: Once you have a computer terminal and a telephone, you dial the phone number of the nearest access point to the network or computer your message system uses. You hear a high-pitched tone, indicating that the computer is functioning, and you place the telephone in the coupler on the terminal. The computer asks you to jump through a few verbal hurdles regarding accounts and passwords. Once you have satisfied the computer that you are a participant, you are offered any new messages that have arrived since you last checked. In a computer conferencing system, you may have a variety of topic-oriented conferences to check, and
you may want to enter new information. The transcript might look like the following.

    Welcome. Please type your last name (and then strike the CR key).
    -SMITH
    Please type your password.
    -BACON
    Thank you. You may attend any one of the following activities:
       1. Northern Network
       2. I-Team Pilot
       3. Evaluation Demonstration
    Please type the number of the activity you wish to join.
    #3
    27-Aug-80
    The title of the activity is: Evaluation Demonstration
    (31) Smith
    -Kathryn, your comments on the last version of the
    -questionnaire helped a lot. I'd like input from
    -the whole group as to how appropriate the level of
    -detail in section 6 is.
    -CTLC (signal to end the communication)
    Your current participation is ended. Thank you.
    Terminal time 0:03:53
This sample transcript from a computer conferencing system shows the user's responses (in bold) to one computer conferencing system's questions during the log-in procedure. The user selects an activity, checks in to find that no new messages have been left, and leaves a message directed to one participant but intended for the whole group to see. Note that this is a self-activated medium-people use it when they want to use it-assuming the computer is available. This means that participants must have some agreed-upon schedule for checking the system; otherwise, important messages may go unseen, and the sender will become increasingly frustrated with the negligent intended receiver.

Computer conferencing was used as early as 1970, and discussions of the concept took place even earlier. The Office of Emergency Preparedness developed a system for computer conferencing, combined with data-base resources, for monitoring and responding to national crises.2 In the early and mid-1970s, a series of field tests was organized by the Institute for the Future, with primary sup-
port from the National Science Foundation. NASA, USGS, ERDA (now DOE), the Charles F. Kettering Foundation, and other organizations were involved. These field tests generally involved groups of scientists, most of whom were geographically separated, who were engaged in joint tasks.3 At least six styles of computer conferencing have been identified in these field tests:

The Exchange is typically carried out over a period of months. The participating groups are usually quite large, ranging in size from 20 to 40.

The Community implies a qualitative change from the Exchange toward more cohesiveness as a group and a higher degree of social (as opposed to task-oriented) interaction. The individuals become committed to the other participants, as well as to the substantive purposes of the group.

The Seminar is focused on a specifically defined topic. The most common example is the research seminar or open conference which involves asynchronous usage (when participants are not all present simultaneously), usually over a period of one week to one month.

The Assembly is an extension of the Seminar, with more participants (perhaps a hundred or more), multiple topics, and a number of separate parts to the proceedings. Rarely observed to date, this style would be a computer conferencing analog to professional society conventions.

The Encounter is somewhat similar in style to a face-to-face meeting, where participants are all present simultaneously discussing a topic for a short time (usually a few hours or less). The intensity of the Encounter is often quite high, since computer conferencing allows all participants to speak at once with the computer program sorting out the order.

The Questionnaire involves an unlimited number of participants in a structured question-and-response format. Such questionnaire formats are typically used as part of other computer conferencing styles, rather than by themselves.

These six styles, however, only document the ways in which computer conferencing has been used in early tests through 1977. With both computer mail and computer conferencing, however, there is little reason to assume that the first uses of the new media will be good predictors of future uses. The designation "conferencing" is already inadequate to describe current applications, and it is likely to become even more so. The major extensions of this medium are likely to come from the computer's capacity for storing, organizing, and manipulating information. For instance, mathematical models might be used as "participants" in computer conferences to provide quantitative inputs to group discussions of particular substantive areas.4 Another possibility is to use computer programs to aid in group decision-making, perhaps through consensus building or weighted voting techniques.5

Computer-based communication systems are proliferating. Furthermore, the distinctions between computer-based media and other electronic communications media are becoming increasingly blurred. Thus, before we start to examine new
behaviors and concepts that may emerge with computer-based media, we need to clarify which media we are considering. One way to clarify the class or classes of media that we want to consider is to explore some basic questions about the technology. Table 10-1 summarizes differences among major classes of systems on each of 5 questions that are discussed next.
TABLE 10-1  Basic Options for Communication through Computers

Generic Class of Systems | Examples of Specific Systems | Group Size | Form of Communication | Record Keeping | Access to Other Resources | Complexity
------------------------ | ---------------------------- | ---------- | --------------------- | -------------- | ------------------------- | ----------
Journal Systems | Augment, Spires* | Communication with oneself | Text | Personal file | Typically text editor, data bases | Assumes some computer knowledge
Electronic Mail | Ontyme, Hermes, Mailbox | Communication with one other | Text | Personal file | Not typically | Assumes some computer knowledge
Computer Conferencing | Planet, Confer, EIES†, HUB | Communication within a group | Text | Transcripts of activities | May provide access to programs, data bases, text editor | Does not assume computer knowledge but requires new learning
Videotex | Prestel, Telidon, Viewtron, The Source‡ | Communication with a mass audience | Primarily text | Storage and retrieval structure | Not typically | Does not assume computer knowledge, minimal learning
Graphics Systems | Intergraph, HUB | Communication within a group | Text and graphics (sometimes audio) | Graphic images | Varies | Varies

*Actually, very few systems are designed to serve only as journal systems. Many data-base systems or text editors could also be used as journals. Also, both Augment and Spires have many other functions beyond journals. Augment, in particular, can also be used for computer mail or some forms of computer conferencing.
†EIES means Electronic Information Exchange System. EIES also functions as a journal system or for electronic mail.
‡Videotex systems could conceivably provide all the other functions on this chart as well, though most of the early systems have not yet developed extensive communications capabilities.
What is the group size supported by the communication technology?

The variety of available computer-based systems support a range of group sizes. Electronic mail systems typically support one-to-one communication: although the same message may be sent to a number of people, there is no "sense" of a group because no group is defined in the system. Computer conferencing systems, on the other hand, are explicitly group communication systems. The system defines activities in which there are members; all public messages in such activities are typically available to all members of the group. Videotex systems go still further; they begin to overlap with mass media since they are designed for a mass audience. They also use some of the same equipment as the most familiar mass medium, namely, the television screen. However, they retain the interactive quality of conferencing and mail systems. Of course, in the opposite extreme from videotex, there are computer information storage and retrieval systems that involve communication with a computer; and there are journal systems that support communication with oneself.

While all of these systems could have an impact on the practices of learning and experimentation and application of new knowledge, this paper focuses on media that stress interactive communication and that support one-to-one or small-group communication; we are thus most interested in electronic mail and computer conferencing systems.
What form of communication does the technology support?

Most computer-based communication systems are primarily text-based systems. Like print technologies, they rely on letters and numbers to get the message across. But here, too, definitions are not so simple. Computer-based graphic communication, for instance, is just beginning to emerge. Until now, most communication in computer graphics has been between one person and the computer. New systems, however, will allow remotely located people to communicate with each other graphically-much the way they might if they were both in the same room, using a blackboard to exchange ideas.6 The computer also extends the graphic communication capabilities of individuals by processing data and representing them in more complex graphic forms than a person working on a blackboard might be able to develop. Both multi-user and single-user systems that encourage the representation of knowledge in graphic form seem likely to have a particularly important role in future efforts at knowledge synthesis.
What is the record-keeping process for the technology?

The form and permanence of records of communication are major dimensions characterizing any medium. The record-keeping procedures of computer-based communication systems vary from none to extensive. Electronic mail systems, for example, often do not have any record keeping built into them; computer conferencing systems typically maintain a permanent record of the "transcript" of each activity. Systems that resemble data-base systems-such as videotex-emphasize the storage of information as their primary service; communication is focused on the stored information.

It is also worth noting the variations in the form of the stored records. The records in computer conferencing, for example, are chronological, although they can be reviewed in other ways, such as by author or keyword. In fact, a characteristic of computer-based systems is that their categorization systems can be tailored, to some extent, to the individual user. In general, the user of computer-based systems has considerable personal flexibility in defining his or her records of communication; personal electronic files, which interface with almost any computer-based system, invite customization of records. (Even electronic mail messages can be stored by individuals in this way.)
What resources are accessible through the technology?

In addition to providing access to distant human resources, computer-based communication systems may also provide access to other computer resources, such as computer programs for simulation models or statistical analysis packages, text editors, and data bases. Different systems place a different emphasis on this type of resource: electronic mail systems, for example, rarely make specific provisions for the use of other computer resources, although the environment in which electronic mail systems are used typically supports use of these resources, particularly if the user has some computer skills. To date, videotex systems have been designed to provide access to computer resources, namely stored data, with human communications as a secondary resource. Current developmental work in computer conferencing is seeking to provide more flexible access to computer resources such as programs and text editors in the context of group discussions.7 Finally, some cutting-edge graphic systems support a combination of computer-generated graphics and human communication.

How complex is the use of the technology?
The various system designs assume different degrees of user familiarity with computers. Electronic mail systems were originally designed for people who worked with computers regularly, and these people are probably still their primary users. The same is true of journal systems. Designers of computer conferencing systems, however, have usually seen their user market as less technical and less willing to put up with the less-than-human idiosyncrasies of languages; nevertheless, they expect users to be able to use an ordinary computer terminal to gain access to a network, and to learn at least a few basic commands. Videotex systems are expected to be the easiest to use: the terminal is simplified, and the choices open to the user are limited.

In this paper, we will be talking primarily about computer-based systems that (1) support the exchange of information among groups of people; (2) support text and sometimes graphical communication; (3) maintain and structure records of information that can be retrieved both by those who create them and by others;
(4) provide access to other computer resources, preferably in the context of group communication; and (5) are usable by people who have no special training in the use of computers.

This kind of technology has been used in a broad range of applications that demonstrate new ways of communicating, of handling information, and of producing information products. Most of the applications described in this paper are illustrations of what might be called conferences; their topics range from issues in education to subjects in transportation, energy, government policy, and chemical technology. Each of them involves activities that are, in some way, related to knowledge synthesis, bringing together diverse strains of existing knowledge in their topic area to create a new understanding. Each of them also illustrates some fundamental patterns that could eventually lead to a new concept of knowledge synthesis.
SOME BASIC PATTERNS

When we look at the uses of computer-based systems to date, we find some basic patterns in the way people communicate, the way they handle information, the way they solve problems. Each of these patterns can be compared to traditional approaches of aggregating, interpreting, and applying knowledge from a variety of sources. They suggest some possible innovations in the process of knowledge synthesis.
Pattern 1: Lots of Interpersonal Interaction

Everything that happens in a computer conference involves interaction between people. Thus, if the computer conference is the main arena for a synthesis project, the process is likely to be more interactive than traditional processes of knowledge synthesis. This pattern leads to an interactive approach to information. People rely more and more on interaction among each other as the procedure for obtaining information; they rely less and less on a process of extracting it from traditional sources such as published books, journals, and data bases. In each case, there are examples in which a participant asks, rather casually, whether another participant knows anything about a particular problem (which may not even be directly related to the task at hand); almost always, other participants respond either from their own experience or from a contact with someone they know. Both the inquiring member of the conference and the group as a whole have access to this exchange.

A four-month conference within a private corporation illustrates this pattern well. The conference involved eight people from different divisions of the company; its purpose was to build a better understanding of the formulation and safety issues involved in a new chemical technology. Among the difficulties that led the group to try computer-based communication were the difficulty of staying
up to date with the field (due to problems of access, over-secrecy, and vagueness of formal reports); the difficulty of getting information at the times when it was most needed; and the difficulties that arose when project people became so immersed in their own projects that it was difficult for them to track related developments.

The participants in this conference took responsibility for keeping each other up to date and for providing immediate or nearly immediate access to information. For instance, at one point in the conference, one participant described a problem he had encountered in a safety testing procedure; a computer conference colleague from another division checked with one of his co-workers and reported some suggested solutions for the problem. As a by-product of this interaction, the person with the problem also learned of another person in the company with relevant expertise. In this sense, the conference became a kind of electronic hallway-with-proverbial-water-fountain; the result was an extension of opportunities for casual information exchange, in which access to things of value-in this case, information-was obtained primarily through contacts, through a network.8

Projecting this pattern into a future in which computer conferencing is used extensively for synthesis-like activities, we anticipate that information will be perceived quite differently than at present. Underlying the current view of knowledge synthesis is a perception of knowledge as an accumulation of information, as "facts." Information, in the current view, is fixed, concrete, something that exists independently of either the person who produced it or the person who might use it. It is divisible; it has parts that can be separated and used individually. It is essentially additive: information added to information produces more information, and more information generally means more complete knowledge. In short, information is an object that can be handled, altered, and used like any other object in the real world. Perhaps the only difference between it and other objects is that it cannot be discarded outright. All information must somehow be incorporated in any new information; if it does not fit, some account must be made for its exclusion.

It is worth noting that such a view of knowledge is very appropriate to a culture in which print is the primary means of communicating ideas and experiences. Print has the same quality of concreteness as the scientific fact. It is an object, made up of component parts that can be reduced or combined. And it is persistent. It stands in archives, demanding that account be taken of it.

Information in a computer conference is more ephemeral, softer, less independent. So in a world of computer technology, information may not be perceived as existing independent of the interaction process or of a problem being solved. Rather, it would be created with new and different significance for each problem or need. This creation of information is a group process; it emphasizes the exchange of information rather than its capture in categories that are generalized to many problem areas.9

This more interactive approach to information can be seen as adaptive for several reasons: first, it is a manageable response to the problem of information overload. Computers have a tremendous potential for generating, storing, and
displaying data. If guided by a "capture" model of information storage and retrieval, computer information systems will only continue to aggravate current problems of "keeping up with the field." The concept of keeping up with the field stresses an inherently passive approach to information: so long as researchers are preoccupied with finding out what others are saying or have said, they are not saying anything themselves. They are not creating. Their work is being defined only in terms of what others have done. They are responding rather than initiating, being cautious and well grounded rather than taking intellectual risks. While knowledge synthesis can be done this way, it is not as likely to produce the "qualitative leap" often associated with significant syntheses. Uses of computer systems that promote this passive model might actually function as a harness that reduces personal risk by constraining one's thinking to minor variations on what has been thought before.

By contrast, an interaction or exchange-oriented model would build on the concept of a network in which data flow naturally toward the problem to be solved, interpreted along the way by people who have some understanding of the problem and hence some clarity about the type of information-as opposed to data-that is needed to address the problem. The focus is not then on "keeping up with the field" but on providing enough information about the problem to enough different sources-human sources-of data so that they can participate in interpreting the data and creating the information base for the synthesis.

There are some potential problems with the interactive approach and its rather amorphous view of information. For example, there will be some anxieties about the systematic quality of the new brand of knowledge synthesis; after all, scholarship has always been defined in terms of systematic review of what has gone before. But this model of scholarship is grounded in a print-based culture, and for the reasons stated above, it simply may not be adequate for a culture of electronic media. Beyond the anxieties about systematic review, there is likely to be a tendency toward more duplication of efforts.

Finally, the approach to information and knowledge suggested by this emphasis on interaction may prove very dependent on individual personalities. The work on groups and social networks suggests that the flow of information is as dependent on informal social factors as on task-related expertise; there is no reason to assume that computer-based groups and networks will be different. Thus, the products of computer-based syntheses are likely to reflect the particular social and personality variables much more directly if networking is stressed in the process . . . an observation that is related to our next basic pattern.
Pattern 2: Emphasis on the Group

Computer conferencing is a unique blend of group and text-based communication. Typically, text-based communication has been either one-to-one communication or one-to-many communication. Group communication has been largely
verbal. Computer conferencing is both, and the blend can be somewhat uncomfortable for those engaged in synthesis activities. Technologies such as handwriting, typewriting, and printing have apparently encouraged a single-person approach to synthesis. This tendency is often explained by the so-called "linearity" of these modes of communication. Whether or not linearity is the explanation, most people will express a preference for-or even argue the necessity of-working alone when it comes to the basic "pulling together" of information. Maxims like "too many cooks spoil the broth" or "a camel is a horse designed by a committee" illustrate the reluctance to engage in group syntheses: it is inefficient.

Users of computer conferencing don't automatically change their minds about this matter either. They consistently give high marks to the technology for activities like exchanging information and opinions and for general discussion of ideas. Resolving disagreements, which involves a blending of information and ideas into a harmonious whole, gets rather poor scores.10 This attitude is not universal, however. For example, participants in a conference on individually guided education graded the medium as "fair" for resolving disagreements, compared to the scientists and engineers in some of the more technical conferences, who gave it less-than-satisfactory scores.11

But even the scientists and engineers who were involved in a conference on the future of transportation were able, ultimately, to accomplish their objectives. The conference lasted five and one-half months and was sponsored by the National Aeronautics and Space Administration. Nineteen participants used a computer conferencing system to critique and integrate draft sections of a report to make a series of recommendations concerning research and development for intercity air and ground transportation through the year 2000-a task that would be difficult in any medium.

The transportation group illustrates well the kind of pattern that is likely to dominate synthesis in computer-based systems. Because of the overwhelming presence of the "group," one might predict that the integration of information and development of new knowledge will be accompanied by more-than-usual attention to group process. And, in fact, analyses of the entries in the transportation conference showed that substantive entries totaled 23 percent of all entries, while procedural and social entries totaled 36 percent-half again as many as the substantive entries.12

If users of computer conferencing eventually become comfortable with group work, then, we can probably expect group process to figure strongly in the concept of synthesis. Those doing synthesis will likely claim that "if the process isn't right, the product won't be either." Different groups may value different processes-those stressing equality or those stressing structure, for example-but they will all be very aware of process. Solutions to problems will also be viewed differently. Solutions will not be seen so much as rational paths to be identified and followed; rather, they will be seen explicitly as something that is negotiated through a social or even political process.

The emphasis on group process also has a potentially dark side, namely the possibility of a "group think" syndrome. While computer-based systems may pro-
duce a proliferation of different views, they may, in some cases, obscure divergent views. Compared to face-to-face discussion, communication via computer is characterized by restricted channels of communication. In particular, nonverbal messages about cultural background, organizational commitment, and goals and objectives may be less perceptible. These missing cues may give the group a false sense of consensus or-at the other extreme-exaggerate stereotypes of issues. The syntheses that result from such a process could be not only ineffective but dangerous.

One of the key issues in structuring computer-based groups will be the membership. Who participates? As we noted with Pattern 1, personalities are likely to play an increasing importance in the synthesis process if synthesis becomes more interaction oriented. In regard to "group think," divergent group membership would certainly be helpful, while too homogeneous a membership could create problems. Careful attention to the list of participants could help avoid these problems and even solve one of the basic problems with the current practice of knowledge synthesis, as Pattern 3 will illustrate.
Pattern 3: Users with Producers

With the current view of information as an object existing "out there," there has developed a basic distinction between producers, who produce information, and users, who apply it. There is something akin to a class struggle here, with each class generally feeling a little self-righteous. The producers of information are most often guided by their understanding of the current "state of knowledge"; they judge themselves successful when their work advances the state of knowledge. The users, on the other hand, are typically guided by a specific problem; they judge themselves successful when they are able to bring information to bear on a specific problem. Obviously, neither group will completely appreciate the efforts of the other, and in fact, their failure to communicate is one of the key problems cited in many areas of research-especially in education. The distinction between the two groups may shift from situation to situation, but most people will know clearly when they are functioning in each role-and their expectations of the "information" will vary accordingly.

With computer-based systems and a more dynamic concept of information, we might expect a blending of roles of users and producers of information, since those who produce the information will be interacting more directly with the users and more directly with the specific problem. Users and producers will be more likely to generate information jointly, rather than one for the other. Also, given the ease with which computer networks cross institutional and geographic boundaries, computer-based communication systems may also bridge some of the institutional divisions between those who are practitioners and those who are attempting to provide a knowledge base for improving the practice.

For example, a series of conferences has been held using a system known as Legitech.13 These conferences typically involved both legislators and policy ana-
lysts in exchanges that varied from broad explorations of alternative approaches to solid waste disposal to discussions of specific legislative restrictions in purchasing procedures. These conferences offered an alternative to the traditional approach in which policy analysts, commissioned perhaps by some government agency, would meet, analyze an issue, and then present their results to the group. In the Legitech system, anyone involved in the conference could ask a question, using a specific format; those who were interested could go to another level of the conference to get a more detailed explanation of the question, and then to still another level to view responses or respond themselves. Thus, the criteria that brought people together to discuss a specific subject were not institutional affiliations or job descriptions, but interest and competence in the specific questions being addressed.

In such a process, roles become less clear, and it is difficult to discern who is producing information for whom. The information simply "grows" in response to a specific question.

Beyond the breakdown of the conceptual barrier between users and producers of information is the potential for those engaged in research to take more responsibility for the way that knowledge is applied. Conversely, practitioners in this kind of interaction take more responsibility for the way that research problems are defined and information is generated. These are both practical and moral requirements in today's world. The pattern of blending the roles of information user and information producer speaks directly to these requirements.
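The layered question format described above amounts to a simple three-level data structure. A sketch follows, with invented names rather than Legitech's actual commands, and with sample content made up for illustration:

    from dataclasses import dataclass, field

    @dataclass
    class Inquiry:
        """One topic in a Legitech-style exchange, in three levels."""
        query: str                 # level 1: the brief question everyone sees
        elaboration: str = ""      # level 2: fuller detail, read only on request
        responses: list = field(default_factory=list)  # level 3: answers

        def respond(self, author, text):
            # Anyone with interest and competence may answer; institutional
            # role does not determine who produces information for whom.
            self.responses.append((author, text))

    inquiry = Inquiry(
        query="What alternative approaches to solid waste disposal have other states tried?",
        elaboration="Our committee is drafting a bill and needs comparative experience.",
    )
    inquiry.respond("policy analyst", "Several states have piloted resource-recovery plants.")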
Pattern 4: A Valuing of "Chance"

Gordon Thompson, after a historical review of previous communications innovations, concludes that one of three primary characteristics of a communication revolution is that it has "increased the ease with which shared feelings could be discovered and developed in the host society."14 Such "chance" encounters to develop new contacts and possibilities can easily be enhanced by computer-based communication systems. Users can use the system as an "electronic sidewalk" (Thompson's term) to stroll along and make new acquaintances, whether they be across town or on the other side of the world. Computer capabilities for storing broad ranges of information and drawing associations by various criteria (including random matches) also add to the potentials for chance encounters. Such capabilities might be employed to bridge differences among diverse audiences who rarely have the opportunity for direct contact (for example, academics and lay people).

One example of the use of computer conferencing to expand chance encounters of ideas is provided by a company that uses the system to support a kind of knowledge synthesis activity among its staff. This activity involves a computer conference that has only a very general topic and no specific goals other than to serve as a forum for ideas related to the general topic. Participants are encouraged to invite others to participate-others who may have some interesting perspectives. No attempt is made to control the direction of the conference or list of participants.
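The "random matches" mentioned above need no more machinery than the following sketch, which shows how a system might introduce two participants by chance. It is purely illustrative:

    import random

    def chance_introduction(participants):
        """Pick two participants at random and suggest they meet.

        An "electronic sidewalk" encounter: the pairing is deliberately
        uninformed by topic, role, or location.
        """
        a, b = random.sample(participants, 2)
        return f"{a}, you may want to compare notes with {b}."

    print(chance_introduction(["an academic", "a lay reader", "a planner"]))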
In general, the computer conference creates an atmosphere of chance. Because it is asynchronous-that is, people come and go according to their own schedules rather than all being "on line" at the same time-participants never know exactly what to expect when they enter the conference. There may be no new messages or 50 new messages. The discussion may have taken a new and entirely unexpected turn. They may be entirely alone in the conference (this is usually the case), or they may be pleasantly surprised to see a colleague from across the country log on just as they are reviewing the transcript from the previous two days.

As people become more comfortable with this lack of control and structure, as they begin to enjoy the serendipity of computer conferencing, they may grow to value chance in all of their work. In the case of knowledge synthesis, such an emphasis on chance would be likely to mean a much broader base of knowledge being synthesized, plus more creativity in the process, since more divergent strains would be brought together.

It should be noted, however, that computer-based systems could also be used to increase specialization and narrow the range of one's contacts. Reading an electronic newspaper, for example, could encourage readers to look for those "stories" they already know they want. They could be encouraged to look only at stories in a narrow range of topics rather than browsing, as in reading the pages of a conventional newspaper. It would perhaps be most accurate to say that computers increase the range of options for both increased specialization and exposure to a wide range of chance meetings.

Also, assuming that the virtues of chance syntheses are acknowledged and encouraged, another potential problem could arise: chance encounters could be emphasized at the expense of more intentional activities. If this should occur, the user would obviously have less control over synthesis and probably have less success as well.
Pattern 5: Asynchronous Communication, Asynchronous Thought

Another product of the asynchronous nature of computer-based communication is the potential for calmer, more reflective thought by a group of people. The environments in which knowledge synthesis currently takes place are often hectic environments. The activities of the day are driven by interruptions-ringing telephones, visitors, crises to resolve. Furthermore, when interacting in a group, people are usually under pressure to respond immediately. They are reluctant to take group time to examine a detail about which they personally may need more information. They are often reluctant to say, "Let me think about it."

In computer-based communication they don't have to say it. Such systems are typically used asynchronously: each person checks for new messages according to a personal schedule, makes new entries, and leaves. There is no requirement for simultaneous communication, as with face-to-face meetings or on the telephone. This self-activated quality, present in many forms of computer-based communica-
tion, could open the way for more conscious use of private reflection time as part of the synthesis process.

In actual uses of computer conferencing, this potential is realized in different ways. It may be as simple as the use of a series of private messages to provide bibliographic data for someone who mentions a problem. Or it may be a series of exchanges in which an outside resource is consulted. Often someone will mention an idea from a book in the context of a conference. A second person, intrigued by the reference, may look it up, think about it, and return to the conference with a new commentary. The person who originally referenced the book may then find it necessary to reread portions of the book before responding. The dialogue that results could never have happened face to face.

It is obvious that synthesis requires reflection time; personal insight is often at least as important as systematic search procedures. This sort of intellectual effort requires personal discipline, careful preparation, and time. It is not done mechanically.

Of course, not everyone will experience computer-based communication as "low-demand" communication. For instance, what is low demand for one person may be "on call" for another. Rural telemedicine experiments have demonstrated this point clearly; a system that provides new links for a rural area may provide new burdens for a central hospital. In the case of knowledge synthesis, the situation is different, but with key similarities. External constraints or the desires of others working on the synthesis may make it difficult for each person to have full control over his or her own thinking sequences. The new technologies introduce new opportunities, but also new complications.

Finally, a possible problem with the asynchronous thought pattern is the possibility of placing inordinate emphasis on the personal thought process. Raising personal inspiration to a mystical pedestal could provide an easy excuse for avoidance of the more routine-but important-aspects of the knowledge synthesis process.
Pattern 6: Divergence

Knowledge synthesis-or any synthesis-evokes the image of bringing together many diverse elements into a single whole that is somehow “larger than the sum of its parts.” It is a process that starts out divergent and then converges. Convergence is often hard to obtain in computer-based group communication. Systems that support such communication are typically seen as useful for brainstorming and generating new ideas-divergent processes-but inefficient for activities that require convergence.

For example, a very successful computer conference in education was the one mentioned earlier that addressed the topic of individually guided education. This was a four-month conference, sponsored by the Kettering Foundation, to facilitate the exchange of ideas and plans among eleven educational consultants working in the field of individually guided education. These participants were located across the country in areas where they had little intellectual support and few resources, yet they were actively involved in designing programs. The computer conference provided a meeting ground for developing an understanding of the state of the art. Yet the statement of the state of the art was not its purpose. Thus, it could continue to diverge up to the final messages-and it did.

This emphasis on divergence is well suited to an operating environment that appears increasingly complex. More and more, social analysts are suggesting that the “problem-solution” model of research is inadequate. Social processes are seen as “messes” rather than well-defined problems; in this situation, what is needed are processes that generate multiple variables and multiple relationships among those variables. Also, in this situation, problem solving requires a multidisciplinary approach, an evolving approach. The problems of such approaches lie primarily in the difficulty of communicating-of conceptualizing the immediate issue-when people hold different, rigid, precise concepts from their own disciplinary perspectives. A more fluid conceptualization process could help to reduce this communication barrier. And divergent processes, in which each person has access to a broad range of inputs but no group statement is required, may be just the kind of fluid process. It may encourage an informal approach to knowledge synthesis, as Pattern 7 suggests.
Pattern 7: Informality

Inherent in the notion of knowledge synthesis has been the expectation of continuing transformation of concepts. In practice, however, syntheses-particularly very good syntheses-become barriers to continuing development of ideas. They take on an authority that resists change; a resistance that ultimately encourages a reformulation, a new paradigm.15

This process, however, is a pattern grounded in a print-based culture. As we have suggested, information in this culture is a collection of facts, of independent objects. Because it is, it can be classified. It can be classified by disciplines, fields, and subjects. It can be classified by library classification schemes. It can be classified by its similarities with and differences from other “units” of information. And the classifications can become very fine. This classification process, besides involving people in the task of actively analyzing the information to classify it, produces other behaviors. It tends to encourage communication along the lines of classifications, that is, within disciplines, fields, and subjects, rather than across them. It also tends to encourage knowledge to develop within these classifications, rather than independently of them. It encourages people to look in one pigeonhole, ignoring the rest, when they are pulling together information for a synthesis. In short, the concept of information-as-fact leads to a formal approach to knowledge with formal problem statements, formal classes of data relevant to the problem, and formal syntheses, formally stated.

By contrast, communication in computer-based systems is almost always less formal and less concerned with form than print-based media. Hence, it is not surprising that concepts themselves might become less formal with less emphasis on their being complete, authoritative “packages.” This informality could have a number of impacts on the way that people do knowledge syntheses. For example, as we have already suggested, there might be no expectation of the “final word” on a topic; in fact, topics themselves are likely to become much less clearly defined in a world of electronic knowledge synthesis. The emphasis in computer-based communication on increased interaction and the breakdown of distinctions between users and producers of information, together with the informality of that medium, suggests that disciplinary boundaries may become less stable, that fields may intermingle. Categories and categorization systems would be less static and more dynamic. The goal to develop concepts that are encompassing, while simple and precise, may be replaced by a goal to develop concepts that are sufficiently ambiguous to provoke creative, novel thinking-to continuously reshape the concepts.

This more dynamic process of conceptualization may eventually be perceived as more trustworthy than traditional syntheses precisely because it is less formal. With the still-growing number of journals and books and newsletters, formal “packages” are likely to lose some of their credibility. They certainly must compete continuously among themselves, and such competition often leads to an emphasis on packaging more than on substance. A less formal, more dynamic approach to conceptualization would force attention toward the concept itself, and hence be viewed as more “honest.”

Finally, the informality of computer-based communication creates an environment conducive to more intellectual risk taking. Because the knowledge synthesis process is informal and does not demand precision, new ideas and new formulations can emerge more rapidly and be tested in the field more readily.
Pattern 8: Technology as Participant

Communications media are often seen as more or less invisible; they are channels of communication but not participants in the communication process. While this view can be questioned for all media, it is particularly questionable in the case of computer-based communication systems. Such systems generally combine a communication channel with specific computer capabilities. Thus, users of such systems may interact with the computer in the context of group communication; in this situation, the system takes on some of the qualities of a participant in the communication process.

As a participant, a system may not only provide information-such as data from a variety of data bases or answers from the execution of a problem-solving program-it may also structure the communication in a variety of ways. Management information systems, for example, often require communication in set forms that are standardized across an organization. While designed primarily for administrative purposes, such systems may find their way into the world of research, particularly in the private sector. Thus, categories and forms of communication designed for purposes other than research may ultimately influence the practice of research.

Some systems may also be designed specifically to structure the research process. The HUB system, a prototype system at the Institute for the Future, is an example.16 This system encourages users to communicate “through” computer programs; comments are stored and displayed with various program runs to which they refer. Such systems may have the effect of reformulating or reframing a concept. Furthermore, a computer-based communication system may be programmed to moderate the communication patterns in particular ways: to collect certain information from all participants, for example, or to monitor participation and prod those whose participation is low while restraining those who are dominating the process. While the actions of the technology will reflect the goals and values of the users-the balancing of participation, for example, reflects the value placed on equality-the technology itself will contribute to the understanding of data in ways that may not be recognized or even recognizable.

Computer graphic systems are perhaps the most striking example. Such systems are being used increasingly to help people visualize processes that are very complex and difficult to describe or understand verbally. These systems need not present only one visualization; the computer can conjure up 100 different visualizations as easily as one. The intellectual concepts that proceed from these various visualizations might thus be more like probes or triggers than comprehensive accounts of the process under study, supporting the dynamic, informal conceptualizations described in Pattern 7.

Computer graphics may affect the synthesis process in another way, too. It may provide people with a more direct experience of information. A visual representation of numerical data is often assimilated more rapidly than are the numbers themselves, because it is experienced rather than analyzed. These experienced understandings also often carry more credibility than a verbal description or analysis because people tend to define as real things that they can experience directly. Nevertheless, the technology itself is contributing to the experience: the CRT on which the visual image is displayed has certain characteristics that allow certain experiences of the data and not others-just as the human physiology allows certain experiences of sensory data and not others. The computer’s capability to extend the experiences does not make it less subjective. It simply offers a new kind of subjectivity. In this sense, the computer represents an alien intelligence.

One of the problems that is likely to arise with this pattern of the technology as an active participant in the knowledge synthesis process is the question of responsibility. If the technology assumes a more active role, it may also come to be seen as responsible for the outcomes, relieving humans of this responsibility. At one level, this tendency can be expected to foster the expectation that technological systems are the solution to social problems. At a more subtle and possibly more dangerous level, this assignment of responsibility to the technology may lead to a feeling of intellectual impotence. In any case, it seems likely that users of computer-based systems will anthropomorphize the computer and invite it into their discussions. Examples of this behavior already exist; the following message appeared in the conference on individually guided education:

Checking in at 7:55 a.m., C.S.T. shows no new entries since I last talked to “Jennie” . . . by the way, if I am going to swear at something, it has to have a name . . . so this thing has been named Jennie!
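The participation-balancing behavior mentioned in this pattern-prodding the quiet while restraining the dominant-can be sketched in a few lines. This is a hypothetical illustration with invented thresholds and messages, not the logic of any system cited in the text; it merely shows how a value placed on equality might be encoded as a moderation rule.

    def balance_participation(entry_counts, low=2, high=10):
        """Map participant -> recent entry count to gentle prompts.
        Thresholds and wording are illustrative only."""
        prompts = {}
        for person, count in entry_counts.items():
            if count < low:
                prompts[person] = "The conference has not heard from you lately."
            elif count > high:
                prompts[person] = "Consider pausing so that others can respond."
        return prompts

    print(balance_participation({"alice": 0, "bob": 14, "carol": 5}))
    # alice is prodded, bob is restrained, carol is left alone

Even so simple a rule illustrates the chapter's point: the technology is no longer a passive channel but a participant whose programmed values shape the conversation.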
COMPUTER-BASED KNOWLEDGE SYNTHESIS: SOME NEW CONCEPTS

The patterns of communication that we have observed in computer-based communication suggest some new approaches to the activity we now call knowledge synthesis. They suggest that this activity will be an interactive one in which information is created for specific needs. This information does not pre-exist, but evolves as participants in the synthesis process jointly interpret the existing situation. Users of information will be simultaneously producers of information and vice versa. Together, they will stretch their common understanding in several directions until a new understanding emerges. The emphasis in this process will be on group interaction more than on efficiency of the process; hence, there will be a high awareness of group process. The understanding that emerges from this process will not be the final word on the subject, but rather an “opening”; continuous transformation of concepts will be a conscious goal. Thinking will be more divergent and less formal. And it will be more obviously and deliberately a response to the communication technology itself.

Such an activity may barely resemble knowledge synthesis as it is practiced today. And in fact, the very concepts of knowledge and synthesis are likely to be different. Knowledge, growing out of this kind of process, will be less objective. It will be more closely tied to feelings, the shared feelings of a group of people. This emphasis on feelings follows from the interactive, informal nature of knowledge synthesis activity; from the linking of knowledge “products” to group process; and from the tendency to experience knowledge through the technology, through situations-or simulations-created by the technology.

Synthesis will also carry a different meaning in the world of electronic group communication. With a less objective, more subjective concept of knowledge, synthesis will not be so much a pulling together of information and transformation of concepts as the active creation of shared experiences for the group. In other words, when a group sets out to do a synthesis, their intention will be to synthesize an experience, from which some new group understanding will emerge.

In the future, the concept of knowledge synthesis, as the authors in these chapters have been discussing it, may fade away. In its place will be a new concept, a new kind of activity. We might best call this activity synthesized experience. At this point, we can only begin to imagine the nature and impact of such an activity.
RESOURCES

Two basic resources on the use and potential of computer-based communication systems are Robert Johansen, Jacques Vallee, and Kathleen Spangler, Electronic Meetings: Technical Alternatives and Social Choices, Reading, MA: Addison-Wesley, 1979; and Starr Roxanne Hiltz and Murray Turoff, The Network Nation, Reading, MA: Addison-Wesley, 1978.
NOTES

1. A general review of the small group literature can be found in A. Paul Hare, Handbook of Small Group Research, second edition, New York: The Free Press, 1976.
2. See Murray Turoff, “Delphi and Its Potential Impact on Information Systems,” AFIPS Conference Proceedings, Volume 39, Montvale, NJ: AFIPS Press, Fall 1971, pp. 317-26; and Murray Turoff, “Delphi Conferencing: Computer-Based Conferencing with Anonymity,” Technological Forecasting and Social Change, Vol. 3, 1972, pp. 159-204.
3. Jacques Vallee, Robert Johansen, Hubert Lipinski, Kathleen Spangler, Thaddeus Wilson, and Andrew Hardy, Group Communication through Computers, Volume 3: Pragmatics and Dynamics, Institute for the Future, Report R-35, October 1975; and Robert Johansen, Robert DeGrasse, Jr., and Thaddeus Wilson, Group Communication through Computers, Volume 5: Effects on Working Patterns, Institute for the Future, Report R-41, February 1978.
4. See, for example, Hubert Lipinski and Richard Adler, Computer-Based Support for Group Problem Solving, Institute for the Future, Research Report R-51, December 1981.
5. See, for example, Peter Johnson-Lenz, Trudy Johnson-Lenz, and Julian M. Scher, “How Groups Can Make Decisions and Solve Problems through Computerized Conferencing,” Bulletin of the American Society for Information Science, Vol. 4, June 1978, pp. 15-17.
6. For example, Northern Telecom has introduced a system called “Integraph,” which provides interactive graphic communication along with audio teleconferencing.
7. For a discussion of this potential, see Hubert Lipinski, Robert Plummer, and Kathleen Spangler Vian, “Interactive Group Modeling, Part I: Extending Group Communication Through Computers,” Institute for the Future, Report R-44, Menlo Park, California, 1979.
8. It is interesting to note that Peter Gerstberger and Thomas Allen have found that information seekers tend to go to the sources nearest them for help, rather than to the best sources. (See Peter Gerstberger and Thomas Allen, “Criteria Used by Research and Development Engineers in the Selection of an Information Source,” Journal of Applied Psychology, Vol. 52, No. 4, 1968, pp. 272-9.) In our work with computer conferencing, we have found this same pattern, except that the computer conference changes the perceptions of who is nearest; it alters the “intellectual architecture” of the office. (See Robert Johansen, Robert DeGrasse, Jr., and Thaddeus Wilson, Group Communication Through Computers, Volume 5: Effects on Working Patterns, Institute for the Future, Menlo Park, California, 1978.)
9. These contrasting qualities of information in print and computer-based media parallel the distinction made by Brenda Dervin. See her paper in this collection.
10. Jacques Vallee, Robert Johansen, Hubert Lipinski, Kathleen Spangler, and Thaddeus Wilson, Group Communication Through Computers, Volume 4: Social, Managerial, and Economic Issues, Institute for the Future, Menlo Park, California, 1978, p. 116.
11. Ibid., p. 117.
12. Ibid., p. 35.
13. Chandler Harrison Stevens, “Many-to-Many Communication Through Inquiry Networking,” World Future Society Bulletin (November-December 1980): pp. 31-35.
14. Gordon B. Thompson, “An Assessment Methodology for Evaluating Communications Innovations,” IEEE Transactions on Communications, Vol. COM-23, No. 10 (October 1975): p. 1048.
15. This, of course, is the picture of science painted by Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
16. See Lipinski, Plummer, and Vian, “Interactive Group Modeling.”
PART FIVE
Implementation
Implementing and Integrating New Technical Processes and Tools
Dorothy Leonard-Barton
A pilot who sees from afar will not make his boat a wreck.
-Amen-em-apt (700 B.C.), Egyptian Philosopher

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
-George Bernard Shaw, Man and Superman (1903)

Knowledge is the only instrument of production that is not subject to diminishing returns.
-J. M. Clark1 (1927), Economics Professor, Columbia University

The preceding chapter suggested that shared problem solving leaps to a new level of creativity when managed for creative abrasion. In this chapter, we examine how implementation of new technical processes can move beyond merely increasing efficiency when managed for learning. (See Figure 11-1.) Examples herein focus more on internally generated processes and tools than purchased ones since the former embody proprietary information and potentially contribute more to core technological capabilities. (The contribution is potential because core rigidities can also embody much proprietary know-how; the uniqueness of knowledge is no guarantee of worth.) Moreover, when implementation is seen as an act of innovation rather than the mere execution of a plan, integration of even those tools and processes available on the open market can constitute a competitive advantage.

Reprinted by permission of Harvard Business School Press. From “Implementing and Integrating New Technical Processes and Tools,” from Leonard-Barton, Dorothy, Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation, Harvard Business School Press, 1995. Copyright (c) 1995 by the President and Fellows of Harvard College, all rights reserved.
IMPLEMENTATION AS INNOVATION

The story . . . of the ill-fated jumping ring circulator’s introduction into an aluminum mill encompassed many of the problems that implementation of new tools often encounters-from immature technical design to incompatibility with operator incentives. The single biggest underlying cause for the demise of this initially promising innovation was the quite understandable but simplistic assumption that physical installation was the sole project objective and criterion for success. Implementation of the JRC was not regarded as an exercise in knowledge creation or management.

In a study of thirty-four projects that developed software tools to enhance internal productivity in four large U.S.-based electronics firms, Leonard-Barton and Sinha2 found that, in addition to the quality and cost of the technology, and its initial compatibility with the user environment, two managerial processes were important in explaining different levels and types of successful implementation. The first of these was the degree and type of user involvement in the design and delivery of the system, and the second was the degree to which project participants deliberately altered the technology and also adjusted the user environment in a process of mutual adaptation.3

Both of these processes essentially involve managing the creation and channeling of knowledge. They are very similar to the management tasks involved in new-product development, except that the market is internal to the organization. However, implementation is not usually managed as if it were an exercise in innovation, and that is the key point in this chapter.
USER INVOLVEMENT

Two generic reasons are typically cited for involving users in the development of a new technical system: (1) implementation implies some level of change in the users’ work, and research on change suggests that people are more receptive when they have contributed to its design; and (2) involving users in the design of their tools results in superior designs since users have specialized knowledge about the environment in which the tools will be utilized, and that knowledge should be embodied in the design.
FIGURE 11-1 Knowledge-Creating Activities: Implementing and Integrating New Technical Processes and Tools (the figure locates implementing and integrating along a spectrum of knowledge-creating activities running from external, such as importing knowledge, to internal capabilities)
Creating “Buy-In”

For almost a generation, managers have realized that people participating in the design of their environment appreciate the sense of control that such participation provides.4 By involving users in the design of their tools, managers create “buy-in” to the implementation process-i.e., some receptivity to the change the new tools imply. The obverse is also true: leaving users out of the development process may generate dissatisfaction with the new tools. A developer in a highly controversial and unsuccessful project to build a proprietary computer-aided engineering tool observed of the engineers in the department designated to use the innovation: “We could have given them the most wonderful system in the world and they would not have been happy because they were totally excluded from its design.”
Embodying Knowledge The second motive for involving users in process development projects is more germane to this book, however. As noted in the discussion in Chapter 1 about the four dimensions of core capabilities, technical systems embody accumulated knowledge, aggregated from multiple sources inside and outside the organization. Developers rarely possess all that knowledge themselves but must interact with users to create, or capture and structure, and then embody the requisite knowledge. Tool developers understand the principles and scientific knowledge
that underlie the new process tool: the technical engineering or scientific knowledge required to build the tool itself-for example, software, hardware, chemical, or biotechnology engineering principles. However, it is the proprietary know-how about specific tasks in the organization’s particular work environment that is critical: such know-how adds the potential for a tool to become part of a core technological capability. Highly skilled users who understand their own work processes are usually the source for such know-how.5
Merits of User Involvement

Involving users in the design of a new process tool does not automatically lead to a successful project outcome. In fact, there has been much debate in academic literature about this issue since different studies have found relationships between user involvement and project outcome that range from positive to neutral to negative.6 Confusion about the benefits of user involvement has arisen in large part, however, because so many studies have treated the topic simplistically. They usually fail to take into account the selection of users, the timing of their involvement, the nature of the involvement required by the relative novelty of the system being built, the users’ ability and willingness to provide the right kind of knowledge, the expectations of users and developers about the nature and extent of the knowledge to be embodied, and so forth. Yet as the examples in this chapter illustrate, such factors explain why user involvement sometimes seems integral to success and at other times is inimical to it.

User Selection
If the objective for involving users in the development of new process capabilities is to integrate their knowledge about operations into the design of the new tools, then selecting those users is a critical managerial task. The criteria for selection are often far from clear. The question is, What kinds of knowledge should users possess in order to guide problem solving and the creation of the system? For instance, is it more important that users be expert in the task to be aided, so they can provide critical comments on the functionality of the tool, or that they typify the user population in their ability to manipulate the user interface? As the following example suggests, the two different kinds of knowledge do not always come in the same human package.
Differing Forms of Expertise

A corporation that manufactures large air-conditioning systems for commercial and apartment buildings employs a sizable field staff to maintain and service equipment. In designing the expert system “HELPER” to use on-site to check maintenance and service tasks, the developers searched for the most experienced maintenance people they could find. They finally identified and enlisted the help of a union pipe fitter, “Bill James,” with over twenty-five years of experience and a reputation as an excellent diagnostician. However, after moving James and his family over 1,500 miles to be near the development effort, they discovered that he did not possess the requisite knowledge. Although James knew how to diagnose and service the chillers, he responded to symptoms with very little understanding of the inner workings of the chillers-i.e., the causes of problems. To construct the expert system rules, the software engineers enlisted an instructor in maintenance from a local vocational technical school who was deeply versed in the electronics and mechanics of the systems but who retained the perspective of a user-not an engineer/designer.

On the other hand, James was an excellent choice to help developers design a system interface for use by “a hairy-chested pipe fitter who had never seen a computer before in his life.” He had no prior experience with computers but was very familiar with the task. The HELPER system was ultimately successful, paying for its development in less than a year because the contract base for service to chillers grew by 40 percent within the first six months after the system was installed. However, the developers were left wondering how they could have ascertained in advance that their expert maintenance man could not provide the needed domain knowledge.
Representativeness

The same dilemma exists with the first site that tests a new tool. If the tool is to be customized for only one set of internal customers, the selection issue is relatively simple. However, if the new technical systems will be distributed to multiple offices or factories throughout the corporation, then the choice of the user site to help develop and test prototypes becomes crucial. A comparative study of three plants implementing a software package uncovered significant hazards associated with the unwitting selection of an atypical user site to guide design.7

The plant site selected as the first recipient of the package designed by corporate services to automate and monitor purchasing functions within manufacturing differed from other sites in several important ways. The most critical was its atypically low number of long-lead-time purchased parts. Based on its experience with this nonrepresentative plant, the corporate team programmed the software to order this category of parts to arrive only once in six months. When the software was implemented in the other plants’ purchasing departments, where as many as 40 percent of the orders fell into the long-lead-time category, the receiving departments were quite literally buried under incoming components on a given day every six months. This apparently simple miscalculation, with its attendant complications, was very difficult to correct locally at the individual plants, for they did not own or have access to the centrally controlled software code that had to be reprogrammed. The needed adjustments took well over one year to complete and caused considerable friction between software developers and users.
User Willingness

Regardless of the knowledge sought from users, another critical criterion for selection is users’ willingness to participate-to take the time to provide feedback and suggestions. The task of soliciting and convincing users to become involved usually falls to the development manager, who is likely to value “know-who” above know-how and fall back on interpersonal contacts-a plant manager who was a college chum or a production supervisor with whom the development manager has worked before. Willing users may not be representative, of course, and representative users may not be willing. In fact, in a study of end-user computing in forty-four firms, Doll and Torkzadeh found that users who were involved more than they desired in the development of a system were less satisfied with the end result than were users involved less than, or just about as much as, desired.8

User “codevelopers” must be willing to venture far beyond their job descriptions and often outside the boundaries of any reward or incentive system. Reflecting on participating in such a project, one manager of such a user group said, “We take some very large risks. This was one of them. I had people come to me and say, ‘You’re jeopardizing your career. Why do you think this will pay off?’”
Modes of User Involvement
User involvement is a broad term, covering a multiplicity of possible interactions. In the above-mentioned research on thirty-four software tools developed within four large electronics firms, four different modes of user involvement were observed.9 (See Figure 11-2.) On average, the projects in which users were consistently and heavily involved from first to last were completed more quickly (mean of 20.8 months, as opposed to an overall average of 28.9 months across all thirty-six projects). However, the importance of user involvement to project success, as measured by increased productivity and other benefits, varied. Projects in which the developers already had a lot of information about the user environment could succeed without user input. Projects in which developers had inadequate knowledge, either because of their own mind-set or because the tools were revolutionary and the interaction of those tools with the user workplace was uncertain, required shared problem solving, and hence user involvement, in order to succeed. These points are best explored via concrete examples.

Delivery Mode, or “Over-the-Wall”

Among the thirty-four projects, some development teams conceived of a tool in the absence of any user specifications-or even expressed user need. Developers acted as vendors, delivering a completed tool to users, sometimes without training or manuals. It was simply “tossed over the wall” between the two groups, with the expectation that (1) it was completely ready for use or (2) users were capable of figuring out and customizing the new process themselves. If feedback from users was solicited, the impact more likely would be felt on the next generation of the tool or process than on the current one.

In this approach, developers often designed a tool that they themselves would like, so their product concept was drawn from an understanding of their own needs and desires.
FIGURE 11-2 Multidimensional Scaling Map of Modes of User Involvement (projects are arrayed by early versus late and low versus high user involvement, clustering into apprenticeship and codevelopment, early user involvement, consultancy, and delivery modes; J = Japanese)
From the users’ point of view, the over-the-wall interaction with developers was satisfactory only to the extent that one of two conditions existed: (1) the new tool or process was in fact totally self-explanatory, and the users did not need any understanding of its internal workings; or (2) the users themselves were as technically skilled as the developers and needed no help. In either case, there was no expectation that knowledge integration was necessary. Obviously, such a close match between developer and user mind-sets and skill sets is rather rare.

Since there is no user feedback during development, delivery mode presents multiple hazards. Developers may fail to anticipate user needs accurately. Users may not have the skills to integrate the tool into their work environment. They may desire more knowledge than provided; without more explanation and demonstration, they may not understand the potential of the new tool to change their work. Since the delivery mode is a one-way flow of information, no mechanism for knowledge integration exists. In sum, the effort devoted to creating a new technical capability can be partially, if not totally, wasted.
“TWIG”: SUCCESSFUL OVER-THE-WALL

Users receiving a new artificial intelligence program called “Twig” from internal corporate developers said, “[The developers] tossed us a [software] tape, and off we went.” These users were totally unconcerned that the new tool they received had no specific accompanying documentation because a number of them had technical backgrounds equivalent to those of the developers. “We all have advanced degrees in computer science and artificial intelligence; we can read [generic AI] manuals, and there’s no difficulty.” In this case, the users could fill in any gaps in knowledge transfer not accomplished by the tossing of the tape over the wall to them.
“CACTUS”: UNSUCCESSFUL OVER-THE-WALL

“Cactus” was a proprietary software environment used by programmers in various sites around the world to create country-specific operating interfaces for the equipment the corporation manufactured. Although Cactus was highly successful within U.S. sites, when it was transferred to a Japanese partner, the users were highly dissatisfied. The developers sent over the software application-but not the source code that underlay the application. Nor did the developers offer access to any additional technical knowledge. From the Japanese perspective, the interaction was incomplete; no transfer of technology had occurred. “We think that transfer means that we understand the new technology and have an ability to modify the function of some tools by using that new technology,” the Japanese explained. “We learned from [the developers] how to use Cactus-but not how to change it. [The developers’] image of this project was to install Cactus, not to transfer the technology [embodied] in Cactus.”

For Japanese users, the implementation of Cactus did not augment their capabilities enough to warrant investment in learning its operation. More than the incremental improvement to programming productivity, they wanted the capability to improve Cactus, adapt it to their needs-and to absorb underlying technological principles so that they could construct their own software. The U.S. corporate laboratory in this case had never expected to deliver those kinds of benefits; as the Japanese users correctly observed, the developers interpreted their role more narrowly than the users desired. Although the resources devoted to developing Cactus paid back in the United States, those expended on transferring the system were wasted.
Consultancy Mode Developers in a number of the study’s projects believed that periodic consulting with users about features and functions provided adequate opportunity for feedback and user input. When work processes in the user environment (the factory, the engineering department, the office) were relatively well established, and therefore “domain knowledge” was already structured and codified, developers did not believe that users needed to be part of the development team. This mode of interaction seemed to work for upgrades of existing tools or when the corporate objective was to standardize a work process as well as further automate or computerize it. The greatest need for new user knowledge in these situations lay in designing the user interface. The more potential user groups there were, the more difficult the task, of course, and internal vendors were often unfamiliar with the kind of trade-offs external vendors constantly make when designing products for a diverse market. As one internal developer lamented, “We felt we had something that was applicable to a lot of laboratories [in the corporation]. So we tended to listen to everyone and promise everybody everything.” The most successful “consultancy” projects were very large, highly structured endeavors in which user groups were treated like customers with diverse needs and a right to influence, but not totally direct, development.
“MONITOR”: UNSUCCESSFUL CODEVELOPMENT

The users involved in the design of “Monitor,” a software package developed to monitor and control the flow of work in process on a factory floor, were extremely conservative in their demands. They took as their model an existing control system at General Motors, applied it to their current operations, and insisted that the developers fulfill those specifications as closely as possible-which the developers did very well. However, unbeknownst to the operators on the factory floor who were heavily involved in drawing up these specifications, the corporation was moving away from traditional work-in-process monitoring toward just-in-time inventory control. The basic principle underlying this new, mostly visual and manual system was to have as little work-in-process inventory as possible. Rather than a software system that could tell them exactly where a given lot of components was in the huge piles of inventory making their way slowly through the factory, they now needed a system that could handle lot sizes of one-if they needed the software at all. “We would have been way ahead of where we are now with Monitor,” a developer commented ruefully after the project had gone through several major redesigns, “if we had gone to just-in-time at the start.”
Codevelopment

In codevelopment projects, users were part of the development team. Continuously involved in the project, from inception to implementation, they strongly influenced the design of the new tool. Although codevelopment projects were not appropriate only for radically new technical systems, the obverse did seem to be true: the successful development of entirely novel production systems (new technical systems and redesigned work processes) required heavy user involvement. One reason for this was the obviously greater need for user “buy-in” when a new technical system would radically alter the production processes. Even more important was that such projects represented explorations into unknown territory. “We were flying without parachutes,” one team member observed. Other research also suggests that, in general, more interaction between team members exists in the presence of uncertainty.

Codevelopment is therefore the preferred mode when (1) developers are not quite certain how their new system will interact with work processes, and (2) users are not initially certain how they can best redesign work so as to exploit the full potential of the new technical system. Unlike the consultancy situation, in which users help codify knowledge by reacting to prototypes or prior models of the technical systems, in codevelopment, users are helping to create knowledge from a ground base of almost zero.
"CONSTRUCT": SUCCESSFUL CODEVELOPMENT When first suggested by a research group, "Construct," a computer-aidedsystem for designing the operator interface on copier machines, captured the imagination of several different groups of users, including industrialdesigners and software programmers. These different user groups saw the potential of the simulation system not only to help them design on the screen instead of with physical models but also to help them communicate with each other. Through Construct, software designers, industrial designers, and product designers for the first time had a common medium of expression. Far from pushing the Construct developer to duplicate functionality that they currently had, they kept trying to use the tool for new and previously unconsidered tasks, such as simulating a color screen rather than a black-andwhite one. "Igave them a screwdriver," the Construct developer commented in admiration, "and when I came back, they were using it to build the Golden Gate Bridge [across San Francisco Bay]."
One of the primary hazards associated with extensive user involvement (quite apart from the obvious possibility that a revolutionary “stretch” project may fail) is users’ having insufficient “forward vision” to provide good guidance. As a developer commented, “There’s a tendency for users to be fixated on what they’re using today instead of thinking about features they’ll need in three years.” Users can lead the development team into automating history. On the other hand, when users are innovative and can envision where their organization should be headed, codevelopment projects may succeed beyond the expectations of either the users or the developers. At its best, codevelopment creates an esprit de corps like that of the Chaparral Steel product development teams.
Everyone takes responsibility for pushing the boundaries of knowledge and for treating problems as minor delays rather than major deterrents.
"ADEPT": SUCCESSFULAPPRENTICESHIP "Adept," an expert system that identifies and diagnoses problems in circuit boards during manufacture, was built by users in a California factory-a manufacturing test engineer with a long history of troubleshooting in circuit board manufacture and a technician in charge of the work position where most flaws were to be caught. Neither had any software-programming experience. The software developers expert in artificial intelligencetaught the two how to run a proprietary expert system shell and then stood by as mentors and advisers as the two manufacturing people wrote "99 percent of the code themselves." The developers were eager in this case to turn responsibility over to the users. "We took the lumberjack approach: it's your ax-you keep it sharp." Adept was a great success. Before it was instituted, 38 percent of the circuit boards that failed during the final 'burn-in" test could not be diagnosed and were labeled a "no trouble found" component. Because the unknown problem might recur in the field, such circuit boards were discarded, at great cost to the company. Adept reduced that proportion to 19 percent within six weeks. Moreover, the project also succeeded in transferring much technical software-programming capability to the users. 'Education became an additional objective," one of the developers who mentored the project noted. After implementing their new system, the technicians took over the program and continued to fine-tune and develop it. Within a year, they had the "no trouble found" proportion down to 3 percent-at which point, such boards did not even go through a retest as it was cheaper to discard them. Perhaps even more important, manufacturing now possessed a capability it had not had beforeto create small expert systems to help control processes. The manufacturing test engineer went on to build a number of other such programs for use in the factory.
The interaction of the developers and users in this project was an act of creative extension, with each group pushing the other to think beyond current capabilities.
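The chapter does not reproduce Adept’s rules, which were proprietary. As a rough illustration only, the sketch below shows the general shape of a symptom-to-fault rule base of the kind an expert system shell hosts; the symptoms, faults, and rule structure are all invented. Boards that match no rule fall into the “no trouble found” category that Adept shrank from 38 percent to 3 percent.

    # Invented rules for illustration; Adept's actual rule base is not described.
    RULES = [
        ({"fails_burn_in", "voltage_drop_on_rail"}, "cracked solder joint on power rail"),
        ({"fails_burn_in", "intermittent_at_temperature"}, "marginal component tolerance"),
        ({"fails_functional_test"}, "misloaded or reversed component"),
    ]

    def diagnose(observed_symptoms):
        """Return the first fault whose required symptoms are all present."""
        observed = set(observed_symptoms)
        for required, fault in RULES:
            if required <= observed:        # subset test: every required symptom seen
                return fault
        return "no trouble found"

    print(diagnose({"fails_burn_in", "voltage_drop_on_rail"}))
    print(diagnose({"cosmetic_scratch"}))   # falls through every rule

What made Adept notable was less the structure of such rules than who wrote them: the users themselves, which is the essence of the apprenticeship mode discussed next.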
Apprenticeship Mode

In a few of the thirty-four projects, users assumed total responsibility for integrating the technical expertise required for building a new tool, drawing upon their knowledge of their own work situation. They traveled to the developer site and apprenticed themselves to the tool designer in order to develop and build a system, which they then took back to their own work site.10 Users wanting their own capabilities and independence from developers employed this apprenticeship mode. Developers had to be willing to play the role of tutors rather than providers, and users had to be willing to invest enough time and resources both to become expert in the underlying technology and to implement all the needed changes when they returned to their home territory. The few projects falling into this category succeeded to the degree that those conditions held.

Although all four types of user involvement can succeed if certain conditions are met, only the codevelopment and apprenticeship modes really integrated knowledge from the two very disparate groups-software developers and software users. Moreover, the apprenticeship mode had comparatively limited impact on the user organizations. User apprentices integrated knowledge in their own heads and therefore broadened their personal abilities; they often went on to assume the role of developers. However, the developer group was little altered by having an apprentice work with them for some months, and the user group to which the apprentice returned sometimes rejected the innovative knowledge developed. As a result, the corporation added only minimally to its process capabilities.

In contrast, codevelopment projects forced the developer and user groups to share problem-solving activities, to create and integrate knowledge, and that shared responsibility educated both groups to a better understanding of each other’s worlds. (See Figure 11-3.) Integration occurred at a group rather than an individual level. The developers came to understand the demands of the production process, and the production personnel began to see the potential inherent in the technologies offered them. In a number of cases, the codevelopment teams proceeded to conduct a series of projects together, as their collaboration revealed more opportunities for improving processes. In aggregate, these projects significantly enhanced the corporation’s production capabilities. Not only were advanced proprietary tools created, but some corporate barriers to knowledge integration had been considerably truncated. Codevelopment, in short, had much more effect on the organization’s learning process than the other modes.
MUTUAL ADAPTATION

One of the major reasons that codevelopment more strongly affected organizational capabilities in the study described above was that the shared development process offered an opportunity for the mutual adaptation of both technology and user work environment. (The essence of the current push for “reengineering” in many companies is the notion that organizational redesign and technology design should proceed simultaneously, each informing the other.) Mutual adaptation is the reinvention of the technology to conform to the work environment and the simultaneous adaptation of the organization to use the new technical system. It requires that managers in charge of implementing new technical systems recognize and assume responsibility for both technical and organizational change. Two major aspects of mutual adaptation are: (1) it occurs in small and large recursive spirals of change, and (2) it often requires attention to all four dimensions of capabilities. Recognizing these aspects suggests management levers that can be used to improve the contribution of new technical systems to technological capabilities.
FIGURE 11-3 Responsibility for Knowledge Creation in the Four Modes of Tool Implementation (the legend distinguishes developer responsibility, user responsibility, and shared responsibility)
Adaptive Spirals of Change-Small to Large
The process of adaptation requires a revisiting of prior decision points-reopening issues of technical design that the developers had assumed were resolved and also “unfreezing” organizational routine.11 These are spirals rather than cycles because the decision to be reconsidered is never exactly the same one made earlier. The decision context has altered, given the passage of time, external events, learning effects, and so on. The adaptive spirals vary in magnitude, depending on how fundamental is the change to be made. In the case of technology adaptation, a small spiral entails fine-tuning the new technical system; a large one may entail the developers’ returning to the drawing boards and perhaps even redefining the problem to be addressed. Similarly, a small adaptive cycle in organizational redesign may require merely altering a particular role or task. In contrast, a large one implies a strategic shift for the plant, the office, perhaps the whole division since it means rethinking the critical success factors by which performance is judged.

A significant challenge managers face is detecting when a large spiral is masquerading as a small one; that is, when a series of small adaptive spirals is inadequate to create or support an important technological capability and a large spiral of change is required-in either the technical system, the work environment, or both. In such cases, the manager is, perhaps unwittingly, thrust into the role of revolutionary organizational redesigner.
This role is particularly tricky if most people in the organization at least tacitly assume that only incremental changes are required. Recall that the jumping ring circulator described in Chapter 2 appeared to be a rather minor upgrading of current melting capabilities. The project apparently required merely installing an additional piece of equipment within an existing furnace and slightly retraining operators to use it-small spirals of change. Yet, in fact, in order to successfully implement this new capability, the equipment developers would have had to undertake large spirals of change, including redesigning the JRC for use in aluminum foundries as opposed to steel. Moreover, factory managers would have had to rethink the way they were producing aluminum, including the incentive system under which their operators worked and the use downstream of molten aluminum that contained the “contaminants” of recycled can stock. Relatively greater changes on the technology side would have lessened the need for changes in the work environment and vice versa. However, no one involved recognized the magnitude of the changes that would be required to make this project succeed; otherwise, they either would not have undertaken it or would have assigned more resources to its implementation.

In the case of CONFIG, described below, the designers had quite a few resources. Although they were told repeatedly that the basic design of the system was flawed, they applied those resources of time and software-engineering skill to incremental, small-spiral improvements in the belief that such changes would eventually add up to a successful system. They struggled along with enough funding and support to make steady incremental improvements but without the influence, the vision, or the resources to make the large-spiral changes required.
CONFIG

CONFIG was a software system designed to help salespeople select from literally thousands of possible combinations those component parts of a complex computer system that would satisfy their customers’ needs. CONFIG promised to enhance the completeness and accuracy of the sales orders and thereby avoid costly configuration errors. For instance, sales representatives often forgot to include in the order cables or connectors-which then had to be included at no cost to the customer. Or the representatives inadvertently suggested to the customer a particular linkage of systems that were actually incompatible or redundant; when the mistake was discovered in the assembly plant, the order had to be completely revised. So CONFIG offered considerable potential financial benefit to the corporation.

However, the system offered little direct benefit to the sales representatives. They were not paid on the basis of the accuracy of their orders-only on their sales volume. Nor did it aid them in their configuration task as they conducted it. They needed to be able to cite cost information to the customers along with the description of component parts and often reworked the configuration several times during negotiations. CONFIG offered no cost information, and because it was designed for a linear, sequential transaction, it was not well suited to the highly iterative way the sales representatives worked to converge upon an acceptable design.

The developers of CONFIG spent a total of eight years improving it incrementally. However, they were never able to address either of its most fundamental flaws: (1) it did not fit the configuration task as actually performed by sales representatives, and (2) the sales representatives’ performance criteria did not include any reward for accuracy and completeness. An application support specialist for the program observed: “The people responsible for developing CONFIG are trying to breathe life into something that should be allowed to die. They have to start fresh-instead of building on top of what they have now. . . . CONFIG . . . has failed miserably. The problem is, nobody wants to shoot it in the head.”12
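The two configuration errors CONFIG was meant to catch-missing required parts and incompatible combinations-are the standard checks of a constraint-based configurator, and a minimal sketch makes them concrete. The part names and rules below are invented, since the text gives no detail of CONFIG’s internals; note that the sketch, like CONFIG itself, omits the cost data and iterative rework the sales representatives actually needed.

    # Invented catalog rules for illustration only.
    REQUIRES = {"disk_array": {"scsi_cable", "array_controller"}}
    INCOMPATIBLE = [("controller_a", "controller_b")]

    def check_order(parts):
        """Report missing required parts and incompatible pairs in an order."""
        order = set(parts)
        errors = []
        for part in order:
            for needed in sorted(REQUIRES.get(part, set()) - order):
                errors.append(f"{part} requires {needed}, which is not on the order")
        for a, b in INCOMPATIBLE:
            if a in order and b in order:
                errors.append(f"{a} and {b} cannot be combined")
        return errors

    print(check_order(["disk_array", "array_controller"]))
    # ['disk_array requires scsi_cable, which is not on the order']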
The developers of CONFIG might have profited from diagnosing the misalignment between their system and the organization, considering all four dimensions of a capability. For CONFIG to succeed, managers should have revisited some very basic design decisions underlying the software architecture; the system did not tie into other physical systems critical to the sales representatives but was modeled on, and intimately linked to, a manufacturing system. Second, the managers would have had to influence the sales organization's managerial systems, changing incentive schemes to reward the completeness of orders-not just their quantity. Either of these two changes represented a large undertaking since, in many physical systems and organizations, there is a design hierar~hy;'~ once certain basic design decisions are made, all other, more minor decisions flow from, and are subordinate to, them. The CONFIG system was to be used for verifying the configuration (a function more critical to assembly than to sales): that was the design decision. All subsequent design decisions flowed logically from it. To revisit the system's basic concept and rethink it from the sales perspective-as a tool to aid the selling process-would have required a large-spiral adaptation: cycling back to revisit and revise basic assumptions. Similarly, order accuracy was not central to the sales organization's mission, and to convince that body that it should be meant enlarging the scope of responsibility that sales assumed. The other two dimensions of a capability were somewhat less affected. However, although providing sales representatives with the skills and knowledge to use the system posed little difficulty, the apparent irrelevance of configuration to sales work meant that there was a definite conflict between the values embodied by CONFIG and those of the sales force. Implementing the kind of large-spiral organizational adaptation required to make CONFIG successful was impossible from outside the sales organization, and the CONFIG developers had no powerful advocate within it. In the eight-year life of CONFIG, several opportunities occurred to reconsider its design and its misalignment with the sales operations-e.g., during budgeting cycles and large-scale organizational restructuring. Research on implementation suggests that such opportunities occur more than once in the life of most new technical systems, not just at the point of initial introduction." How-
However, to start over so visibly would be tantamount to admitting monumental failure. Instead, the managers continued to tinker with the user interface, with user support-investing in all manner of seemingly significant improvements that nevertheless avoided the central flaws. The CONFIG managers were not unusual in their escalating commitment to a flawed process "improvement" and their inability to distinguish the character of large-spiral adaptation from an aggregation of small spirals.
PACING AND CELEBRATION: REFILLING THE BANK

One of the challenges of managing implementation and learning is the extra effort required, which draws upon people's energy levels and their self-esteem. This process resembles the gradual depletion of a bank of energy and self-esteem. Every time we are asked to learn something new, we challenge our old bases of self-esteem, especially if the innovation threatens our signature skills (as described in the previous chapter). Because implementation of new processes and tools involves a high degree of uncertainty, it is energy-sapping almost no matter what the outcome. Therefore, we draw down that bank. (See Figure 11-4.)

FIGURE 11-4 Bank of Energy and Self-Esteem (axes: levels of energy and self-esteem; balance of employee energy and self-esteem remaining)

Eventually, if we continue to withdraw yet make no deposits, the bank runs dry.15 In such situations, we see people depressed at the thought of more innovation. "I'm not opposed to change," an engineer beleaguered by a series of process innovations within a few months once commented. "I just can't figure out how to handle this much change-and still maintain my sanity."

To combat experimentation burnout, managers need to slow the outflow and replenish the bank. They can: (1) pace the changes insofar as possible and (2) celebrate small successes and milestones along the way.

Owens-Illinois managers were puzzled when they studied two plants implementing the same new high-speed, highly automated bottle-forming equipment. Their Atlanta plant seemed to absorb the innovation much more readily than their Streator plant. Atlanta not only had the new "ten-quad" machine up and running sooner but achieved higher productivity with it. Although there were a number of differences between the two plants, one significant variation was the way that the innovation champion at Atlanta controlled the pace of introduction. Instead of acquiescing to the pace dictated by the corporate engineering center, he delayed acceptance of the new equipment until he felt that the workers were sufficiently comfortable with the last wave of new equipment.16 Of course, such pacing may be viewed as a luxury not to be contemplated in these days of hectic innovation. However, if the price of speed is burnout of key employees, managers may have to learn to pace innovation.

The second lever managers can pull to replenish the bank is the celebration of small successes along the way. In 1986, when Beth Reuthe took over Digital Equipment's Augusta plant as manager, she found people "walking around bent over as if they had been hammered down with a croquet mallet." They had just undergone a very traumatic reengineering project driven mostly by the need to switch to a new manufacturing resource planning process (MRP II). A number of signature skills had been challenged and jobs altered. The Augusta plant was no
stranger to innovation; more than 70 percent of the products being manufactured had been introduced within the past twelve months. However, Reuthe needed to introduce yet more drastic change in order to significantly reduce the time that a product spent as work in process. She conducted a number of participatory exercises to gain commitment and to include everyone in the planning process. Then a just-in-time system was introduced, with all departments asked to innovate and experiment toward the ultimate goal of bringing down inventory. For six months, people experimented, even coming in on their own time to brainstorm or tinker. Some of the innovations significantly reduced inventory; others had little effect, despite people's best efforts. However, Reuthe decided to close the plant for a half-day to "recharge everyone's batteries." It was like a plant fair-a celebration of progress. Everyone presented to peers the experiments he or she had conducted. No matter how small the improvement, it was recognized. The plant went on to exceed the goals set for cycle time, reducing it to five days rather than the initial target of fifteen. Even more important to Reuthe was employees' experiencing change as a positive factor in their work.
SUMMARY

Integrating proprietary knowledge into process tools and methods potentially offers a competitive edge. However, the implementation of such tools must be managed as an innovation project-not just the execution of plans, however
carefully made. Nor is all the requisite knowledge likely to be held in one location or one set of heads. Users of process tools provide critical information to be integrated during design. However, user involvement must be carefully managed, as extracting knowledge from atypical, disinterested, or very near-term-oriented users can damage rather than enhance the design of a new process tool. A study of thirty-four development projects suggests that active codevelopment of tools with users is not only more efficient but far more effective. Moreover, the greatest competitive advantage likely comes from a process of mutual adaptation-adapting both the technology to the user environment and the user environment to the technology so as to exploit its full potential. Managers who attend to these two activities of user involvement and mutual adaptation are more likely to reap significant and lasting benefits from process innovation.

Managers also need to avoid "burning out" their employees with uncontrolled change. The managers' challenge is, in the midst of the whirlwind of daily activities, to keep in mind the potential effect of their every action and behavior upon the growth of technological capabilities in the firm.

In this chapter, we have examined in depth the way that the development of new tools and processes can be managed to maximize learning and to counter the unconscious accretion of core rigidities. In the next chapter, we look at another pair of activities, deliberate experimentation and prototyping, that create knowledge assets.
NOTES

1. Clark, "Overhead Costs in Modern Industry," Journal of Political Economy (1927), quoted in Bohle 1967.
2. 1993.
3. The process of mutual adaptation was first noted in a ten-project study by Leonard-Barton (1988).
4. See Coch and French 1948 and, more recently, Locke and Schweiger 1979.
5. See von Hippel's 1994 discussion of the problems caused by this separation of knowledge and the "stickiness" of information.
6. For example, one experiment resulted in the finding that an "alternative" approach to software system design that explicitly draws upon the users' managerial "mental schemes" as well as on the software designers' schemes produced a much richer and more inclusive design (Boland 1978). Other researchers report finding no evidence of improved output from a participative process (Ives and Olson 1984) or even negative relationships (Edstrom 1977). Ives and Olson point out that, at least for software, the hypothesis that a "system may be implemented successfully without user involvement" has been largely ignored in the information systems literature (1984, 600) and conclude that most research on the topic has been so flawed that "the benefits of user involvement have not been convincingly demonstrated" (586).
7. This case involves one of the two firms whose implementation of software was examined in the Alpha and Beta corporations study; see the box in Chapter 5.
8. Doll and Torkzadeh 1989.
9. For a discussion of user involvement, see Leonard-Barton and Sinha 1993.
10. Von Hippel (1994) describes task partitioning in the creation of ASICs (semicustomized chips). The manufacturer encoded production information in a user-friendly CAD package that customers could use to customize chip specifications to their own needs, working within the constraints of the chip foundry equipment. The situation described here sounds somewhat similar, but there are two critical differences: in the apprenticeship mode, the users design and produce the product; moreover, the users learn the developers' knowledge base.
11. On the unfreezing of organizational routines, see Lewin and Grabbe 1962.
12. Keil 1992, 20.
13. Clark 1985. See also the discussion of design hierarchy in Chapter 7.
14. Tyre and Orlikowski 1994.
15. Employee "burnout" is hypothesized to comprise three components: (1) emotional exhaustion (a lack of energy and a sense that one's emotional resources are depleted); (2) a diminished sense of personal accomplishment (a tendency to evaluate oneself negatively); and (3) depersonalization (a tendency to treat others as objects rather than people). The first two of these are likely to be associated with the stress of fast-paced learning. See Cordes and Dougherty 1993 for a review of the literature on job-related burnout.
16. Goldstein and Klein 1988.
12

Learning from Notes: Organizational Issues in Groupware Implementation

Wanda J. Orlikowski

From The Information Society, 9(3), 1993, 237-250, Wanda J. Orlikowski, Taylor & Francis, Inc., Washington, DC. Reproduced with permission. All rights reserved.
INTRODUCTION

Computer-supported cooperative work, collaborative computing, and groupware have become common labels in our contemporary technological vocabulary. While some have discussed the potential for such technologies to enhance organizational effectiveness (Dyson 1990; Govoni 1992; PC Week 1991; Marshak 1990), others have suggested that the implementation of such technologies is more difficult and yields more unintended consequences than is typically acknowledged (Bullen and Bennett 1990; Grudin 1988; Kiesler 1986; Kling 1991; Perin 1991). Empirical studies of groupware usage in organizations are clearly needed to shed light on these diverse expectations. While there have been many field studies of electronic mail usage (Bair and Gale 1988; Eveland and Bikson 1986; Feldman 1987; Finholt and Sproull 1990; Mackay 1988; Markus 1987; Sproull and Kiesler 1986), groupware technologies (that include more collaborative features than electronic mail) have been studied less frequently. In this paper I describe the findings of an exploratory field study that examined the implementation of the groupware product Notes® (from Lotus Development Corporation) into one office of a large organization. [Notes supports communication, coordination, and collaboration within groups or organizations through such features as electronic mail, computer conferences, shared data bases, and customized views. See Marshak (1990) for more details on the product.] My interest in studying the implementation of this product was to investigate whether and how the use of a collaborative tool changes the nature of work and the pattern of social interactions in the office, and with what intended and unintended consequences. Two organizational elements seem especially relevant in influencing
the effective utilization of groupware: people's cognitions, or mental models, about technology and their work, and the structural properties of the organization, such as policies, norms, and reward systems. The findings suggest that where people's mental models do not reflect an understanding or appreciation of the collaborative nature of groupware, such technologies will be interpreted and used as if they were more familiar technologies, such as personal, stand-alone software (e.g., a spreadsheet or word processing program). Also, where the premises underlying the groupware technology (shared effort, cooperation, collaboration) are countercultural to an organization's structural properties (competitive and individualistic culture, rigid hierarchy, etc.), the technology will be unlikely to facilitate collective use and value. That is, where there are few incentives or norms for cooperating or sharing expertise, groupware technology alone cannot engender these. Conversely, where the structural properties do support shared effort, cooperation, and collaboration, it is likely that the technology will be used collaboratively, that is, it will be another medium within which those values and norms are expressed. Recognizing the significant influence of these organizational elements appears critical to groupware developers, users, and researchers.
RESEARCH SITE AND METHODS

Field work was conducted within a large services firm, Alpha Corporation (a pseudonym), which provides consulting services to clients around the world. The career structure within Alpha is hierarchical, the four primary milestones being staff consultant, senior consultant, manager, and principal. In contrast to the pyramidal career structure, the firm operates through a matrix form, where client work is executed and managed in a decentralized fashion out of local offices, while being coordinated through consulting practice management centralized in the headquarters office. A few years ago, Alpha purchased and distributed Notes to all their consultants and support staff as part of a strategy, described by a senior principal as an attempt to "leverage the expertise of our firm."

My research study examined the implementation of Notes in one large office of Alpha (henceforth referred to simply as "the office") over a period of five months. Detailed data collection was conducted through unstructured interviews, review of office documents, and observation of meetings, work sessions, and training classes. More than ninety interviews were conducted, each about an hour in length, and some participants were interviewed more than once over the period of study. In addition to the office where the study was conducted, I interviewed key players from Alpha's headquarters and technology group. Participants spanned various hierarchical levels and were either consultants in active practice, administrators supporting practice activities, or members of the centralized technology support function (see Table 12-1). The research study was designed to examine how the groupware technology is adopted and used by individuals, and how work and social relations change as a consequence. The research study began in February 1991, before the Notes
TABLE 12-1 Number and Type of Interviews in Alpha

                  Practice   Technology   Total
Principals            13          4          17
Managers              26         15          41
Seniors               12         13          25
Support staff          8          -           8
Total                 59         32          91
system was due to be installed within the office, and continued through the implementation and early use (June 1991). The findings reflect participants' anticipations of as well as their early exposure to Notes.1 These findings need to be interpreted cautiously, as they only reflect the adoption and early-use experiences of a sample of individuals within a specific office in what is a larger implementation process continuing over time in Alpha. While early, the findings to date are interesting, as they reflect people's initial experiences and assessments of Notes in light of their current work practices and assumptions about technology. The initial period following the implementation of a technology is typically a brief and rare opportunity for users to examine and think about the technology as a discrete artifact, before it is assimilated into cognitive habits and work practices, and disappears from view (Tyre and Orlikowski, in press). It is possible that with time, greater use, and appropriate circumstances, these early experiences will change.

1. This research study represents the first of a series of studies that are being conducted within Alpha over time. Further analyses and observations are thus anticipated.
RESEARCH RESULTS

Background to the Notes Acquisition

In the late eighties, a few senior principals realized that Alpha, relative to its competitors and its clients' expectations, was not utilizing information technology as effectively as they could. In response, they commissioned an internal study of the firm's technological capabilities, weaknesses, and requirements. On the basis of this study's recommendations, a new and powerful position-akin to that of a chief information officer (CIO)-was created within Alpha with responsibility for the firm's internal use of information technology. One of the first tasks the new CIO took on was the creation of firm-wide standards for the personal computing environments utilized in Alpha offices. It was while reviewing communication software that the CIO was introduced to the Notes groupware system. As he remarked later, after a few days of "playing with Notes," he quickly realized that it
was “a breakthrough technology,” with the potential to create “a revolution” in how members of Alpha communicated and coordinated their activities. Shortly thereafter, the CIO acquired a site license to install Notes throughout the firm and announced that the product would be Alpha’s communications standard. The CIO began to market Notes energetically within various arenas of the firm. He gave numerous talks to principals and managers, both at national meetings and in local offices, during which he promoted his vision of how Notes “can help us manage our expertise and transform our practice.” Through interest and persuasion, demand for Notes grew, and the physical deployment of the technology proceeded rapidly throughout the firm. The actual use of Notes within the office I studied, however, appeared to be advancing more slowly. While electronic mail usage had been adopted widely and enthusiastically, the use of Notes to share expertise, and the integration of Notes into work practices and policies, had not yet been accomplished. The data I collected and analyzed during my field study of one office suggest that at least two organizational elements-cognitive and structural-influenced the participants’ adoption, understanding, and early use of Notes.
Cognitive Elements

Cognitive elements are the mental models or frames of reference that individuals have about the world, their organization, work, technology, and so on. While these frames are held by individuals, many assumptions and values constituting the frames tend to be shared with others. Such sharing of cognitions is facilitated by common educational and professional backgrounds, work experiences, and regular interaction. In the context of groupware, those cognitive elements that have to do with information technology become particularly salient. Elsewhere, I have termed these technological frames and described how they shape the way information technology is designed and used in organizations (Gash and Orlikowski 1991). When confronted with a new technology, individuals try to understand it in terms of their existing technological frames, often augmenting these frames to accommodate special aspects of the technology. If the technology is sufficiently different, however, these existing frames may be inappropriate, and individuals will need to significantly modify their technological frames in order to understand or interact effectively with the new technology. How users change their technological frames in response to a new technology is influenced by (1) the kind and amount of information about the product communicated to them, and (2) the nature and form of training they receive on the product.

Communication about Notes
Employees in the office I studied received relatively little communication about Notes. Many of them first heard about the CIO's decision to standardize on Notes through the trade press. Others encountered it during Alpha's annual
management seminars that form part of consultants' continuing education program. Most encountered it for the first time when it was installed on their computers. Without explicit information about what Notes is and why Alpha had purchased it, these individuals were left to make their own assumptions about the technology and why it was being distributed. This contributed to weakly developed technological frames around Notes in the office. Consider, for example, these remarks made by individuals a few weeks before Notes was to be installed on their computers:

I know absolutely nothing about Notes. I don't know what it is supposed to do. All I know is that the firm bought it, but I don't know why.

I first heard that the firm had bought Notes through the Wall Street Journal. Then your study was the next mention of it. That's all I know about it.

I heard about Notes at the [management seminars] about eight months ago. I still don't know what it is. It has something to do with communications.

It's big email. I've heard that it's hard copy of email . . . but I am not very clear about what it is exactly.

I understand that it makes your work environment paperless. It's like taking all your files-your library of information in your office-and putting it on the computer.

I believe Notes is putting word processing power into spreadsheets. Is it a new version of 1-2-3?

It's a network . . . but I don't know how the network works. Where does all this information go after I switch my machine off?

It's a database housed somewhere in the center of the universe.

Weakly developed technological frames of a new and different technology are a significant problem in technology transfer because people act toward technology on the basis of the meaning it has for them. If people have a poor or inappropriate understanding of the unique and different features of a new technology, they may resist using it or may not integrate it appropriately into their work practices. In the office, one consequence of such poor understanding was a skepticism toward Notes and its capabilities. For example, principals and managers in the office commented:
I first heard about Notes when I read in the Wall Street Journal that Alpha had purchased a revolutionary new piece of software. My first thought was-how much is this costing me personally? . . . [T]his kind of implementation affects all of our pocketbooks. . . . I have [heard that] there is no value in information technology-so you can imagine how I feel!

When I first heard about it, I thought "Oh yeah? First hook me up to the network, and then I'll listen." Right now I still can't see the benefit.
KNOWLEDGE MANAGEMENT TOOLS 1 don’t believe that Notes will help our business that much, unless all of our business is information transfer. It S not. Business is based on relationships.
Ideas are created in nonwork situations, socially, over lunch, et cetera.

Poor circulation of information about Notes was a consequence of the rapid installation of Notes that Alpha had pursued. The CIO had delegated responsibility for Notes deployment to the firm's technology group. Because demand for Notes was growing quickly, the technologists did not have an opportunity to plan the Notes rollout and did not develop or pursue a formal implementation plan or information dissemination strategy. Two technology managers commented:

We tried to stay one step ahead of the firm's demand and [the CIO's] evangelism. We were swamped with requests. Every time [the CIO] gave a talk, we'd be deluged with requests for Notes. . . . We had no time to do a formal plan or a grand strategy because [the CIO] had raised the level of enthusiasm in the firm, and there was no way we could say to the principals "wait while we get our act together."
[The CIO] set the tone for the deployment strategy by generating interest in the product at the top. He was pushing a top-down approach, getting to all the principals first. So our deployment was driven by a lot of user pull and a little push from us. . . . We were constantly struggling to keep up with demand.

This rapid, demand-driven rollout was consistent with the CIO's assumption about how technologies such as Notes should be implemented. He commented that:
Our strategy was to blast Notes through our organization as quickly as possible, with no prototypes, no pilots, no lengthy technical evaluation. We want to transform the way we deliver service to clients.

He believed that an "empowering" technology such as Notes should be distributed rapidly to as many people as possible, and that if the technology is compelling enough "they will drift into new ways of doing things." That is,
[I]f you believe that Notes is a competitive technology, you have to deploy it quickly, and put it in the hands of the users as fast as possible. Critical mass is key.

In particular, the CIO focused on convincing the key "opinion leaders" in the firm of the value of the technology, as he believed that these individuals would lead the charge in defining and spreading the uses of Notes throughout the firm.
Training on Notes

Training users on new technology is central to their understanding of its capabilities and appreciating how it differs from other technologies with which they are familiar. It also significantly influences the augmentation of existing technological frames or the development of new ones. Because the technologists were extremely busy deploying Notes and keeping it up and running, they did not have the resources to pay much attention to the education of users. Their first priority was to physically install hundreds of copies of Notes in multiple offices around the country and keep them operational. As one technology manager noted, it was a matter of priorities:

We made a conscious decision between whether we should throw it [Notes] to the users versus spending a lot of time training. We decided on the former.
The underemphasis on training was consistent with the CIO's general view that Notes does not require formal end-user training, and that it is through experimentation and use, not formal education programs, that people begin to appreciate a technology's potential and learn to use it in different and interesting ways. This user-driven diffusion strategy, however, typically takes time, particularly in a busy services firm with considerable production pressures. Because this study did not detect any new user initiatives around the use of Notes in the office, it is possible that the timing of the research is simply too early in the implementation process. The following experiences thus represent the first encounters consultants had with Notes and how they initially appropriated it.

The training that was made available to users in the office I studied came in two forms, self-study and classroom training. The former provided users with a videotape and workbook, and covered Notes' basic functions and interfaces. The latter offered up to four hours of instruction and hands-on exercises by local computer support personnel. None of these training options emphasized the business value of Notes or its collaborative nature as groupware. The training materials were relatively technical, individual-oriented, and nonspecific in content. Trainees were exposed to the basic Notes functions such as electronic mail, editing, and database browsing. While facilitating the accessibility of the material to all individuals, from secretaries to principals, this "one size fits all" training strategy had the effect-at least initially-of not conveying the power of Notes to support specific consulting applications or group coordination. This training on Notes resembled the training conducted on personal productivity tools. While useful for teaching the mechanics of Notes, it did not give users a new way of thinking about their work in terms of groupware. While Alpha was less concerned with collaborative or group work than with sharing expertise across the firm, the effect of the initial training was that participants in my study attempted to understand Notes through their existing frame of personal computing software. Such interpretations encouraged thinking about Notes as an individual productivity tool rather than as a collaborative technology or a forum for sharing ideas. For example, one manager noted,
I see Notes as a personal communication tool. That is, with a modem and fax applications I can do work at home or at a client site and use Notes to transfer work back and forth. In the office, instead of getting my secretary to make twenty copies of a memo, she can just push a button.

Further, the applications built for users by the technology group tended to automate existing information flows rather than creating new ones through the cooperative features of Notes. This reinforced the message that users received in their training, that Notes is an incremental rather than a transforming technology, and that new technological frames or new work practices around it are not required. Thus, in contrast to the technologists' vision of Notes as a technology that can "fundamentally change the way we do business," consultants in the office appeared to expect, at most, an incremental improvement in operations. One manager commented,

The general perception of Notes is that it is an efficient tool, making what we do now better, but it is not viewed by the organization as a major change. Remember we're . . . a management consulting firm and management consultants stick to management issues. We don't get into technology issues.

Another said,
I think it will reduce the time of gathering information. I think it will cut down on frustration in transferring information. But it is not a radical change.

As a result of the lack of resources that technologists had for communication and training, users of Notes in the office developed technological frames that either had weakly developed notions of Notes, or that interpreted Notes as a personal rather than a group or firm productivity tool. Because technological frames may change over time and with changing contexts, it is possible that the frames developed by the office participants will change over time. For example, if individuals are exposed to other applications of Notes developed elsewhere in the firm or in other firms, or if their increased use of Notes helps them understand how they can change the way they work, new understandings and uses of Notes may result. Our ongoing study of this implementation process will monitor such possible developments.
Structural Elements

Structural properties of organizations encompass the reward systems, policies, work practices, and norms that shape and are shaped by the everyday action of organizational members. In the office, three such structural properties significantly influenced individuals' perceptions and early use of Notes.
Reward systems

Within Alpha there is an expectation-shared by many services firms-that all or most employee hours should be "billable," that is, charged to clients. This is a major evaluation criterion on which employees are assessed, and employees studiously avoid "nonbillable hours." Because most of the participants did not initially perceive using Notes as a client-related activity (and hence as "not chargeable"), they were disinclined to spend time on it. Further, given their lack of understanding and skepticism of Notes, they were unwilling to give up personal time to learn or use it. Consider these comments from senior consultants and managers:

One of the problems is time. Given my billing rate, it makes no sense for me to take the time to learn the technology.

In Alpha we put so much emphasis on chargeable hours and that puts a lot of pressure on managers. . . . And now we've made an enormous commitment to Notes and hardware and LANs, but we haven't given people the time and opportunity to learn it. For them to do classes, they have to work extra on weekends to meet deadlines.
I think it is going to be a real issue to find time to use Notes. We don't have the time to read or enter information in Notes. What would I charge it to? We already complain that we can't charge our reading of our mail to anything. We end up having to charge it to ourselves [he reads his mail on the train going home].
The whole focus in this firm is on client service, and we are not going to tell the clients to chill out for a week while we all get trained on Notes.

I don't think that Notes will ever be used in Alpha as effectively as it could be. We're not going to make sure everyone in the office has fifteen hours over the next year to spend time learning it. And if they expect us to take it out of our own time, I'm not going to invest that time. I have another life too.
The opportunity costs for me to take training in the office are very high. At my level, every week is a deadline, every week is a crisis. No accommodations are made in our schedules or workload to allow us to train on technology. So I won't learn it unless it's mandatory.

Thus, one significant inhibitor of learning and using Notes was the office's reward system with its accompanying incentive schemes and evaluation criteria. Because the reward system had not changed since the implementation of Notes, consultants in the office perceived time spent on Notes as less legitimate than client work. While many used Notes for electronic mail or database browsing, these activities amounted to a few minutes a day, and hence were easily subsumed into client or personal time. However, any more extensive use of Notes was seen as
potentially disrupting the balance between billable hours and personal time, and hence to be avoided. These concerns, however, varied by position in the office. Not surprisingly, principals were willing to take a longer-term and firm-wide perspective on Notes, being less preoccupied than were managers and senior consultants with time constraints, "billable hours," personal performance, and their own careers.
Policies and procedures

Along with the few resources dedicated to Notes training and communication, the office-at the time of my study-had not formulated new work procedures or set new policies around data quality, confidentiality, and access control. Many participants indicated that their use of Notes was inhibited by their lack of knowledge about these issues, particularly concerns about liability (their own and Alpha's). Principals, for example, worried about data security:
Security is a concern for me. . . . We need to worry about who is seeing the data. . . . Managers should not be able to access all the information even if it is useful [such as] financial information to clients, because they leave and may go and work for competitors. So there should be prohibitions on information access.
I am not sure how secure Notes is. Many times we have run into difficulties, and things have gotten lost in never-never land. I have concerns about what goes into the databases and who has access to them and what access they have. . . . But we haven't thought that through yet.

Managers and senior consultants in the office were more anxious about personal liability or embarrassment. For example,

I would be careful what I put out on Notes though. I like to retain personal control so that when people call me I can tell them not to use it for such and such. But there is no such control within Notes.
My other concern is that information changes a lot. So if I put out a memo saying X today and then have a new memo two weeks later, the person accessing the information may not know about the second memo which had canceled the first. Also if you had a personal discussion, you could explain the caveats and the interpretations and how they should and shouldn't use the information.

I'd be more fearful that I'd put something out there and it was wrong and somebody would catch it.
I would be concerned in using Notes that I would come to the wrong
conclusion and others would see it. What would make me worry is that it was public information and people were using it and what if it was wrong?

I would not want to be cited by someone who hasn't talked to me first. I'm worried that my information would be misconstrued and it would end up in Wichita, Kansas . . . "as per J. Brown in New York . . ." being used and relied on.

You should be able to limit what access people have to what information, particularly if it is your information. I would definitely want to know who was looking at it.
There is a hesitancy here because you don't want to put everything into public information, as people may rely on that information and screw up, and it may reflect badly on you.

The lack of explicit procedures and policies around Notes highlights the difficulty of enforcing firm-wide policies in a decentralized firm. While the CIO has been able to institute standards around certain technology platforms-clearly, a technical domain-instituting standard procedures and policies about data quality, control, and liability begins to encroach on the organizational domain-an arena where the CIO's authority is less established. As a result, the technologists have been careful about setting policies that would require organizational changes and that might invoke turf issues. The management of local offices, however, had not devoted any attention to this issue, at least in the early adoption phase. As a result, there was some ambiguity about the locus and nature of responsibility and liability with respect to the intellectual content of Notes databases. This may have inhibited the application of Notes to a broader range of work practices in the early phase of its implementation.
Firm culture and work norms

Alpha shares with many other consulting firms a relatively competitive culture-at least at the levels below principal. The pyramidal structure and the hierarchical "up or out" career path promote and reinforce an individualistic culture among consultants, where those who have not yet attained principal status vie with each other to get the relatively few promotions handed out each year. In such a competitive culture, there are few norms around cooperating or sharing knowledge with peers. These comments by consultants in the office are illustrative.
This is definitely a competitive culture-it's an up or out atmosphere.

Usually managers work alone because of the competitiveness among the managers. There is a lot of one-upmanship against each other. Their life dream is to become a principal in Alpha, and they'll do anything to get there.
The atmosphere is competitive and cut-throat; all they want is to get ahead as individuals.

Interestingly, there was some evidence that there is much more collegiality at the highest levels of the firm, where principals-having attained tenure and the highest career rank-enact more of a "fraternal culture" than the competitive individualism evident at lower levels. This is consistent with research conducted in other service organizations with similar organizational structures, such as firms providing legal, accounting, or medical services (Greenwood, Hinings, and Brown 1990). Below the principal level, however, managers and senior consultants in my study indicated that there was generally little precedent for sharing or cooperating with colleagues and little incentive to do so, as they needed to differentiate themselves from their peers. For example,

The corporate psychology makes the use of Notes difficult, particularly the consultant career path, which creates a backstabbing and aggressive environment. People aren't backstabbing consciously; it's just that the environment makes people maximize opportunities for themselves.

I'm trying to develop an area of expertise that makes me stand out. If I shared that with you, you'd get the credit not me. . . . It's really a cut-throat environment.

Individual self-promotion is based on the information [individuals] have. You don't see a lot of two-way, open discussions.

Power in this firm is your client base and technical ability. . . . It is definitely a function of consulting firms. Now if you put all this information in a Notes database, you lose power. There will be nothing that's privy to you, so you will lose power. It's important that I am selling something that no one else has. When I hear people talk about the importance of sharing expertise in the firm, I say, "Reality is a nice construct."

The competitive individualism-which reinforces individual effort and ability, and does not support cooperation or sharing of expertise-is countercultural to the underlying premise of groupware technologies such as Notes. It is thus not surprising that, at all but the highest career level, Notes is being utilized largely as an individual productivity tool in the office. Senior consultants and managers within this office feel little incentive to share their ideas for fear that they may lose status, power, and distinctive competence. Principals, on the other hand, do not share this fear and are more focused on the interests of the office and the firm than on their individual careers. An interesting contrast to this point, which further supports it, is that Notes is apparently being used by Alpha technologists to exchange technical expertise. Not being subject to the competitive culture, individual-focused reward systems, "up-or-out" career pressures, and "chargeable
hours” constraints of the consultants, the technologists appear to have been able to use the technology to conduct their work, namely, solving technical problems.
DISCUSSION

The results of this research study suggest that the organizational introduction of groupware will interact with cognitive and structural elements, and that these elements will have significant implications for the adoption, understanding, and early use of the technology. Because people act toward technology on the basis of their understanding of it, people's technological frames often need to be changed to accommodate a new technology. Where people do not appreciate the premises and purposes of a technology, they may use it in less effective ways. A major premise underlying groupware is the coordination of activities and people across time and space. For many users, such a premise may represent a radically different understanding of technology than they have experienced before. This suggests that a particularly central aspect of implementing groupware is ensuring that prospective users have an appropriate understanding of the technology, that is, that their technological frames reflect a perception of the technology as a collective rather than a personal tool.

At the time I conducted my study, many of the participants in the office did not have a good conception of what Notes was and how they could use it. Their technological frames around Notes were weakly developed and relied heavily on their knowledge and experience of other individually used technologies. Given such cognitions, it is not surprising that in their early use of the technology, these participants had not generated new patterns of social interaction, nor had they developed fundamentally different work practices around Notes. Instead, they had either chosen not to use Notes, or had subsumed it within prior technological frames and were using it primarily to enhance personal productivity through electronic mail, file transfer, or accessing news services. As indicated above, however, these findings reflect an early phase of the participants' experiences with Notes. It is possible that these experiences will change over time, as participants become more accustomed to using Notes and depending on their ongoing experiences with the technology.

Where a new technological frame is desirable because the technology is sufficiently unprecedented to require new assumptions and meanings, communication and education are central in fostering the development of new technological frames. Such communication and education should stress the required shift in technological frame, as well as provide technical and logistic information on use. A training approach that resembles that used for personal computing software is unlikely to help individuals develop a good understanding of groupware. For individuals used to personal computing environments and personal applications, shared technology use and cooperative applications are difficult to grasp. In these cases, meaningful concrete demonstrations of such applications can help to provide insight. Further, if individuals are to use groupware within a specific group,
learning such a technology collectively may foster joint understanding and expectations. Where individuals learn a shared technology in isolation, they may form their own assumptions, expectations, and procedures, which may differ from those of the people they will interact with through the technology.

In those organizations where the premises underlying groupware are incongruent with those of the organization's culture, policies, and reward systems, it is unlikely that effective cooperative computing will result without a change in structural properties. Such changes are difficult to accomplish and usually meet with resistance. Without such changes, however, the existing structural elements of the firm will likely serve as significant barriers to the desired use of the technology. For example, in the study described above, the existing norms, policies, and rewards appear to be in conflict with the premises of Notes. Because incentive schemes and evaluation criteria in the office had not been modified to encourage or accommodate cooperation and expertise sharing through Notes, members feared loss of power, control, prestige, and promotion opportunities if they shared their ideas, or if their lack of knowledge or misinterpretations were made visible. Thus, in a relatively competitive culture where members are evaluated and rewarded as individuals, there will be few norms for sharing and cooperating. If groupware products such as Notes are to be used cooperatively in such cultures, these norms need to be changed-either inculcated top-down through training, communication, leadership, and structural legitimation, or bottom-up through facilitating local opportunities and supportive environments for experimenting with cooperation and shared experiences. Without some such grounding in shared norms, groupware products will tend to be used primarily for advancing individual productivity.

In addition to norms, resources are a further important facilitator of shared technology use. Whether formally earmarked from some firm-wide R&D budget, or provided informally through local slack resources, occasions for experimenting with shared applications are needed to generate interest and use around cooperative computing. For example, in the office I studied, there had been no change in the allocation of resources following the initial implementation of Notes, and members had not been given time to use and experiment with Notes. There was thus a tension between the structural requirement that all work be production oriented, and the adoption of an infrastructure technology such as Notes, which was perceived to be only indirectly related to production work. Where individuals are not given resources to learn and experiment with the new technology, or not given specific, client-related applications that help them accomplish their production work within the technology, the immediate pressures of daily production tasks and deadlines will tend to dominate their decisions around how they allocate their time.

This research study suggests that in the early adoption of a technology, cognitive and structural elements play an important role in influencing how people think about and assess the value of the technology. And these significantly influence how they choose to use the technology. When an organization deploys a new technology with an intent to make substantial changes in business processes, people's technological frames and the organization's work practices will likely require
substantial change. An interesting issue raised by this requirement is how to anticipate the required structural and cognitive changes when the technology is brand new. That is, how do you devise a game plan if you have never played the game before? This is particularly likely in the case of an unprecedented technology such as groupware. One strategy would be to deploy the technology widely in the belief that through experimentation and use over time, creative ideas and innovations will flourish. Another strategy would be to prototype the technology in a representative group of the organization, on a pilot basis, and then deploy it to the rest of the organization once the technology's capabilities and implications are understood. This way, the required structural and cognitive changes learned through the pilot can be transferred. Viewed in terms of these two strategies, aspects of Alpha's adoption activities now appear to resemble the former strategy. Our future studies should indicate how successful this strategy has been.

It is worth noting that while the early use of Notes in the office has proved more valuable for facilitating individual productivity than for collective productivity, the implementation of Notes has resulted in the installation of an advanced and standardized technology infrastructure. As one technology manager put it, "A side benefit of Notes is that it got people into a more sophisticated environment of computing than we could have done otherwise." Most of the office members, from principals to senior consultants, now have ready and easy access to a network of personal computers and laser printers. Thus, while the initial experiences with Notes in the office may not have significantly changed work practices or policies, the office appears to be relatively well positioned to use this platform to take advantage of any future technological or work-related initiatives.

In general, the findings presented here provide insight for future research into the structural and cognitive organizational elements that interact with and shape the adoption and early use of groupware in organizations. They also have practical implications, indicating how and where such organizational elements might be managed to more effectively implement groupware in various organizational circumstances.
REFERENCES

Bair, James H., and Stephen Gale. 1988. An Investigation of the COORDINATOR as an Example of Computer Supported Cooperative Work. In Proceedings of the Conference on Computer Supported Cooperative Work. The Association for Computing Machinery, New York, NY.

Bullen, Christine V., and John L. Bennett. 1990. Groupware in Practice: An Interpretation of Work Experience. In Proceedings of the Conference on Computer Supported Cooperative Work, 291-302. The Association for Computing Machinery, New York, NY.

Dyson, Esther. 1990. Why Groupware is Gaining Ground. Datamation, March: 52-56.

Eveland, J. D., and T. K. Bikson. 1986. Evolving Electronic Communication Networks: An Empirical Assessment. In Proceedings of the Conference on Computer Supported Cooperative Work, 91-101. The Association for Computing Machinery, New York, NY.
Feldman, Martha S. 1987. Electronic Mail and Weak Ties in Organizations. Office: Technology and People, 3:83-101.

Finholt, Tom, and Lee S. Sproull. 1990. Electronic Groups at Work. Organization Science, 1(1):41-64.

Gash, Debra C., and Wanda J. Orlikowski. 1991. Changing Frames: Towards an Understanding of Information Technology and Organizational Change. In Academy of Management Best Papers Proceedings, 189-193. Academy of Management, Miami Beach, FL.

Govoni, Stephen J. 1992. License to Kill. Information Week, January 6, 22-28.

Greenwood, R., C. R. Hinings, and J. Brown. 1990. "P2-Form" Strategic Management: Corporate Practices in Professional Partnerships. Academy of Management Journal, 33(4):725-755.

Grudin, Jonathan. 1988. Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces. In Proceedings of the Conference on Computer-Supported Cooperative Work, 85-93. The Association for Computing Machinery, New York, NY.

Kiesler, Sara. 1986. The Hidden Messages in Computer Networks. Harvard Business Review, January-February: 46-59.
This research was sponsored by the Center for Coordination Science at the Massachusetts Institute of Technology. This support is gratefully acknowledged. Thanks are due to Jolene Galegher, Bob Halperin, Tom Malone, and Judith Quillard, who provided helpful comments on an earlier version of this paper. Thanks are also due to the men and women of Alpha Corporation who participated in this research. This article first appeared in the Proceedings of the ACM 1992 Conference on Computer-Supported Cooperative Work, Copyright 1992. Association for Computing Machinery, Inc. Reprinted by permission.
13

Cosmos vs. Chaos: Sense and Nonsense in Electronic Contexts

Karl E. Weick

Reprinted by permission of the publisher, from Organizational Dynamics, Autumn 1985. © 1985 Karl E. Weick, American Management Association, New York. All rights reserved.
The growth of electronic information processing has changed organizations in profound ways. One unexpected change is that electronic processing has made it harder, not easier, to understand events that are represented on screens. As a result, job dissatisfaction in the 1990s may not center on issues of human relations. It may involve the even more fundamental issue of meaning: Employees can tolerate people problems longer than they can tolerate uncertainty about what's going on and what it means.

Representations of events normally hang together sensibly within the set of assumptions that give them life and constitute a "cosmos" rather than its opposite, a "chaos." Sudden losses of meaning that can occur when an event is represented electronically in an incomplete, cryptic form are what I call a "cosmology episode." Representations in the electronic world can become chaotic for at least two reasons: The data in these representations are flawed, and the people who manage those flawed data have limited processing capacity. These two problems interact in a potentially deadly vicious circle.

The data are flawed because they are incomplete; they contain only what can be collected and processed through machines. That excludes sensory information, feelings, intuitions, and context-all of which are necessary for an accurate perception of what is happening. Feelings, context, and sensory information are not soft-headed luxuries. They are ways of knowing that preserve properties of events not captured by machine-compatible information. To withhold these incompatible data is to handicap the observer. And therein lies the problem.

When people are forced to make judgments based on cryptic data, they can't resolve their puzzlement by comparing different versions of the event registered in
different media. When comparison is not possible, people try to clear up their puzzlement by asking for more data. More data of the same kind clarify nothing, but what does happen is that more and more human-processing capacity is used up to keep track of the unconnected details. As details build up and capacity is exceeded, the person is left with the question, What's going on here? That emotional question is often so disconcerting that perception narrows, and even less of a potential pattern is seen. This leads people to seek more information and to have less understanding, more emotional arousal, less complete perception and, finally, a cosmology episode. When a person is able to connect the details and see what they might mean, processing capacity is restored.

Meanings that can impose some sense on detail typically come from sources outside the electronic cosmos-sources such as metaphors, corporate culture, archetypes, myths, history. The electronic world makes sense only when people are able to reach outside that world for qualitatively different images that can flesh out cryptic representations. Managers who fail to cultivate and respect these added sources of meaning, and bring them to terminals, will make it impossible for people who work at screens to accurately diagnose the problems they are expected to solve.

This article provides a groundwork for this conclusion. After a brief discussion of how people make sense of the world when they are away from terminals, I will show how those same sense-making processes are disrupted when people return to the terminal. The problem at the terminal is that people no longer have access to data and actions by which they usually validate their observations. When confined to inputs that make invalidity inevitable, people understandably feel anxious. That's when cosmology episodes occur. I will conclude by suggesting what steps organizations can take to avoid such episodes.
SENSE MAKING AWAY FROM TERMINALS

People use a variety of procedures to make sense of what happens around them, five of which are the focus of this analysis. To understand events, people (1) effectuate, (2) triangulate, (3) affiliate, (4) deliberate, and (5) consolidate.
Effectuating

People learn about events when they prod them to see what happens. To learn our way around a new job, we try things to see what gets praised and what gets punished. To see what physical problem a patient has, a physician often starts a treatment, observes the response, and then makes a diagnosis. To discover what their foreign policy consists of, diplomats sometimes give speeches in which a variety of assertions are made. They then read editorial comments to learn what reporters think they "said," how the reporters reacted, and what should be preserved in subsequent speeches and policy statements.
People often say, "How can I know what I think until I see what I say?" People find out what's going on by first making something happen. Doing something is the key. Until I say something, anything, I can't be sure what I think or what is important or what my preferences are. I can't be sure what my goals are until I can observe the choices I made when I had some discretion over how to spend my time. Action is a major tool through which we perceive and develop intuitions.

Machines perform many operations that used to call for professional judgment: operations like reasoning, analyzing, gathering data, and remembering. Now perception and intuition are the major inputs that human beings can contribute when solving a problem with a computer. Since action is the major source of human perceptions and intuition, any assessment of the potential for sense making must pay close attention to action.
Triangulating

People learn about an event when they apply several different measures to it, each of which has a different set of flaws. When perceptions are confirmed by a series of measures whose imperfections vary, people have increased confidence in those perceptions or their conclusions about them.

For example, committee reports, financial statements, and computer printouts are not sufficient by themselves to provide unequivocal data about the efficiency of operations. The conclusions from these data need to be checked against qualitatively different sources such as formal and informal field visits, exit interviews, mealtime conversations in the company cafeteria, complaints phoned to an 800 number, conversations with clients, and the speed with which internal memos are answered. These various "barometers," each of which presents its own unique problem of measurement, begin to converge on an interpretation. The key point is that the convergence involves qualitatively different measures, not simply increasingly detailed refinements, ratios, and comparisons within one set of measures. What survives in common among the several measures is something that is sensible rather than fanciful.
Affiliating

People learn about events when they compare what they see with what someone else sees and then negotiate some mutually acceptable version of what really happened. The highly symbolic character of most organizational life makes the construction of social reality necessary for stabilizing some version of "what is really happening."

People also affiliate when they want answers to specific questions. Herbert Simon explained how affiliation works by using this question as an example: "Do whales have spleens?" Suppose someone asked you that; what would you answer?
Simon’s reply was that he’d make five calls and by the time he got to the fifth one he’d know the answer. In each phone call he’d ask, “Who do you know who’s the closest to being an expert on this topic?” He would call whoever was mentioned and would rapidly converge on the answer.
Deliberating

People learn about events through slow and careful reasoning during which they formulate ideas and reach conclusions. When the reasoning process is drawn out, partially formed connections are allowed to incubate and become clarified, irrelevancies are forgotten, later events are used to reinterpret earlier ones, and all of these processes are used to edit and simplify the initial mass of input. This reduction of input, or deliberation, takes time.

The activity of comprehending a speech is an example of how time can affect deliberation. If a speaker talks to an audience instead of reading a speech, then the speaker's mind works at the same speed as the listener's mind. Both are equally handicapped, and comprehension is high. If, however, a speaker reads a prepared text, the substance is more densely packed and is delivered at a speed that is faster than the listener's mind can work. The listener deliberates while the speaker accelerates, and comprehension decreases.

Of course, we are talking about the speed of the mind, not the speed of the nervous system. Nervous systems can accelerate in response to environmental input from displays such as television or video-game screens. The only way to cope with this acceleration of activity in the nervous system is to stop thinking, because ideas cannot form, dissolve, and combine as fast as eye-hand coordination can make adjustments in response to computer displays. Mindless activity takes less time than mindful activity, and this difference can affect the kind and depth of sense one is able to construct within information systems.
Consolidating

People learn about events when they can put them in a context. The statement, "It is 30 degrees," is senseless until we know whether the context is Centigrade or Fahrenheit. An event means quite a different thing when it is seen as part of a cycle, part of a developmental sequence, random, predetermined, or in transition from one steady state to another.

The power of a context to synthesize and give meaning to scattered details can be seen in the current fascination with the "back to basics" movement. The diverse, unexplainable troubles people have right now are lumped into the diagnosis, "We've strayed from the basics." People think that if they go back to the basics (for example, Kenneth Blanchard's The One Minute Manager), their fortunes will improve. It is interesting that John Naisbitt's Megatrends has a more disorienting, less soothing message.
According to Naisbitt, the basics themselves are changing. Naisbitt's view holds the prospect that events will become even more senseless.

To consolidate bits and pieces into a compact, sensible pattern frequently requires that one look beyond those bits and pieces to understand what they might mean. The pieces themselves generate only a limited context, frequently inadequate to understanding what is happening in the system, what its limitations are, or how to change it. That diagnosis has to be made outside the system and frequently involves a different order of logic. It is often the inability to move outside an information system, and see it as a self-contained but limited context, that makes it difficult to diagnose, improve, and supplement what is happening inside that system.

The famous paradox of Epimenides is an example of a problem in context. "Epimenides was a Cretan who said, 'Cretans always lie.'" The larger quotation becomes a classifier for the smaller, until the smaller quotation takes over and reclassifies the larger one to create contradiction. Gregory Bateson explains that when we ask, "Could Epimenides be telling the truth?"

The answer is: "If yes, then no," and "If no, then yes." . . . If you present the Epimenides paradox to a computer, the answer will come out YES . . . NO . . . YES . . . NO . . . until the computer runs out of ink or energy or encounters some other ceiling.
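Bateson's oscillation can be made concrete with a toy program. The following C sketch is purely illustrative and is not part of Weick's text: a classifier restricted to yes/no answers, where each assumed truth value implies its opposite, simply alternates (here capped at a few iterations rather than a machine's physical ceiling).

/* Toy illustration of the Epimenides oscillation: if we assume the
   statement "Cretans always lie" is true, it must be false, and vice
   versa. A machine limited to binary classification just alternates. */
#include <stdio.h>

static int evaluate(int assumed_truth)
{
    return !assumed_truth; /* each assumption implies its opposite */
}

int main(void)
{
    int answer = 1; /* start by assuming YES */
    for (int i = 0; i < 8; i++) { /* a real machine would hit some ceiling */
        printf("%s ", answer ? "YES" : "NO");
        answer = evaluate(answer);
    }
    printf("...\n");
    return 0;
}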
To avoid the paradox, you have to realize that a context in which classification used to be appropriate has become senseless. It is our inability to step outside, and invoke some context other than classification, that makes the situation senseless.

Consider a different problem. A dog is trained to bark whenever a circle appears and to paw the ground whenever an ellipse appears. If the correct response is made, the dog gets a reward. Now, begin to flatten the circle and fatten the ellipse, and watch what happens. As the two figures become more indistinguishable, the animal gets more agitated, makes more errors, and gets fewer rewards. Why? The animal persists in treating the context as one in which it is supposed to discriminate. When discrimination becomes impossible, the situation becomes senseless, but only because it continues to be treated as a problem requiring discrimination. If the animal moved to a different level of reasoning outside the system and saw that discrimination was only one of several contexts within which it could try to distinguish the look-alike ellipses and circles, then sense might be restored. If, for example, the context were seen instead as one that required guesswork, then there would be no problem. Reframing the situation as guesswork is possible only if you realize that many contexts are possible, not just the one in which your life is lived.

It is the very self-contained character of the electronic cosmos that tempts people, when data make less and less sense, to retain assumptions rather than move to different orders of reasoning. This error is especially apt to be made when information is defined only as that which can be collected and processed by machines. Different orders of meaning, those meanings that can impose new sense, can't be collected and processed by machines. The big danger is that these meanings will then be dismissed rather than seen as vehicles for resolving some of the senseless episodes generated by the assumptions inherent in machine processing.
SENSE MAKING IN FRONT OF TERMINALS

People using information technologies are susceptible to cosmology episodes because they act less, compare less, socialize less, pause less, and consolidate less when they work at terminals than when they are away from them. As a result, the incidence of senselessness increases when they work with computer representations of events.
Action Deficiencies

The electronic cottage is a more difficult site for sense making than people may realize, because events are never confronted, prodded, or examined directly. People's knowledge of events is limited to the ways they are represented by machine and to the ways in which they can alter those machine representations. A crucial source of data, the feedback generated by direct, personal action, is absent.

For example, Shoshana Zuboff describes what happens when a centralized "information interface," based on microprocessors, is placed between operators and machinery in a pulp mill. Operators no longer see directly what happens in pulp operations. They leave a world "in which things were immediately known, comprehensively sensed, and able to be acted upon directly" for a more distant world that requires a different response and different skills. What is surprising is the extent to which managers underestimate what is lost when action is restricted to one place. Zuboff quotes one manager as saying:

The workers have an intuitive feel of what the process needs to be. Someone in the process will listen to things and that is their information. All of their senses are supplying data. But once they are in the control room, all they have to do is look at the screen. Things are concentrated right in front of you. You don't have sensory feedback. You have to draw inferences by watching the data, so you must understand the theory behind it. In the long run you would like people who can take data, trust them, and draw broad conclusions from them. They [workers] must be more scientific.

This manager makes several errors. "Things" are not in front of operators in the control room; symbols are. And symbols carry only partial information that needs to be verified by other means. Operators "don't have sensory feedback," but that's a problem, not a virtue, of technology. The display will substitute indirect for direct experience, because operators will have to "draw inferences" based on "the theory behind" the data. However, theories are just theories, and conjectures and inferences are shaky when based on partial data, tentative regularities, and flawed human induction. Operators are told to "take data . . . and draw broad conclusions from them," but the data are not of the operators' own choosing, nor are they in a form that allows intuition to be part of the inferential process. In the words of another of Zuboff's managers, "We are saying your intuition is no longer valuable. Now you must understand the whole process and the theory behind it." The irony is that intuition is the very means by which a person is able to know a whole process, because intuition incorporates action, thought, and feeling; automated controls do not.

An additional problem with terminal work is the fact that trial and error, perhaps the most reliable tool for learning, is stripped of much of its power. Trials within an information system are homogeneous and correlated. What is tried next depends on what was done before and is a slight variation of the last trial. For example, spreadsheets are the very essence of trial and error, or so it seems. People vary quantities that are acceptable within the spreadsheet program, but they do not vary programs, hardware, algorithms, databases, or the truthfulness of inputs. People vary what the program lets them vary and ignore everything else. Since programs do not have provisions to switch logics or abandon logics or selectively combine different logics, trials are correlated and they sample a restricted range of choices.

The more general point is that trial and error is most effective with a greater number of heterogeneous trials. That is why brainstorming groups often come up with solutions that no individual would have thought of before the discussion started. In these groups, suggestions are idiosyncratic and unconnected, but they sample a broader range of possibilities and improve the odds that someone will stumble onto a solution that lies outside traditional lines of thought. Spreadsheets do not let people introduce whatever comes to mind or follow lines of thought that have arisen from previous comments or inputs to whatever conclusions these thoughts may lead. These constraints are action deficiencies, because they restrict the ways in which the target can be manipulated, which restricts what can be known about the target.
Comparison Deficiencies

Action is a major source of comparative data, which is one reason that the sedentary quality of information systems is so deadly. Moreover, information systems either do not give access to much of the data about a phenomenon or treat those data as noise. Not enough different perspectives are compared to improve accuracy. The illusion of accuracy can be created if people avoid comparison (triangulation), but in a dynamic, competitive, changing environment, illusions of accuracy are short-lived, and they fall apart without warning. Reliance on a single, uncontradicted data source can give people a feeling of omniscience, but because those data are flawed in unrecognized ways, they lead to nonadaptive action.

Visual illusions such as those depicted in Figure 13-1 are a metaphor for what happens when triangulation is ignored. The point of a visual illusion is that the eye can be tricked. But that is true only if you maintain a fixed eye position and do nothing but stare.
FIGURE 13-1 Two optical illusions. Zöllner's illusion: the long vertical lines are strictly parallel although they appear to converge and diverge. Top-hat illusion: height and width are in reality equal. From Optical Illusions and the Visual Arts, by Ronald G. Carraher and Jacqueline B. Thurston. Copyright © 1966 by Litton Educational Publications. All rights reserved.
If you tilt the illusion, view it along an edge, measure it, look at it from a different angle, or manipulate it, the illusion vanishes. As you manipulate the object, you add to the number of sensory impressions you initially had and therefore should run the risk of overload. Actually, however, you get clarity, because the several active operations give you a better sense of what is common among the several different kinds of information. One thing you discover is that the specific illusion that you saw when you did nothing disappears when you do something. Moving around an illusion is an exercise in triangulation because different perspectives are compared. Moving around is also an exercise in action that tells us about an object.

It is difficult to triangulate within a computer world because it's highly probable that the blind spots in the various alterations tried on a representation will be similar. For example, consider a simulated, three-dimensional computer design that represents bone fractures. The object is seen from several vantage points, but the program's assumptions are carried along with each view and are neither detected by the observer nor canceled by perspectives that make a different set of assumptions. Thus the system will keep making the same errors.

If you take a computer printout into the field and hold it alongside the event it is supposed to represent (for example, the behavior of a purchasing agent), the chances are good that the actual event will be noisier, less orderly, and more unique than is evidenced by the smoothed representation on the printout. Even though different kinds of potential error are inherent in a printout reading and a face-to-face observation, some similarities will be found when comparing these two perceptual modes. Those similarities are stable features of the observed phenomenon and are worth responding to. The differences between the two are the illusions (errors) inherent in any specific view of the world. What's important to remember is that if people stick to one view, their lives may be momentarily more soothing, but they also become more susceptible to sudden jolts of disconfirmation.
Affiliation Deficiencies

Terminals are basically solitary settings. Christopher Lehmann-Haupt described computing as "quantified narcissism disguised as productive activity." Of course, computing is not always solitary; FAA air-traffic control systems assign two controllers to each "scope." But when the face-to-face, social character of sense making in information systems decreases, several problems can arise.

First, less opportunity exists to build a social reality, some consensus version of events as they unfold. Different people viewing the terminal display see different things, because they are influenced by different beliefs. There is a grain of truth in each of the different things that are "seen." As people work to build an interpretation they all can agree on, these grains of truth find their way into the final account and make that account more objective. Cut off from this diversity and the negotiation process itself, the solitary person sees less of what there is to see. Even more troublesome, when a situation is ambiguous, is that invention of some version of reality is the only way to cope. When uncertainty is high, it's especially important to know what other people think, and what their analyses have in common with one's own.

A more subtle social issue in sense making is pointed out by Marion Kester's striking question: "If children are separated from their parents by hours of TV, from their playmates by video games, and from their teachers by teaching machines, where are they supposed to learn to be human?" A recent study of electronic mail in an open office found that people used terminals to communicate with the person in the next cubicle even when they could stand up, lean over the cubicle, and ask the person the same question face-to-face. Thus it would seem Kester's worry is not an idle one.

Extensive nonsocial interaction with a terminal can atrophy social skills. That becomes a problem when people confront an uncertain situation in which they have to construct a jointly acceptable version of reality. If they participate in such discussions with minimal social skills, the interaction may not last long enough or probe deeply enough to build a decent model that people can work with. If clumsy interactions distort social realities, then failure is inevitable. Working agreements about what is going on can make even the most incomplete electronic representations look coherent. This is so because consensual information fills in the gaps in electronic representations. However, when social skills are in short supply, those gaps may remain unfilled. That's when people begin asking, "What's going on here?"
Deliberation Deficiencies

Mander raises the interesting point that in an age of computers and information flow, the operative phrase may not be "small is beautiful," but rather "slow is beautiful." Deliberation takes time, yet that's the very thing that disappears when the velocity of information flow intensifies in information systems.

A more subtle problem with this acceleration is that computers operate close to the speed at which the stream of consciousness flows. This means that the whims and mixtures of feeling, thought, and images that flow through consciousness can be dumped into the analyzing process continuously. Not only does this increase demands on the person coming in afterward, who must deal with this kind of input; it also makes it harder to see priorities, preferences, and hierarchical structure and to separate the trivial from the important.

The run-on sentences that have become a trademark of people writing with word processors exemplify this problem. As fast as images and possibilities bubble up, they are typed in and strung together with the conjunction and, which renders all images equally important. Most of what is typed is junk. But without discipline, self-editing, and deliberation, junk is left for someone else to wade through. The sheer volume and variety in an externalized stream of consciousness make it harder to separate figure from ground, which sets the stage for a cosmology episode.
Consolidation Deficiencies

When spontaneous material from the stream of consciousness replaces deliberated thoughts and images based on data outside the information system, understanding becomes a problem. It is the very self-contained character of information systems that can undercut their value. Users fail to see that they need to reach outside the system for a different set of assumptions to understand what is happening inside the system. Herbert Simon explains that:

Whether a computer will contribute to the solution of an information-overload problem, or instead compound it, depends on the distribution of its own attention among four classes of activities: listening, storing, thinking, and speaking. A general design principle can be put as follows: An information-processing subsystem (a computer or new organization unit) will reduce the net demand on the rest of the organization's attention only if it absorbs more information previously received by others than it produces; that is, if it listens and thinks more than it speaks.
But to register and absorb information (to listen and think), the sensor must be at least as complex as the information it is receiving and, often, information systems fall short. The sensor must go beyond mere enumeration if it is to synthesize detail. To go beyond detail is to move to higher levels of abstraction and to invoke alternative realities. At these higher levels, feeling informs thinking, imagination informs logic, and intuition informs sensation. Feeling, imagination, and intuition use vivid, compact images to order the details in a way that the system cannot. This is why metaphors that draw on our common culture, fairy tales, or archetypes ("This place is like a cathouse"; "Our agency is like the tale of Rumpelstiltskin"; "Each quarter we live through the four seasons") and novel labels or idioms (greenmail, golden parachutes, fast trackers) have such evocative power in linear systems. Each of these summarizing devices does three things: presents a compact summary of details, predicates characteristics that are difficult to name, and conveys a more vivid, multilevel image. All of these devices represent ways to absorb detail using logics that are qualitatively different from those contained within information systems.
WAYS TO IMPROVE SENSE MAKING

What is surprising is how many of the problems described here can be solved if people simply push back from their terminals and walk around. When people walk around, they generate outcomes (effectuate), compare sources of information (triangulate), meet people and discover what they think (affiliate), slow down the pace of input (deliberate), and get a more global view of what is happening (consolidate).
Recent jokes about the invention of a new tool for word processing, which turns out to be a pencil, may be replaced by another joke about the new tool for managing called "Pull the plug and go for a walk." The swiftness with which the idea of management-by-walking-around spread and the intensity with which people tout its benefits may be explained by the fact that many things that look like problems when they are viewed from a fixed position vanish when one changes position. Just as illusions disappear when you move them around or move around them, so too do problems disappear when they no longer are confined to one medium and one set of assumptions.

People who carry terminals into the field should be better problem solvers than are people who leave terminals at home, because people with terminals in the field are able to use different forms of data and test their hunches with triangulation. A computer program can have action steps that ask people to leave the terminal, walk around, and come back, after which the program can ask them some questions. Imagine, for example, that a manager is trying to figure out whether there is a market for brand-name vegetables. He or she examines demographics and buying patterns and extrapolates trends; then the screen says, "Go walk through a supermarket for two hours and come back." (That same action step is appropriate for all kinds of related problems, from the question of whether you should purchase Conrail to what level inventories should be held at.)

The reason that the supermarket tour is appropriate for such diverse agendas is that it generates data that differ from those on the screen. The problem is seen in a different setting and thus is viewed differently: The supermarket is a place where people handle vegetables in distinctive ways, which might suggest what kinds of vegetable wrappers are appealing. The supermarket is a place stocked with items that could be shipped by rail, or perhaps these items could be handled efficiently only by other modes of transportation. Or perhaps the supermarket is seen as a place where stock moves directly from trucks to shelves, and just-in-time strategies are being used more widely than the person doing this exercise realized, so his or her own distribution needs to get more attention. With these vivid, nonmachine images in mind, the person returns to the terminal and sees its displays in a different light. The same notations take on different meanings, and more is seen.

While augmentation of sense making can occur if people become more mobile, other actions need to be taken as well. When any reorganization or change in information systems is contemplated, companies should systematically examine what those changes will do to action (effectuation), comparison (triangulation), interaction (affiliation), deliberation, and consolidation. A significant, permanent decrease in any of those five raises the likelihood that employees will know less about phenomena and will make more mistakes in managing them.

If any one of these five declines, local remedies are possible. If the potential for action drops, insert more breaks, longer breaks, or more interactive displays that allow for a wider variety of personal experiment, or encourage the use of portable computing equipment. If the potential for comparison drops, make greater use of tie-ins between terminals and visual simulations of the phenomena being monitored, locate terminals closer to the events they control, or add other sensory modalities to the output from terminals. When interaction is lessened, assign two people to one terminal, have one person tell another what is being observed, set up teleconferencing and have operators' pictures continually visible in the corner of the display, or allow more intermixing of solitary work with group work. When deliberation drops off, more time can be allocated for summarizing and thinking about information away from terminals, several people can be assigned to the same terminal so they are forced to spend some of their time thinking somewhere else, or processing can be slowed down to allow time to ponder what is displayed. Finally, when consolidation tapers off, people can read poetry, look at art, question assumptions, or engage in whatever activities will expose them to syntheses, theories, or generalizations that can put the inputs being considered into a new context.

The preceding analysis implies that overload is not really the problem with information systems. Rather, confinement to a terminal is the problem, because it limits the variety of inputs, precludes comparison, and thus makes sense making more difficult. Overload occurs when you get too much of the same kind of information; ironically, if you increase the kinds of information you get, overload declines. In changing the quality of information about a phenomenon, one is able to see what stays constant and what changes. Impressions that change are method-specific. Impressions that don't are likely to be stable features that need to be dealt with. Since the common elements are fewer in number and better organized, they also make fewer demands on processing capacity. The key point is that overload can be reduced by moving around and thus getting a variety of inputs. As the number of vantage points increases, the amount of overload decreases.

A second implication of my analysis is that people and groups need to listen more and talk less. The value of an information system lies in what it withholds, as much as in what it gives. Listening and withholding require editing and categorizing, and these, in turn, require typologies, concepts, and ideas. The detail, specificity, and concreteness that can be achieved by information systems are worthless until patterns are imposed on them. Some of these patterns are inherent in the system itself, but most are found outside of it. People must listen for these patterns and, when they hear them in detail, transmit the pattern rather than the detail.

Third, not only do people need to listen; they need to edit. Job descriptions in the information organization need to specify each person's responsibility to absorb uncertainty and to transmit less than they receive. While there is always danger that people will edit out the wrong things, an even greater danger is that they will leave in too much, and thus paralyze themselves or those who come in after with too much detail. While electronic processing has the potential for everyone to check up on everyone else all the time, that kind of scrutiny will probably be infrequent because of the sheer quantity of work involved. It is more likely that faith and trust will become increasingly important as people become more dispersed, delegation is practiced more fully, and people come to depend on others to fill in their own limited, obsolete knowledge.

Finally, corporate culture takes on added importance in the context of the preceding arguments. Culture provides the framework within which cryptic data become meaningful. Current efforts to articulate culture may represent efforts to cope with intensified commitments to electronic processing, because it takes the former to understand the latter. Electronic organizations need to develop new respect for generalists, philosophers, and artists, because all three work with frameworks that provide context and meaning for the programs already in place.
CONCLUSION

Managers need to be just as attentive to meaning as they are to money. As organizations move more and more vigorously into electronic information processing, they will increasingly bump up against the limits of human-processing capacity. The key to overcoming these limits is meaning, because it increases processing capacity. And meanings that free up capacity usually originate outside the information-processing system in the form of different assumptions and contexts. Unless these qualitatively different kinds of logic are developed, disseminated, and valued by the organization, people will find themselves increasingly unable to make sense of the products of information technology.
SELECTED BIBLIOGRAPHY

For further reading about sense making, see The Social Psychology of Organizing, by Karl E. Weick (Addison-Wesley, 1979). Gregory Bateson's work on logical types is Mind and Nature (E. P. Dutton, 1979). Marion Kester's and Jerry Mander's comments about computers appear in the December 1984 issue of the Whole Earth Review. Another work that discusses the limitations of computers is The Network Revolution, by Jacques Vallee (And/Or Press, 1982). For more on corporate culture as a source of meaning, see the September 1983 issue of Administrative Science Quarterly. Herbert Simon's discussion of overload appears in M. Greenberger's book titled Computers, Communications, and the Public Interest (Johns Hopkins Press, 1971). Christopher Lehmann-Haupt's view of computing may be found in his book review that appeared in the New York Times (October 3, 1984). Finally, Shoshana Zuboff discusses human adaptation to automated factories in "Technologies That Informate" (in Human Resource Management: Trends and Challenges, edited by R. Walton and P. Lawrence, Harvard Business School Press, 1985).
Future: A Knowledge-Based System for Threat Assessment

P.J. deJongh, K.J. Carden, and N.A. Rogers
During 1988 and 1989, we developed a successful knowledge-based decision support system, called Future, for the South African Navy (SAN). The Naval Intelligence Division uses the system daily. It is a typical example of an expert system in the situation assessment domain. We developed Future using C and Prolog and the blackboard architecture and coupled it with an existing data-base system.

During 1987, we were approached by a senior naval officer, who expressed an interest in expert systems and their application to problems in the intelligence division. A proposal to develop an expert system, which was accepted by the navy, was then drawn up by the officer and the first author. During 1988, only the first two authors were involved in the project. The first author acted as project leader and senior knowledge engineer, while the second author acted as knowledge engineer and expert system designer. The third author acted as system analyst and was responsible for delivering the final system.

Two features distinguish this project from the average knowledge-based systems project:

- Many parties were involved in developing the system, necessitating proper management of the project; and
- The knowledge acquisition process involved multiple experts who had implicit knowledge.

The management of the project and the knowledge acquisition process were crucial to the success of the project.
Reprinted by permission, P.J. deJongh, et al., "Future: A Knowledge-Based System for Threat Assessment," Interfaces, Volume 24, March-April 1994. Copyright 1994, The Institute of Management Sciences and the Operations Research Society of America (currently INFORMS), 290 Westminster Street, Providence, RI 02903 USA.
BACKGROUND

One of the major tasks of the naval intelligence division is to monitor shipping movements worldwide to identify possible threatening events. It is the responsibility of the naval intelligence officer to identify as early as possible a ship or group of ships that might pose some threat. This is a daunting task, since at any time thousands of ships are sailing the oceans of the world and one has to look for strange events or out-of-the-ordinary patterns. Examples of possible threatening events are attacks by hostile naval vessels, reporting of South African operational activities, illegal fishing in South African territorial waters, and the formation of task forces. The above are but a few of the multitude of possible threats.

Before the navy implemented the Future system, navy operators monitored shipping movements by querying a data-base management system in which hundreds of ship position reports were entered daily. They then plotted interesting cases manually on a map, and experts interpreted the cases and predicted possible threats. Clearly, important events could slip by undetected. Operators were frequently transferred to other branches of the navy, and they were often replaced by inexperienced personnel. The experts on assessing shipping threats held middle-management positions and performed other critical tasks, leading them to neglect interpretation and prediction of threats. The absence of any documentation on threat assessment aggravated the situation; it meant that experts were needed continuously. Although knowledge on threat assessment was available in the organization, it was not always available at the right place! The navy realized that it needed a system or mechanism to make the basic expert knowledge available to the operators. We recommended that it develop a knowledge-based system.
THE DEVELOPMENT METHODOLOGY

The developers of the first successful expert systems did not follow the conventional or traditional software engineering approaches. Such approaches are rigorous and well structured and work well for data-processing projects and projects with well-specified client requirements and low development risk. In our opinion, the developers of the early successful expert systems did not follow traditional approaches because the development risk was high and the client's real needs were often vague. Instead, they opted for the more flexible but unstructured rapid prototyping approach. Rapid prototyping hardly constitutes a methodology but is rather an ad hoc collection of tools and methods. It worked well for the early stand-alone and, typically, small expert systems. However, rapid prototyping can be dangerous for large-scale projects because prototypes often become operational systems. This can result in very expensive rewrites should the systems keep on expanding.

Because of these factors and because the risk in developing expert systems has decreased, developers are currently moving back to traditional software engineering approaches and thus towards a more rigorous methodology for developing expert systems. We followed such an approach, which, in retrospect, closely resembles the CONCH approach [Hickman et al. 1989; Taylor, Porter, and Hickman 1989]. (CONCH stands for "client-oriented normative control hierarchy.") We followed this approach because it offers the project leader the best of two worlds by retaining the best features of the traditional software engineering methodologies and of rapid prototyping. Also, the methodology has a natural tendency to reduce risk throughout the course of the project by, for example, involving the experts and client on a regular basis. This ensures realistic expectations of the project, especially in its early critical stages. It also promotes good communication. It achieves these goals by iteratively cycling through the three phases of the development process: the analysis phase, the design phase, and the implementation phase. In each of these phases, one iterates through four stages: the planning stage, the development stage, the review stage, and the risk stage (Figure 14-1).

FIGURE 14-1 The spiral represents the development path of a project. The project starts in the analysis phase and in the review stage. The first arc represents the typical activities at the start of any project: reviewing the existing systems, initial scoping, defining objectives, and organizing meetings. The spiral then goes into the risk stage where areas of uncertainty are defined. In the plan stage, plans for the first development activities are made, which are carried out in the next stage. After completing the analysis phase, the design and implementation phases are tackled in the same way. Note that, depending on the nature of the particular project, one can spiral through the phases (analysis, design, and implementation) as many times as necessary.
In the planning stage, plans are made for the development stage. In the development stage, developmental activities are undertaken. The results of these activities are then reviewed in the review stage, and in the risk stage, areas of uncertainty that could constitute potential risks are identified (Figure 14-1).

At the start of any project, we would begin with the analysis phase in the review quadrant. The spiral in Figure 14-1 represents the development path, with the first arc representing the typical activities at the start of any project: reviewing the existing system, initial scoping, defining objectives, and organizing meetings. The products of this stage are typically minutes of meetings and other notes. The spiral then takes us into the risk stage, where we identify areas of uncertainty and risk. (Incidentally, prototyping is a useful tool in this stage.) The products of this stage are feasibility reports and cost-benefit studies. In the planning stage, we make plans for the first development activities in the analysis phase, such as analyzing requirements and acquiring knowledge. The products of this stage are project plans. Then we undertake development activities, with the products being documents, source code, or models. Depending on the particular project, we can spiral through the phases (analysis, design, and implementation) as many times as necessary.

This approach breaks each cycle up into four stages with natural milestones at the end of each stage. This leads to regular client involvement and rapid feedback to the client and experts, and it reduces the client's uncertainty concerning the project's progress and deliverables. In project Future, we cycled four times through the analysis phase, three times through the design phase, and once through the implementation phase, as sketched below. The deliverables were a paper-knowledge base, a system description, software and source code, project plans for each phase, and cost-benefit studies. We updated the paper-knowledge base eight times during the development (corresponding to the eight cycles), but we produced formal copies only at the end of each phase.
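The cycle structure can be made concrete with a small program. The following C fragment is illustrative only: the stage ordering follows Figure 14-1, and the cycle counts are those reported above for project Future; it is not part of the CONCH method itself.

/* Schematic walk through the spiral: three phases, each traversed for
   a number of cycles, each cycle passing through four stages in the
   order of Figure 14-1. Eight cycles in total, matching the eight
   updates of the paper-knowledge base. */
#include <stdio.h>

int main(void)
{
    const char *phases[] = { "analysis", "design", "implementation" };
    const int   cycles[] = { 4, 3, 1 };   /* as reported for project Future */
    const char *stages[] = { "review", "risk", "plan", "develop" };

    for (int p = 0; p < 3; p++)
        for (int c = 1; c <= cycles[p]; c++)
            for (int s = 0; s < 4; s++)
                printf("%-14s cycle %d: %s stage\n", phases[p], c, stages[s]);
    return 0;
}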
THE KNOWLEDGE ACQUISITION PROCESS

In our preliminary one-on-one interviews with the experts, they seemed knowledgeable, but they had great difficulty verbalizing what they did and why they did it. This situation made them uneasy, and they appeared threatened. We decided to schedule a series of working sessions for the two knowledge engineers to work with the four experts. These working sessions with multiple experts had several advantages over interviews:

- The experts stimulated one another's thought processes, which led to valuable discussions;
- The senior knowledge engineer's attention was evenly divided among the experts, which made them feel less threatened; and
- The experts reached consensus on the structured knowledge, and thus a broad spectrum of experts owned the knowledge.
Apart from dealing with implicit knowledge, we needed to develop mathematical models to derive and model some of the indicators for threat identification. In the working sessions, the first author and senior knowledge engineer acted as facilitator and as model builder, while the second author supported the knowledge engineering activity but concentrated more on expert system design issues. It was crucial to the success of this system that the knowledge engineering team was competent in facilitating, model building, and expert system design principles.

By the end of the analysis phase we wanted to achieve the following goals:

- A list of the basic tasks and goals for the system;
- A list of all factors, threat indicators, and relevant domain concepts;
- Descriptions of typical threats;
- A completely structured and documented threat identification process; and
- A list of all system output requirements.

In all we held four working sessions, each of which typically took two days with the experts. As we expected, and because the knowledge was implicit, the early sessions were sometimes characterized by

- Open confrontation among experts;
- Long discussions on detailed issues;
- Unstructured knowledge making little sense; and
- Scepticism on the part of the experts, who did not believe in the ultimate success of the project.
To compound this unwanted situation, some of the experts had unrealistically high expectations concerning the contribution of the knowledge engineers and expert systems in general. However, at the end of the analysis phase, we had reached all our goals and all the experts were looking forward enthusiastically to the working sessions scheduled for the design phase. How did we achieve this transformation? We relied heavily on interpersonal relationship skills. Without these skills the project would have been a disaster. The following are a few points that we found particularly important and that we think would be important to the success of similar projects:

Knowledge engineers should ensure that a relaxed atmosphere exists during working sessions. The client and the experts should understand their own roles and that of the knowledge engineers. Knowledge engineers should discuss the project plan with them, stating what is required from them in terms of their time and commitment. They should make sure they are aware of the skills of the members of the knowledge engineering team (for example, facilitating, model-building, and knowledge of expert system building tools).
They should make sure the experts understand that the team is there to help them and not to judge their worth. Also, they should assure them that the system will support and not replace them. They should point out what knowledge engineers cannot do.

Knowledge engineers should gather all the facts by listening to everything the experts want to say and by asking probing questions. By listening to what people have to say, they make them feel important. By using the terminology of the expert, they improve communication. By being as thorough as possible, they make sure they fully understand. By paying attention to nonverbal signals and body language of the experts, they sometimes obtain valuable information.

Knowledge engineers should pay careful attention to documentation. In this project, we kept and filed the following documents:

- Agendas and minutes of meetings;
- Documents containing the knowledge and reasoning processes of the experts (the so-called paper knowledge base); and
- A dictionary of terms that included definitions and descriptions of all relevant concepts.

Knowledge engineers should work to maintain the enthusiastic involvement of the client and the experts throughout the project. We did this in our project by keeping the client and the experts informed about the progress of the project. We made sure that the client had agendas well in advance of meetings and minutes soon after meetings. The client should also be told early of any possible delays in achieving milestones and of decisions that he or she is to make. The experts should be supplied with a constant flow of information:

- The latest versions of the paper knowledge base and dictionary of terms;
- Questions on the logic of the structured knowledge;
- Ideas on new ways of structuring the knowledge; and
- The latest version of the software developed.
All of this serves to stimulate the experts between sessions, so that new ideas to solve problems are generated quickly.

The knowledge engineers should prepare well for meetings and try to anticipate what can go wrong during meetings. They should be prepared to switch approaches; for example, if distinguishing goals does not work, they should try dividing the domain [Hart 1986]. They should be careful not to impose tools that are alien to the experts at an early stage in the project but rather let them choose their own methods of representing their knowledge. Knowledge engineers should keep in mind that they do not have to reach their objectives during the first meeting but should obtain them through constant iteration.
The System

Future is a real-time knowledge-based system in the situation assessment domain [Funk 1988]. As implied by the term situation assessment, the task of such a system is to assess and interpret any given situation. Such a system is thus event driven and responds to changes in its world of interest, usually a data base. The system must be designed to read information on events, to build up a picture of the situation, interpret it, display the results, and then cycle back to receive more information. This type of event-driven system is thus to be contrasted with the more common goal-driven or backward-chaining system most often described in the expert systems literature. The domain of situation assessment lends itself to solution using the blackboard approach [Engelmore and Morgan 1988].

Figure 14-2 shows the overall structure of the Future system. Future runs on a data-base extract that contains information on ships, such as identification number, name, type, size, and country of origin, and a history of geographical positions reported over time. Only a history of current shipping information, that is, information on those ships with at least one position reported recently, is contained in the data-base extract. Four main subsystems constitute the Future system: the modeling subsystem (MSS), the knowledge-based subsystem (KBSS), the user interface (UI), and the blackboard. All communication in the system is via the blackboard.
FIGURE 14-2 The Future system runs on a data-base extract that contains a history of current shipping information. Four main subsystems constitute the Future system: the modeling subsystem, the knowledge-based subsystem, the user interface, and the blackboard. All communication is via the blackboard. The modeling subsystem extracts shipping data from the data-base extract and performs calculations on the position data to derive certain threat indicators. It then writes these indicators to the blackboard along with other indicators extracted directly from the data base. The knowledge-based subsystem monitors all the threat indicators and reasons in the fashion of an expert to derive possible threats and the potential of each threat.
The modeling subsystem extracts shipping data from the data-base extract and performs calculations on the position data to derive certain threat indicators (for example, ship speed, predicted positions, short- and long-term activities, proximity to maritime assets, and intended movement of the ship with respect to maritime assets) and writes these indicators to the blackboard along with other indicators extracted directly from the data base (for example, ship type and country of origin). The knowledge-based subsystem (consisting of a rule base and an inference engine) monitors all the threat indicators on the blackboard. Using this information, the KBSS then reasons in the fashion of an expert to derive possible threats and the potential of each threat. This is then communicated back to the blackboard. All data are accessible via the user interface, which incorporates cascading choice boxes and electronic maps. The threats are organized in five categories: tactical threats, operational planning threats, high-density areas, naval groupings, and fishing groupings.

The system runs automatically each morning just before the operators arrive at work, but it can also be operated whenever necessary. When the operator signs on, he or she has the ability to enter or change a hostility index relating to country codes. Basically, the operator designates a country as being one of the following: enemy, hostile, neutral, sympathetic, or ally. He or she also has to enter so-called future target areas (FTAs), which have been defined as any area containing South African maritime interests (for example, a piece of coastline or a South African ship). The midpoint (geographical co-ordinates) and size (radius in nautical miles) of each FTA, along with a future time of analysis (in days with respect to the date and time of analysis), must be specified. The information is then used to analyze and classify the different threats into one of the five categories. Due to security restrictions, we can only describe part of the system. We will give some examples of tactical threats and naval groupings using fictitious data.
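To make the architecture concrete, here is a minimal C sketch of the blackboard idea described above. The struct fields and function names are illustrative assumptions, not Future's actual (classified) schema; they merely show how the MSS could post derived indicators that the KBSS later reads and annotates with a threat potential.

/* Minimal blackboard sketch: the MSS posts one entry of threat
   indicators per ship; the KBSS reads entries back and writes the
   derived threat potential into them. All names are illustrative. */
#define MAX_SHIPS 4000

typedef enum { ALLY, SYMPATHETIC, NEUTRAL, HOSTILE, ENEMY } Hostility;

typedef struct {                  /* future target area, set by the operator */
    double mid_lat, mid_lon;      /* midpoint, geographical co-ordinates */
    double radius_nm;             /* size: radius in nautical miles */
    double future_days;           /* future time of analysis, in days */
} FTA;

typedef struct {                  /* one ship's indicators on the blackboard */
    int       ship_id;
    Hostility hostility;          /* from the operator's hostility index */
    double    speed_knots;        /* derived by the MSS from position reports */
    double    pred_lat, pred_lon; /* predicted position at time of analysis */
    double    fta_range_nm;       /* proximity to the FTA of interest */
    int       closing;            /* 1 if intended movement is toward the FTA */
    int       threat_potential;   /* 0-100, written back by the KBSS */
} Entry;

typedef struct {
    Entry entries[MAX_SHIPS];
    int   count;
} Blackboard;

/* MSS side: post (or update) the indicators for one ship. */
void post_indicators(Blackboard *bb, const Entry *e)
{
    for (int i = 0; i < bb->count; i++)
        if (bb->entries[i].ship_id == e->ship_id) { bb->entries[i] = *e; return; }
    if (bb->count < MAX_SHIPS)
        bb->entries[bb->count++] = *e;
}

/* KBSS side: fetch the indicators for one ship (null if absent). */
Entry *read_indicators(Blackboard *bb, int ship_id)
{
    for (int i = 0; i < bb->count; i++)
        if (bb->entries[i].ship_id == ship_id)
            return &bb->entries[i];
    return 0;
}

Because all communication passes through this shared store, the MSS and KBSS need know nothing of each other's internals, which is the main attraction of the blackboard design for event-driven situation assessment.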
An Example of a Tactical Threat

Tactical threats are basically immediate threats and can be classified in a number of categories (for example, attack, reporting, or surveillance). (We had to document all these definitions in the knowledge base, since no documentation on threat assessment existed prior to project Future.) The Future system determines whether each ship has any potential of posing a tactical threat (on a scale ranging from zero to 100). The potential is determined by the KBSS; it considers the vessel's capability, intention, and opportunity of posing the particular threat against an FTA. The vessel's capability is determined by considering its type and size, its intention is determined by considering derived threat indicators, such as hostility and short- and long-term activities, and its opportunity is determined by considering derived threat indicators, such as its proximity and intended movement with respect to the FTA of interest. The judgments are encoded in the form of production rules, which constitute the knowledge base.
FIGURE 14-3 An example of a ship posing an attack threat. The FTA name, tactical threat type, and date and time of analysis appear at the top of each map and the threat indicators at the bottom. The expert system shows the user on 29 April 1991 at 8 o'clock in the morning the first of nine ships posing an attack threat against the RSA east coast. The RSA east coast is denoted by the cursor, while the arrow points at the last recorded position of the ship. Information on the ship appears at the bottom of the screen: its identification number, ship name, ship type, hostility index, long- and short-term activity, position, and threat potential. Ship speed is calculated as 12.4 knots, and the ship is closing rapidly towards the RSA east coast. ETA is the abbreviation for the expected time of arrival at the FTA.
All threat types and their associated potentials are listed on the blackboard. The operator can then browse through a list of ships posing particular threats. Alternatively, the operator can view, on an electronic map, detailed information about each vessel that poses a threat. Figure 14-3 shows an example of a tactical threat.
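A hedged sketch of the capability/intention/opportunity reasoning just described follows. The individual scores, the range cut-offs, and the choice to let the weakest factor cap the overall potential are all invented for illustration; the article gives neither Future's actual scoring nor its combination rules, and the real rule base is classified.

/* Illustrative production-rule sketch: a tactical-threat potential on
   a 0-100 scale derived from the vessel's capability, intention, and
   opportunity against one FTA. All scores and cut-offs are invented.
   The enums are repeated here so the fragment stands alone. */
typedef enum { ALLY, SYMPATHETIC, NEUTRAL, HOSTILE, ENEMY } Hostility;
typedef enum { MERCHANT, FISHING_VESSEL, FRIGATE, DESTROYER } ShipType;

/* Capability: judged from vessel type (and, in Future, also size). */
static int capability(ShipType t)
{
    switch (t) {
    case DESTROYER:      return 100;
    case FRIGATE:        return 90;
    case FISHING_VESSEL: return 20;
    default:             return 10;
    }
}

/* Intention: judged from the hostility index and derived activity
   indicators such as short- and long-term activities. */
static int intention(Hostility h, int activity_suspicious)
{
    int base = (h == ENEMY) ? 90 : (h == HOSTILE) ? 60 :
               (h == NEUTRAL) ? 20 : 0;
    return activity_suspicious ? base : base / 2;
}

/* Opportunity: judged from proximity and intended movement with
   respect to the FTA of interest. */
static int opportunity(double range_nm, int closing)
{
    if (range_nm < 50.0)  return closing ? 100 : 70;
    if (range_nm < 300.0) return closing ? 70 : 30;
    return closing ? 30 : 5;
}

/* One plausible combination rule: a threat requires all three factors,
   so the weakest factor caps the overall potential. */
int tactical_threat_potential(ShipType t, Hostility h,
                              int activity_suspicious,
                              double range_nm, int closing)
{
    int c = capability(t);
    int i = intention(h, activity_suspicious);
    int o = opportunity(range_nm, closing);
    int m = (c < i) ? c : i;
    return (m < o) ? m : o;
}

For the ship in Figure 14-3 (an enemy frigate, closing rapidly, ETA zero days), rules of this shape would yield a high potential, consistent with the "100 High" reading shown on that display.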
An Example of a Naval Grouping

To identify the formation of task forces, we used a single-linkage cluster analysis [Everitt 1980]. For all ships of a particular country, we predicted each ship's position and course as at the date and time of analysis, and then calculated distances between ships in order to derive ship groupings by means of a cluster analysis. (A sketch of this computation appears after Figure 14-4.) An example of a specific grouping is given in Figure 14-4. The country name, hostility classification, type of grouping, and date of analysis appear at the top of the map. The arrow points at a particular ship in the grouping, and information on that ship appears at the bottom of the map.

FIGURE 14-4 An example of an enemy task force forming off Madagascar. The expert system shows the user on 29 April 1991 at 8 o'clock in the morning the fourth of five threatening groupings. The arrow points at the first of four ships in the particular grouping. The information on the ship appears at the bottom of the screen: its identification number, ship name, ship type, short- and long-term activity, position, and course. Note that two ships have exactly the same predicted position. The operator can browse through all the ships in the particular grouping.
Future: A Knowledge-Based System for Threat Assessment
271
more-or-less once daily, we did not have to make a big effort to design a complex real-time system. One of the items on the initial wish list was that the system have a rule editor so that experts could enter their own rules. We thought this would be risky, and we persuaded the experts to drop this idea. It is well known that such an editor works well only when the experts are familiar with the underlying inference mechanism or when the knowledge is only one level deep. Throughout the project, we followed a multi-disciplinary approach. The knowledge engineers had formal training in such disciplines as statistics, engineering, and computer science. We aimed to construct a parsimonious rule base, representing experts’ knowledge with as few rules as possible. This not only makes the design more efficient, but it makes it easier to develop explanation facilities. The number of rules is not always a measure of the complexity of the knowledge-based system.
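Everitt [1980] gives the full method; a toy version of the grouping step, with an invented distance threshold and data layout, might run as follows:

```python
# Hypothetical sketch of single-linkage grouping of ships [Everitt 1980].
# The distance threshold and data layout are illustrative, not from the article.
import math

def single_linkage_groups(positions, threshold_nm=30.0):
    """Group ships whose chained pairwise distances fall under the threshold.

    positions: dict mapping ship id -> (x, y) predicted position.
    Returns a list of sets of ship ids (the groupings).
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return math.hypot(x1 - x2, y1 - y2)

    groups = [{ship} for ship in positions]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                # Single linkage: merge if ANY cross-pair is close enough.
                if any(dist(a, b) <= threshold_nm
                       for a in groups[i] for b in groups[j]):
                    groups[i] |= groups.pop(j)
                    merged = True
                    break
            if merged:
                break
    return groups

ships = {"SHIP2": (0, 0), "SHIP7": (10, 5), "SHIP9": (200, 150)}
print(single_linkage_groups(ships))  # two groupings: {SHIP2, SHIP7} and {SHIP9}
```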
RESULTS

The Future system achieved a major productivity improvement and reduced risk considerably. Productivity improvement was not quantified in monetary terms but rather in terms of the work or labor saved. Instead of the shipping intelligence officer relying on the input of three experts and four operators in preparing the briefing to the admiral (which took about a day), the officer can now prepare the briefing single-handedly in less than an hour. This saves about six workdays daily. The experts, who held middle-management positions in the intelligence branch, and some of the operators have subsequently been transferred to other branches of the navy. The system freed them from a rather tedious and mundane task so that they could use their skills more effectively elsewhere. Risk was reduced in the sense that decision making is now more consistent and free from human error. All ships are analyzed in the same way by the system, which then alerts the shipping intelligence officer about those ships that have some threat potential. The chance of not detecting a potential threat has been reduced considerably.
ACKNOWLEDGMENTS

We gratefully acknowledge the contribution of Captain John Gower, Commander Piet Retief, and Lieutenant-Commander John Kingsley of the South African Navy. Without their support, the success of this project would not have been possible. We also thank Dr. T. de Wet and an anonymous referee for comments that substantially improved the presentation of the article.
REFERENCES

Engelmore, R. and Morgan, T. 1988, Blackboard Systems, Addison-Wesley Publishing Company, Wokingham, England.
Everitt, B. 1980, Cluster Analysis, Halsted Press, John Wiley and Sons, New York.
Funk, K. 1988, "A knowledge-based system for tactical situation assessment," Annals of Operations Research, Vol. 12, Nos. 1-4, pp. 285-296.
Hart, A. 1986, Knowledge Acquisition for Expert Systems, Kogan Page, London.
Hickman, F. R.; Killin, J. L.; Land, L.; Mulhall, T.; Porter, D.; and Taylor, R. N. 1989, Analysis for Knowledge-Based Systems: A Practical Guide to the KADS Methodology, Ellis Horwood, Chichester, England.
Taylor, R. N.; Porter, D.; and Hickman, F. R. 1989, "CONCH: Client-oriented normative control hierarchy," ESPRIT Project 1098, Task G9 working paper, The Knowledge Based Systems Centre of Touche Ross Management Consultants.
PART SIX

What Next?
Webs of Cognition

Daniel McNeill and Paul Freiberger
It is early 1991. Bart Kosko leans back behind the desk in his office at USC, contemplating the Patriot missiles bringing down Iraqi SCUDS over Israel and Saudi Arabia. The day before, one glanced off a SCUD above Tel Aviv, deflecting it rather than destroying it. The SCUD hit the city, injuring over a hundred and killing three. Kosko says a more accurate target-tracker based on fuzzy logic, which he has devised, might have stopped the SCUD in midair and prevented this suffering. His target-tracker is an adaptive fuzzy system, a hybrid of fuzzy and neural networks which may rejuvenate AI. It unites the best of fuzzy logic with neural networks, computers based roughly on the brain.

Neural networks are among the most tantalizing advances in recent technology. They resemble the human brain, though remotely, like a paper plane resembles a jet. It is a big step forward, though, since a digital computer resembles the brain like a tossed rock resembles a jet. Neural networks "learn." That is, they use experience to become better and better at classifying, that basic feat of thought. Exposed to enough examples, they can generalize to others they haven't seen. For instance, if a character-recognizing neural net sees sufficient A's, it can come to recognize A's in fonts it has never encountered. They can also detect explosives at airports, bad risks for mortgage loans, and words in human speech. Moreover, neural networks can spot patterns no one knew existed. For instance, Chase Manhattan Bank used a neural network to help it reduce costs from stolen credit cards. It examined an array of information about thefts and discovered, intriguingly, that the most dubious sales were for women's shoes between $40 and $80. The first commercial neural net product debuted in June 1992. Based on a neural net chip developed by Federico Faggin and Carver Mead, it appeared in a check scanner, a device that scans the code at the bottom of checks and electronically consults the bank about it. It could speed up buying by check.

Reprinted with the permission of Simon & Schuster from Fuzzy Logic by Daniel McNeill and Paul Freiberger. Copyright 1993 by Daniel McNeill and Paul Freiberger.
Neural nets differ dramatically from the machines that now sit on the desktop. Traditional computers feed all information through a single point for processing. Neural nets process it everywhere and simultaneously. Digital computers store information at numbered addresses in memory. Neural nets store it throughout the system, at locales reached through content. Digital computers are brittle and can fail with one damaged part. Neural nets are robust and degrade slowly and gracefully, like Yamakawa's inverted pendulum.

Spanish neuroanatomist Santiago Ramón y Cajal (1852-1934) first suggested the central idea behind neural nets in 1893: The brain holds memories as patterns of linked neurons. In 1943, neurologist Warren McCulloch and mathematician Walter Pitts proved that a machine based on this principle, a neural network, could perform any feat that a digital computer could. In 1949, Canadian psychologist Donald Hebb expanded on Ramón y Cajal's idea. We store memories not in individual cells, he said, but in cell assemblies, in strings of cells. Every time electric current passes from one neuron to another, across the microscopic gaps called synapses, it forges a better connection and makes it easier for the current to pass the next time. A synapse is like a tollbooth at a bridge, but a special, friendly kind where the toll decreases the more often you cross. A driver among a delta of islands, an irregular mosaic, would tend to take the route with the least toll. Current flows that way in the brain, Hebb said. Any oft-followed path would constitute a cell assembly, a memory, so the brain would have assemblies for, say, square, rabbit, and sea. This scheme could work because the brain has between 10 and 100 billion neurons, and each can have 1,000 to 5,000 synapses.1 The number of possible cell assemblies is colossal.

In the 1950s Frank Rosenblatt began building machines based on this notion. He called them perceptrons, and they are the forerunners of today's neural nets. They are complexes of "neurons," all tied together electrically. The connections have different weights, and the greater the weight, the easier to cross. Each connection amounts to an association. Like human associations, they grow stronger or weaker with use. When they reach 0, they are forgotten. When they reach 1, they occur automatically, like the link between bell and salivation in Pavlov's famous canine.

Neural nets learn mainly by adjusting the strength of their synapses. If we show a net a G and it responds with G, it has hit the bull's-eye and receives positive reinforcement, which strengthens the connections used. It is more apt to follow that path next time. The neural net is the embodiment of John Watson and B. F. Skinner's behaviorism, now fallen so far from glory. In effect, the network tries out many different patterns, receives rewards for nearing the solution, alters the weights, and tries again, and again, and again.

In 1969, in a subsequently notorious episode, AI proponents Marvin Minsky and Seymour Papert of MIT published a proof that Rosenblatt's perceptrons, as then constituted, had stark limitations. The logic was unassailable, but most readers assumed it covered all versions of the device rather than just the elementary ones then employed. Perceptrons seemed a cul-de-sac. Interest chilled, and research shifted almost wholly to AI based on symbol processing. By the 1980s, when symbol-based AI was flopping about on the deck, researchers discovered that, with a few modifications, neural networks could sidestep Minsky and Papert's objections. The two have since incurred odium as suppressors of the field.

Do neural nets model the brain? The idea has provocative explanatory power. For instance, it makes very clear why people can identify patterns so much more easily than they articulate them. The patterns form first, gradually. We don't sense them until they reach a certain strength, and only much later, perhaps, do we translate them into words. Even then, the translation may be misleading. The neural net model also feeds ammunition to exponents of "intuition" as a more basic feature of the brain than symbol-oriented reason. Intuition, hunches, are our sense of the unarticulated patterns themselves. As makers of intelligence tests advise, guessing is usually a good idea if one has a hunch about a question. It can tap an incipient truth.

Regardless of such conjectures, as machines, neural nets differ fundamentally from brains. Most obviously, they have no emotions, creativity, or private thoughts. They lack the fabulous intricacy of gray matter. Their neurons are much simpler than real ones, which come in many varieties and relay information in numerous ways. Finally, the brain can devise rules from experience. Neural nets merely get better and better at recognizing patterns. As AI advocates have pointed out, they can't derive structured rules, and this inability not only sets them apart from brains but hobbles them as rivals to AI. Fuzzy systems would solve this problem.
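The authors describe this weight-adjustment scheme only in prose. A minimal perceptron-style sketch, with illustrative constants throughout (our construction, not anything from the book), shows the idea:

```python
# Minimal perceptron-style learning sketch. Weights strengthen or weaken
# with use, in the spirit of the reinforcement scheme described above.
# All constants are invented for illustration.

def predict(weights, inputs):
    """Fire (1) if the weighted sum of inputs crosses a fixed threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= 0.5 else 0

def train(weights, examples, rate=0.1, epochs=20):
    """Nudge each weight toward values that reproduce the labeled examples."""
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Learn a simple AND-like pattern from four labeled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train([0.0, 0.0], examples)
print([predict(weights, x) for x, _ in examples])  # [0, 0, 0, 1]
```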
STRUCTURE AND NUMBER

By the dawn of the 1990s, Bart Kosko had emerged as a leading figure in both fuzzy logic and neural nets. Though he was only an assistant professor, his 1991 work Neural Networks and Fuzzy Systems had sold out on publication and become a best-selling text. Beyond reformulating the theory of fuzziness, he has helped combine it with neural networks to create a powerful new hybrid.

Kosko contrasts AI and neural nets with fuzzy systems. He notes that traditional AI is one-dimensional and its basic unit is the symbol, a large, awkward item. Yet it also has structure, that is, rules, and rules are priceless shortcuts. "The good news is you can represent structure in knowledge," he says. "The bad news is you can't do much with it because it's symbolically represented. In other words, you cannot take the derivative of a symbol. So the entire framework of mathematics and most of the hardware techniques for making chips are not available to you in AI."

Neural nets are just the opposite. They have the advantage of number. "You can prove theorems and you can build chips. The problem is that neural nets are unstructured." They cannot handle rules. For instance, a traffic-control system attempts to spur the flow of cars through city streets. What should it do if traffic grows heavier in one direction? The answer is pure common sense: Keep the green lights on longer. Unfortunately, one cannot just tell a neural network to do that. "You have to give it lots and lots and lots of examples," he says, "and then maybe it'll learn." Moreover, like brains, neural nets do not have indelible memory. They are volatile. "You can't be sure it won't forget it when it learns new things. And that's the problem with neural networks, and that's why you don't have neural network devices in the office, in the factory."

The best of both worlds, he says, is the adaptive fuzzy system. (It is adaptive because it changes over time, to learn.) It has structure like traditional AI, so it can use rules. It also allows the pure math of neural nets, and thus chips and learning.
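That common-sense traffic rule is exactly the kind of knowledge a fuzzy system encodes directly. A tiny sketch, with invented membership breakpoints and timings:

```python
# Hypothetical fuzzy rule: IF traffic is heavy THEN extend the green light.
# Membership breakpoints and timings are invented for illustration.

def heavy(cars_per_minute):
    """Degree (0..1) to which traffic counts as 'heavy'."""
    if cars_per_minute <= 10:
        return 0.0
    if cars_per_minute >= 40:
        return 1.0
    return (cars_per_minute - 10) / 30  # ramp between the breakpoints

def green_seconds(cars_per_minute, base=20, extra=40):
    """Blend a base green time with an extension, weighted by 'heaviness'."""
    return base + heavy(cars_per_minute) * extra

print(green_seconds(5))   # 20.0: light traffic, base timing
print(green_seconds(25))  # 40.0: half 'heavy', halfway extension
print(green_seconds(50))  # 60.0: fully heavy, maximum green
```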
BRAIN SUCKING

Judea Pearl says fuzzy systems seem no different to him than ad hoc systems, devices engineers have long cobbled together to fit the circumstances. "I think Pearl is right from what he's seen," says Kosko. "They are beautifully ad hoc systems, in the sense that the deep principle of intelligence is that we don't know the input-output transformation. The expert can't articulate it, though he can act as a function. And that's the point." AI systems, he says, require us to fully specify all the rules, which can be nearly impossible. We often just don't know them. Neural networks, on the other hand, learn by example and only by example. But adaptive fuzzy systems can take the inputs and outputs of neural nets and express the relation in fuzzy rules. "It's basically detective work at the mathematics level. We want to estimate the rules just from the data, without the guess of a math model." The neural net behaves, and the fuzzy system divines the laws of that behavior.

In fact, deriving the rules has long bedeviled makers of fuzzy systems. As Zadeh notes, "The way it's been handled in the past, and it's still handled that way, is to build a system and see if that works. If it doesn't work, you begin to tinker with things." The quest to automate this process goes back to a paper by Mamdani and an associate in 1977. In 1984, Tomohiro Takagi and Michio Sugeno published an article which discussed, for the first time in detailed and useful fashion, how to obtain rules by observation. As Zadeh notes, "Let's consider the problem of parking a car. There are two approaches. One is through introspection. That is, you sit down, you analyze the way you park the car, and you come up with the rules. And the other is based on observation of somebody else's approach, not necessarily your own. And that is an important issue. This whole symbiosis of fuzzy logic and neural networks has to do with this problem basically: the induction of rules from observation."

Kosko has devised a learning method for machines that takes data straight from the outside world. He calls it differential competitive learning. "You ask Itzhak Perlman how he plays the violin and you'll get an answer. But you can't use that answer to replicate his performance. He doesn't give you an equation for mapping inputs on the page to tonal frequencies coming out of the violin. He doesn't know that and neither do you. Well, neither does a fuzzy system and neither does a neural system." But observation of violin-playing can lead to rules. So can, say, trapshooting. "We take you and you sit down and try to put the crosshairs on the target," he says. "You take your best shot and what you're doing for us is generating trajectory data. And while you're doing that, it's cranking through this competitive learning algorithm and very quickly we're generating boxes of rules. It's called brain sucking." The system can then take over and use the rules it has inferred. "So this is a real step toward automation at the intelligence level," Kosko says. "You yourself could not articulate most of the rules. In time you could, but we don't have to do that anymore. You don't need to talk. You just have to behave. That's why artificial intelligence has collapsed, because you can't articulate those fine rules. The problem with most fuzzy systems to date is that they require so much to articulate the rules, it takes a long time just to get a few."
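The book gives no algorithm for this rule extraction; a cartoon of the general idea, binning observed input-output pairs into rule cells, might look like the following (our construction, not Kosko's actual differential competitive learning):

```python
# Toy rule induction from behavior: watch (input, output) pairs and keep a
# running average output per input bin. This is only a cartoon of rule
# learning, not Kosko's differential competitive learning algorithm.
from collections import defaultdict

def learn_rules(observations, bin_width=10.0):
    """observations: iterable of (input, output) pairs from a skilled human."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y in observations:
        cell = int(x // bin_width)   # which input region the sample hits
        sums[cell] += y
        counts[cell] += 1
    # One rule per populated cell: IF input is near this region THEN output ~ average.
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Observing a marksman tracking a target (fabricated data).
data = [(3, 1.0), (7, 1.2), (14, 2.9), (18, 3.1), (26, 5.0)]
print(learn_rules(data))  # {0: 1.1, 1: 3.0, 2: 5.0}
```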
THE FUZZY KALMAN FILTER

Kosko did not select his trapshoot example on whim. It relates to the Kalman filter, which he calls "the single most powerful, most popular algorithm and system of modern engineering." This Bayesian technique helped put men on the moon and bring them back, and engineers use it for most navigation as well as analysis of the bloodstream and other tasks. "If you're trying to follow an airplane," he says, "and it goes behind a cloud, you still have to estimate where it is behind the cloud." The Kalman filter yields that estimate. It puts the crosshairs on a missile which may be moving at 2,000 miles per hour. The problem is twofold: The target is traveling very fast and the data are noisy.

Kosko has developed a fuzzy Kalman filter which he says exceeds the original. "We're taking the toughest benchmark, the Kalman filter, head-to-head, fair game, fair fight, doing the best you can with the Kalman filter and the best you can with fuzzy, and beating it," he says. The fuzzy filter has two extra advantages: robustness and learning from life. Like other fuzzy systems, it is robust in the sense that injury causes its performance to decline gradually, not abruptly. If one reaches in and begins randomly erasing rules, the fuzzy filter performs "quite well" until about 50 percent of them are gone. The Kalman filter, in contrast, "degrades very quickly." Kosko also tested it by inserting a few foolish rules, such as "Always turn left," and "again you find that unless you really do a major lobotomy, it performs fairly well." In addition, "it uses in-flight experience to modify its logical structure." It is the greater accuracy of his fuzzy Kalman filter, Kosko says, that could have saved lives in the Mideast.2
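The chapter never writes the filter down; for orientation, here is a minimal one-dimensional Kalman update in standard textbook form (illustrative noise values; Kosko's fuzzy variant replaces parts of this machinery with rules):

```python
# Minimal 1-D Kalman filter: fuse a predicted position with a noisy
# measurement, weighting each by its uncertainty. Standard textbook form,
# shown only for orientation.

def kalman_step(x, p, measurement, velocity, q=0.1, r=4.0):
    """One predict/update cycle.

    x, p: current position estimate and its variance
    q, r: process and measurement noise variances (illustrative values)
    """
    # Predict: move the estimate forward; uncertainty grows.
    x_pred = x + velocity
    p_pred = p + q

    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)          # gain between 0 and 1
    x_new = x_pred + k * (measurement - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [1.1, 2.0, 2.8, 4.2]:         # noisy position readings
    x, p = kalman_step(x, p, z, velocity=1.0)
print(round(x, 2))  # estimate stays near the true track despite the noise
```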
ORCA CALLS

A guest at a party is speaking a foreign language. Another person overhears it and identifies it as Arabic, without knowing a word of the tongue. "How can we do that? How does it happen?" says Rod Taber, of the University of Alabama at Huntsville. "It's mysterious." This kind of unarticulated pattern recognition had always fascinated Taber, and the tall, burly engineer eventually addressed an analogous problem with killer whales. He was working with General Dynamics in San Diego and became friends with marine biologists at the Hubbs Marine Research Institute. They had a variety of killer whale calls on tape, classified into types based on click, chirp, and whistle content. These types were dialects, and they showed the origin of each orca. Scientists had tried and failed to identify these half-second sounds with machines.

Taber set out to solve the problem. He borrowed orca calls from Norway, Canada, Antarctica, Iceland, and Alaska, as well as recordings of other underwater noise, such as ships, torpedo launches, helicopters, earthquakes, and fish signals. He then tried to build a neural computer to identify the killer whale dialects. "It worked terribly," he says. "You might as well have thrown dice. We figured there had to be a way of doing this, but something big was missing. So we jumped in with fuzzy logic." He and his group invented a fuzzy neuron, a new hardware device, and in 1987 gave a demonstration to the Navy. The Navy handed Taber one of its own orca tapes and asked him where it came from. "It was the funniest darn thing," he recalls. "According to fuzzy logic, it had about a 75 percent chance of being from the southern end of Alaska. Nothing came out over 75 percent. I said, 'The machine doesn't recognize it per se. But if we had to bet, it would be southern Alaska.'" In fact, the Navy had recorded it 20 miles south of Alaska.

The Navy reacted in an interesting way. "They sat me down and gave me a talking to," he says. "They said, 'This is the way you detect submarines. The Russians could use it to detect our subs more efficiently.'" Chastened, he cut back on his publications in this field. "I still talk about it, but I tone it down and try to avoid touchy issues."
NEURO-FUZZY IN JAPAN

The Japanese have also been quick to exploit this technology. For instance, at Mitsubishi's Industrial Electrical and Systems Development Laboratory, a cluster of deceptively nondescript buildings near Osaka, Atsushi Morita and his coworkers devised a mixture of fuzziness and neural networks that possessed strengths of both. Morita is a bright, cheerful man of 38 who grew up in Nara, the ancient temple city that was Japan's first capital. He first heard of fuzzy logic through Masaki Togai, whom he met at a conference in December 1985. At first, he failed to fully grasp it. "I thought it was for understanding language, not for control. In 1986, it was something new, but we didn't know how important or useful it might be." However, he employed it successfully in an electrical discharge machine, and by 1990 had developed a fuzzy neural network model.3 It worked fairly simply. Most neural nets start with their connections weighted the same throughout, that is, with no knowledge. Morita's model began with rules from experts. It sprang at once into being, like Pantagruel, then refined its rules even more. This tactic slashed learning time dramatically. Moreover, as it improves the rules, scientists can see how they change. The rules are not lost in the black box, but shift before one's eyes. Overall, these systems telescoped development and sidestepped much of the difficulty of deriving rules.

In nearby Osaka, Matsushita's Central Research Lab was using neural nets with fuzziness in commercial appliances. Manager Noboru Wakami had also encountered problems in determining the rules. The neural net proceeded by trial and error, an arduous exercise. "So we researched ways of automatically tuning the membership functions," he says. "For example, we combined neural networks with fuzzy logic. We decided the membership functions using a neural network learning algorithm." Matsushita describes neuro-fuzzy logic as "fuzzy logic designed by neural networks." The company notes that it required some time to determine the fuzzy IF-THEN rules. Moreover, fuzzy logic by itself could only consider a limited number of factors simultaneously. Of course, people have this problem too. If a driver on an icy road at night begins to skid, she must contemplate an array of weighted factors at once, perhaps too many. Bad decisions ensue. Matsushita found that, by taking data from experts and feeding it into neural networks, the nets could generate fuzzy rules automatically, 45 times faster than previous neural networks. Moreover, the machines could consider many more factors.

The first Matsushita products did not profit from this approach. But in February 1991, the company introduced its first neuro-fuzzy washing machine. Its original fuzzy washer adjusted for three variables: kind of dirt, amount of dirt, and load. The neuro-fuzzy machine could also handle type of detergent, quality of clothes, hardness of water, and other factors. It chose the most appropriate from among 3,800 different patterns of operation. The company soon also rolled out a neuro-fuzzy vacuum cleaner and rice cooker, and planned to place the technology in many other products.

Other, more advanced devices are blooming in the East.4 Japanese researchers are working on a fuzzy-neural character-recognizer that can identify letters in extreme noise, such as where ink spills over half the letter. While neural nets alone cannot achieve this feat, neural nets enhanced with fuzzy logic can. Another project allows robots to cut unknown metal surfaces. In addition, Chinese researchers are developing neuro-fuzzy systems for diagnosing silicosis and predicting change in foreign exchange rates.
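Wakami's group used a neural learning algorithm to tune membership functions; the toy below gives the flavor with a single breakpoint and invented numbers (Matsushita's actual method is not described in the text):

```python
# Toy membership tuning: nudge the 'heavy load' breakpoint so the machine's
# fuzzy output matches examples of correct wash times. All numbers invented;
# Matsushita's actual learning algorithm is not described in the text.

def wash_minutes(load_kg, breakpoint, base=20.0, extra=30.0):
    degree = min(max(load_kg / breakpoint, 0.0), 1.0)  # membership in 'heavy'
    return base + degree * extra

def tune(examples, breakpoint=10.0, rate=0.02, epochs=200):
    for _ in range(epochs):
        for load, target in examples:
            error = wash_minutes(load, breakpoint) - target
            # Finite-difference gradient stands in for backpropagation.
            grad = (wash_minutes(load, breakpoint + 1e-3)
                    - wash_minutes(load, breakpoint)) / 1e-3
            breakpoint -= rate * error * grad
    return breakpoint

examples = [(2.0, 32.0), (4.0, 44.0)]   # (load, correct minutes) pairs
print(round(tune(examples), 1))         # converges near 5.0 kg
```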
FUZZY COGNITIVE MAPS

Meanwhile, Bart Kosko was working on a new kind of decision system: the fuzzy cognitive map, or FCM (which he pronounces with internal schwas). FCMs are networks rather than one-way trees. They model situations by their classes and the links between them, and ultimately form a web of interlocking causes and their strengths. "I claim I can take any article and translate it into a fuzzy cognitive map," he says. For instance, he derived the map in Figure 15-1 from an economist's analysis of the complex political situation in South Africa in the late 1980s. The question was whether the United States should divest. The map has nine variables, such as mining, black tribal unity, and white racist radicalism, and an increase in one can create an intricate domino effect, causing many others to rise or fall. (Figure 15-1 uses simple plus and minus signs to show increase or decrease, but a true FCM would have numerical values.) For instance, it shows how a rise in black employment would ultimately affect apartheid.

The FCM is a dynamic system. It seeks equilibrium, and the final equilibrium is the inference. "In fact," Kosko claims, "the article does nothing but flesh out the cognitive map. So the prediction is that soon you'll see political articles on the op-ed page, and then they'll have an appendix which is the cognitive map. A few years from now, it will be just the reverse. The op-ed page will have the cognitive map and the words will be the appendix."
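The text only says the map "seeks equilibrium"; the iteration behind that phrase can be sketched in a few lines (a two-node toy of our own devising, not Kosko's nine-variable map):

```python
# Bare-bones FCM inference: push node activations through the signed link
# matrix and threshold, repeating until the state stops changing. The
# two-node map and its weights are invented for illustration.

def fcm_equilibrium(weights, state, steps=50):
    """weights[i][j] is the signed causal influence of node i on node j."""
    for _ in range(steps):
        nxt = [1 if sum(state[i] * weights[i][j] for i in range(len(state))) > 0
               else 0
               for j in range(len(state))]
        if nxt == state:        # equilibrium reached: this is the inference
            return nxt
        state = nxt
    return state

# Node 0: Black Employment (a self-loop keeps the policy input switched on).
# Node 1: Apartheid, negatively driven by employment in this toy map.
weights = [[1, -1],
           [0,  0]]
print(fcm_equilibrium(weights, [1, 1]))  # [1, 0]: apartheid driven down
```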
FIGURE 15-1 Fuzzy Cognitive Map (signed + and - causal links among variables such as Black Employment)
Fuzzy cognitive maps reduce political analysis to a matter of identifying variables, the links between them, and the strength of the links. If the situation alters, one can easily adjust these links. The FCM thus responds at once to feedback. Moreover, he expects analysts will be able to work easily with the maps, as they have with other fuzzy devices, because they can comprehend them. "They don't have to understand mathematics. They can just common-sense argue it. We can go through it link by link." It is, he says, a picture.

Kosko's FCM of South Africa proved controversial, perhaps partly because he adapted it from an article by Walter Williams, a black conservative. When he published it, the editors insisted he remove the labels like "Foreign Investment" from the nodes, and Kosko, angered, put "Censored" in their place. Yet unlike conventional expert systems, FCMs can be easily combined, so they need not reflect the sole judgment of a Walter Williams. "We can take any number of expert opinions, the more the better, and combine them into some unified FCM," he says. "In other words, the underlying knowledge you are trying to tease out of these experts, it comes out and it improves as you increase the number of experts." Moreover, one can fix the weight given each expert, so the opinion of a minor figure does not equal that of a high authority. The map also deftly handles contradictory assessments of the same relationships. "Fuzziness doesn't run away from contradictions. It's built on them," Kosko says. "I foresee a day when you can take every document that's ever been written, beginning with the Sumerians, take all the technical journals and all the books, the world's accumulated knowledge, wouldn't you like to see what that FCM looks like? And the beauty of it is that once you've done that, once you've grown this immense FCM in the sky, all you've really done is initiated the real FCM. Because now you're going to use techniques drawing straight from data. So for succeeding eons, this map will be growing and evolving in a very structured way, in a way that no single mind can perhaps understand, and it will make very terse predictions about states of affairs."

Such a behemoth authority would generate social problems. "At first it would scare the hell out of people," Kosko says. "That's why everyone would have one." But Kosko believes that the history of science shows that "can implies ought." He likens FCMs to early computers themselves, which disturbed people until everyone had them on their desks. "This sort of stuff will be available to everyone very soon."

Kosko adds that mammoth FCMs could clarify arguments and thus better pinpoint disagreements. "You could build a Republican map and a Democratic map, and you can argue the links. Most of the links will be the same, except some of the relations between policy variables won't be there." One could make predictions on the basis of both "bushes," see which come true, and plunge into the faulty bush to fix the "twigs." Where the two sides disagree, other twigs would reveal the source of the conflict, and one could narrow the dispute to the real issue. "Earlier attempts to do things like this fell down," he says, "because they used decision trees or they went too far and tried to write down equations, which
I don't think is tenable. But you can argue factual cases much more easily than value cases. If you can bring that down and get all the links to agree in [positive or negative] sign, if not in magnitude, that's a great achievement." In any case, he says, "The good news is: The bigger the map, the more the nodes and the more connections between them, the less any given change in magnitude seems to matter."

Sex Robots and Cancer Zappers
With such powers, Kosko envisions fuzziness as radically altering our world, our sense of the role of computers. For instance, he says, the machines will routinely write novels. A computer might not pop out a thriller every day, but perhaps once a week or month. He predicts such a power within 20 years, and perhaps some form of it sooner. Would any human being care to read a computer novel? "Well," he says, "I think that will take a certain amount of time," but he feels it is inevitable. It would follow the basic plot rule of Aristotle, with three acts in a ratio of 1/4, 1/2, and 1/4. "You can break this down more and more methodically, and that stuff is very easy to teach an AI system. So you'll get a nice crisp three-act structure with the conflict resolved, and everything you can articulate in the classroom will be housed in that book. Then locally it will be filled in. You want a Hemingway style, you get a Hemingway style. You pick your author."

Voice recognition will be commonplace, he says. The modern struggle with this task will seem ancient history, and current voice recognizers will take their rightful place in museums for citizens to chuckle over. Automatic chauffeurs are coming too, he notes. Already, some companies are designing sonar devices to provide emergency braking on the freeways. If the car in front stops too quickly, it will pump the brakes. "The next step, and it's not in principle that hard to do, is to put down a layer of asphalt that's shot through with small emitters or receivers. In other words, smart roads and smart cars, and maps embedded in the system and a voice system to tell it where to go. It's all a matter of obstacle avoidance."

He also foresees sex robots. "I would look for machine intelligence products there, because so much of sex is just grossly sensory, and because of the rapid advances in ceramics, and because you really don't have great tactile differentiation ability." He says such a fuzzy neural system is not farfetched at all. "It can't just be a Barbie doll. It has to appear to have an infinite repertoire of behavior," he says. "Given any input, it will generate an output and those outputs will have great variety." The market would be vast, he thinks. "Just imagine if someone in Kyoto comes up with flesh that, if you did a Turing test, people said, 'Yes, that's flesh,' and you put in a little water and warm it up and so forth, and it makes all the sounds. The technology is not that hard. I think also if AIDS persists, and it could go on and on like the plague went on for centuries, then you'd be forced toward sex substitutes." Such machines, he thinks, might profoundly affect marriage and the nature of the family.
The potential of fuzziness extends even farther. Kosko dismisses MYCIN and similar medical diagnostic systems as crude. "Forget that business," he says. "There are many other ways to do that." For instance, Eric Drexler has speculated about a new domain he calls nanotechnology, the technology of machines the size of molecules. It could yield a bouquet of wonders, such as infinitesimal devices that automatically convert energy into food or roam the bloodstream in search of disease. This field is taking shape much faster than most scientists expected, and in 1990 researchers stacked 35 xenon atoms to spell IBM. Such micromanipulation will be critical to build these tiny machines. "Real solid progress has been made," Kosko says. "But the problem with Drexler's concept was that he had this undefined idea in it: AI engineering. In fact, he was really talking about neural or fuzzy or a mixture of both." Kosko thinks no one could ever build a large enough AI expert system, with its ponderous load of rules, to work efficiently at the molecular level. Yet the devices require machine intelligence. For instance, a molecular virus hunter would need to track fast-moving targets. Once it learns that trick, he says, it demands very little computational power. "You don't need a chip to do it." The machine would also have to make judgment calls, which require more. "In other words, the idea of a cancer cell or a healthy cell: it can't call all these in advance. It'd have to have a neural-like system to recognize categories or patterns, not just specific instances. So I think that's the key part."

The ultimate promise of nanotech is the dream of ages: immortality. "The minute you start taking a quantitative view of health at the molecular level, there's really no reason you can't keep those molecules healthy and definitely slow the aging process down to nil." Individuals cryonically suspended (and Kosko believes strongly in cryonics) might then be brought back to enjoy lasting youth.

These visions have a sparkling appeal. But will they come true, or will they harden into garish embarrassments like the predictions for AI and neural networks? Overhype is a real danger to nascent technology. It creates expectations which go unattained, and ultimately infects a field with discouragement. Kosko may or may not be prophetic, but we can begin to form an opinion by peering into the labs where engineers are creating tomorrow's fuzzy products.
NOTES

1. Stephen M. Kosslyn and Oliver Koenig, Wet Mind: The New Cognitive Neuroscience, New York, Free Press, 1992, p. 42.
2. For a fuller description, see Bart Kosko, Neural Networks and Fuzzy Systems, chapter 11.
3. Atsushi Morita and Akio Noda, "Fuzzy Model of Neural Network Type," in Proceedings of the 1990 National Convention IEEE Japan, Industry Applications Society, pp. 1.13-1.20.
4. Proceedings of the International Fuzzy Engineering Symposium '91, November 13-15, Yokohama, Japan, Fuzzy Engineering toward Human Friendly Systems, vol. 1, Tokyo, Ohmsha, 1991, pp. 515-561.
Into the Future

Stan Franklin

Reprinted by permission of The MIT Press. From "Into the Future," Stan Franklin, Artificial Minds, The MIT Press, 1995. Copyright 1995 Stanley P. Franklin.
Is it possible that consciousness is some sort of quantum effect?
-Nick Herbert, Quantum Reality

I believe that robots with human intelligence will be common within fifty years.
-Hans Moravec, Mind Children

Perhaps within the next few centuries, the universe will be full of intelligent life: silicon philosophers and planetary computers whose crude ancestors are evolving right now in our midst.
-Lynn Margulis and Dorion Sagan, Microcosmos

So far we've toured purported mechanisms of mind for which worked-out theories, or models, or even prototypes exist. Whether these mechanisms could be useful as mechanisms of actual minds may be doubted, but there's no doubt that they are mechanisms. On this, our final tour stop, we'll visit more speculative mechanisms and more speculation about the future of artificial minds.
THE QUANTUM CONNECTIONS

Surely the most mysterious theory of science today is quantum mechanics. Although spectacularly successful at predicting outcomes of experiments in physics, quantum mechanics is extraordinarily resistant to any kind of explanatory narrative. Any story that's told to explain it seems not to make sense somewhere. Some samples: Entities are routinely both particles and waves, and not waves in a real medium but probability waves. Parallel universes multiply at every observation, one universe realizing each possible outcome. Reality exists only in the presence of an observer. Reality is created by consciousness. (Note that creating reality is not the same as the creating of information about reality that I've been pushing.) My favorite layman's account of quantum mechanics is Herbert's Quantum Reality (1985), which demystifies the subject as much as seems humanly possible, and explores a variety of explanatory narratives, all more or less weird. The opening quote was taken from this book.

Quantum theory shines in the realm of the almost infinitesimally small, the world of subatomic entities like protons, neutrons, electrons, photons, quarks, and a host of other particles with yet stranger names. (I used to say "subatomic particles," but no more.) Each such quantum entity is associated with (or is?) its wave function. Suppose a photon is traveling through some medium toward a detector. Quantum mechanics has that photon traveling all its possible paths at once, with its wave function representing these possibilities. At a given location, the square of the amplitude (height) of the wave function gives the probability of the photon's appearing at the location, should an observation be made. The shape of the wave represents attributes other than position (spin, mass, charge, momentum, etc.). If our photon's position is detected, say by its striking a phosphor screen, its wave function is said to collapse and its particle nature to emerge. All this must happen in relative isolation. If there is a chance encounter along the way, our photon's wave collapses before it reaches the screen.

In these days of digital everything, we've almost forgotten analog computing, using natural processes to compute for us directly. The simplest example I can think of is my old slide rule. By moving the slide appropriately, it would multiply two numbers for me. The speed was admirable, though the accuracy depended on the length of the rule and wasn't always what I wanted. Electronic analog computers could integrate in a jiffy, again with limited accuracy. The idea, like that of the slide rule, was to construct an electronic device whose operation, viewed mathematically, performed the operation you want. For example, Ohm's law says that in an electrical circuit the current flow equals the product of the voltage applied and the conductance (the reciprocal of the resistance). If I build a circuit whose voltage and conductance I can vary and whose current I can measure, that's an analog computer for multiplication. To multiply two numbers, a and b, set the voltage to a, the conductance to b, close the switch, and measure the current, ab, the product of the two. An analog computer typically performs one specific task.

Over a decade ago the physicist Richard Feynman proposed building quantum analog computers for appropriate tasks. In an analog fashion, such a computer might perform a multitude of calculations in parallel and in an instant, as the wave function collapses to the desired answer. No such quantum computer has yet been built,1 but the theory has been advanced (Deutsch 1985; Deutsch and Jozsa 1992). Shor (1994) has shown that a quantum computer could, in principle, be built to factor 100-digit numbers. Kak (1992) suggests building a quantum neural computer for solving AI problems. Caulfield and colleagues have shown that some optical processors can be "uniquely and truly quantum mechanical." They use such processors in the design of a quantum optical device to emulate human creativity.2
Is it just a short step from quantum computing to quantum consciousness? Can consciousness in humans (and other animals) be the result of quantum computing on the part of neurons? Recall that we saw such suggested by Penrose, a serious and respected scientist, who used it as an argument on the side of the scoffers in the first AI debate (1989). If intelligence is produced by nervous systems using quantum processors, ordinary computers might be hard pressed to duplicate it. "But," you cry, "neurons aren't in the quantum realm, being several orders of magnitude too large." True. And this objection, of course, concerned Penrose until he was rescued by Hameroff. Hameroff proposed that the microtubules that act as an internal skeleton for each neuron (and any other cell as well) also serve as quantum information-processing devices (1987, in press; Jibu et al. 1994). He claims that certain properties of consciousness (unitary self, free will, subjective experience) resist non-quantum explanation, and that microtubules "are the best bets for structural bases for consciousness."3 Penrose embraces this view in Shadows of the Mind (1994). Journalist accounts are also available (Freedman 1994; Horgan 1994).

As you no doubt expected, the notion of quantum consciousness via microtubules has been greeted with howls of protest. Microtubules are still orders of magnitude too large, are by no means isolated, operate at too high a temperature, and so on. Most critics are not yet ready to believe that consciousness, subjective experience, requires quantum explanation. They view Penrose and Hameroff's proposal as using an elephant gun to hunt an elusive mouse in a thicket. I suspect the critics might be right. Right or wrong, quantum microtubules are now hypothesized mechanisms of mind, and thus bear watching. And hypothesized quantum computers are also intended as mechanisms of mind. Who knows what the future will bring? Let's next visit with a respected roboticist, who may not know but is willing to stick his neck out and predict.
MIND CHILDREN

Moravec, a roboticist at Carnegie Mellon whom we met during our itinerary run, believes that intelligent robots, our mind children, will outstrip us.

Unleashed from the plodding pace of biological evolution, the children of our minds will be free to grow to confront immense and fundamental challenges in the larger universe. We humans will benefit for a time from their labors, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away. Very little need be lost in this passing of the torch; it will be in our artificial offspring's power, and to their benefit, to remember almost everything about us, even, perhaps, the detailed workings of individual human minds. (1988, p. 1)
A powerful prediction! Moravec is certainly not afraid to stick his neck out. And what's it based on? Projections, intuition, and an imagination that would do Robert Heinlein or Arthur C. Clarke proud. Moravec's projections flow from a couple of fascinating figures. The first plots power against capacity (Figure 16-1). Capacity is memory size measured in bits, and power is processing speed in bits per second. Note the log scale on each axis. Each labeled unit is a thousand times greater than the previous one. I'm particularly intrigued that a bee outperforms the computer I'm writing on, and that a single human can outdo the national telephone network. This figure sets the stage for Moravec's projections, and the next opens the curtains.

FIGURE 16-1 Computational speed and storage capacity (reprinted from Moravec 1988)

Figure 16-2 plots computing power per dollar against time. The result is essentially linear, with power per dollar increasing a thousandfold every twenty years. Extrapolation will yield tools with human computing power at a reasonable cost in forty years. Thus, Moravec's prediction. High-level artificial minds are just over the horizon, he says.

FIGURE 16-2 A century of computing (reprinted from Moravec 1988)

I'm a little skeptical. First, I'm doubtful about predictions in general. Thomas J. Watson, the founder of IBM, once predicted five machines as a worldwide market demand for computers.
Second, every growth curve I've ever seen has eventually leveled off. The speed of an electron through a wire or chip is limited by the speed of light. The amazing development in recent years portrayed in this figure has resulted from chips more densely endowed with components, so that traveling distances are less. But we're approaching the quantum barrier. Much smaller distances and we're in the quantum realm, where computations may not be reliable. Maybe this curve is about to level off. Finally, even if computers as computationally powerful as a human nervous system do emerge, there's no assurance we'll be able to program human intelligence into them. Don't get me wrong, though. It's not that I disbelieve Moravec's prediction. It's that I wouldn't bet on his time frame.

Enough about projections. Let's go on to imagination. What would you say to a fractal, recursive robot? What? Well, let's take it slowly. Clouds are fractal in that they look much alike at any scale. A photograph of a square mile of clouds would be hard to distinguish from one of a square yard of the same clouds.4 In computing, a recursive procedure is one that calls itself during its operation. Suppose I want a recursive procedure to find the length of a string of characters like abc. I might define the procedure so as to return 0 for the empty string, and to return 1 + the length of the rest of the string otherwise. The rest of abc is the string bc. This length procedure calls itself during its operation. It's recursive. (A sketch of this procedure in code appears below.)

For our entire tour, we've visited mechanisms of mind. We'll now visit a mechanism of body, a fractal mechanism, a recursive mechanism. A strange body indeed. But why a mechanism of body on a mechanisms of mind tour? Because, as Varela et al. pointed out to us, minds are always embodied: mind is constrained by structural coupling, that is, by body, and how it meshes with the environment. So bear with me as we visit a wondrous body and speculate about possible minds for it.

Imagine a meter (yard)-long cylinder, ten centimeters (four inches) in diameter. Inside are a power supply and a control mechanism. Now imagine four half-length, half-diameter copies of it, including power supply and control mechanism, two attached at each end. The joints have at least the freedom of movement of a human wrist, and built-in sensors for position and force. Continue this recursive robot-building process twenty times, creating what Moravec calls a bush robot.
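Franklin's recursive length procedure, described a moment ago, is worth seeing as actual code; a direct transcription (ours) in Python:

```python
def length(s):
    """Recursive string length, exactly as described in the text."""
    if s == "":                  # the empty string has length 0
        return 0
    return 1 + length(s[1:])     # 1 + the length of the rest of the string

print(length("abc"))  # 3: length("abc") calls length("bc"), then length("c")
```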
FIGURE 16-3 A robot bush (reprinted from Moravec 1988)
Figure 16-3 shows his conception of what it might look like. After twenty halvings, the roughly 1 trillion last cylinders (cilia) would be about a micron (millionth of a meter) long. And, having so little inertia, they'd move a million times faster than the largest limbs. But what's the bush robot standing on? These smallest limbs would be crushed by the weight. Well, it simply folds under enough sections to get to some strong enough to support it.

Moravec postulates remarkable sensing abilities for his bush robot. Since each joint can sense forces and motions applied to it, suppose each of the 1 trillion cilia senses movement of a tenth of a micron and forces of a few micrograms, all this at speeds up to a million changes per second. By comparison, the human eye distinguishes about a million parts, registering changes about 100 times per second. The bush robot would "look at" a photograph by caressing it with tiny cilia, sensing height variation in the developed silver. It could watch a movie by walking its cilia along the film as it moved by at high speed. Wild! Cilia could also be sensitive to heat, light, and so on. An eye could be formed by holding up a lens and putting a few million cilia in the focal plane behind it. Or, without a lens, carefully spaced cilia could be used as a diffraction grating to form a holographic image. Wilder! But try this one. The bush robot could

reach into a complicated piece of delicate mechanical equipment, or even a living organism, simultaneously sense the relative position of millions of parts, some possibly as small as molecules, and rearrange them for a near-instantaneous repair. In most cases the superior touch sense would totally substitute for vision, and the extreme dexterity would eliminate the need for special tools. (1988, p. 105)
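Moravec's numbers check out with a few lines of arithmetic (our verification; the branching and halving factors come from the description above):

```python
# Each cylinder carries four half-size copies; after twenty levels:
branches = 4 ** 20            # 1,099,511,627,776: the "roughly 1 trillion"
length_m = 1.0 / 2 ** 20      # a 1-meter trunk halved twenty times

print(f"{branches:,} cilia, each {length_m * 1e6:.2f} microns long")
# 1,099,511,627,776 cilia, each 0.95 microns long: about a micron
```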
There's more. Since each branch contains its own power supply and controller, the bush could break into a coordinated swarm of subbushes. They could communicate via sound vibrations of a few thousand cilia. The smaller the subbush, the less intelligent and less powerful it would be. Perhaps its home branch would send it on some mission. A light enough subbush would be able to walk on the ceiling, like geckos, using its cilia to hold to microscopic cracks. A sufficiently small subbush would have so much surface area for its weight, it would be able to fly like an insect, beating its cilia to provide propulsion. And what might a swarm of flying subbushes do? Dealing with killer bees might be one application.

But how might such a bush robot come into being? Moravec views it as self-constructing. Humans would build a few tiny bushes to start the process. These would cooperate to build bushes of the same size and one size larger, to which they would join themselves. The process would repeat until the largest branch was constructed. And the smallest branches?

It could make the smallest parts with methods similar to the micromachining techniques of current integrated circuitry. If its smallest branchlets were a few atoms in scale (with lengths measured in nanometers), a robot bush
could grab individual atoms of raw material and assemble them one by one into new parts, in a variation of nanotechnology methods. (1988, p. 104)

And what about action selection for this bush robot? How is it to be controlled? Moravec suggests enough computing power in each branch to control routine activity, and to appeal one level higher when something unusual occurs: a hierarchical control structure. This places a severe burden on the largest branch. The buck stops there. I'd picture each subbush as an autonomous agent with a built-in preference for being attached to the larger bush. Each subbush would select its own actions via some mechanism like those we've visited, say a pandemonium architecture. Such a strategy would make it much more reactive but might cause other problems. Imagine a subbush, away from the parent bush, whose own four largest subbushes decided to take a powder. Why include such a thought experiment on our tour? Because it clearly highlights how much the action selection (controller) of an autonomous agent depends on its material form and function, that is, how much the mind depends on the body.5
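Moravec's hierarchical scheme (handle the routine locally, appeal upward for the unusual) is simple to sketch; the class and event names below are invented:

```python
# Sketch of the hierarchical control idea: each branch handles routine
# events itself and appeals one level up when something unusual occurs.
# Entirely illustrative; Moravec gives no code.

class Branch:
    def __init__(self, parent=None):
        self.parent = parent

    def handle(self, event, routine=("grip", "release", "probe")):
        if event in routine:
            return f"handled locally: {event}"
        if self.parent is not None:
            return self.parent.handle(event)    # appeal one level higher
        return f"top-level decision needed: {event}"

trunk = Branch()
limb = Branch(parent=trunk)
cilium = Branch(parent=limb)
print(cilium.handle("grip"))        # handled locally: grip
print(cilium.handle("earthquake"))  # top-level decision needed: earthquake
```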
THE FUTURE SUPERCOSM

Having just visited with physicists and roboticists, let's end this tour stop by visiting briefly with biologists. Margulis and Sagan, in their Microcosmos (1986), convinced me that I'm only an interloper living symbiotically with the bacteria in their domain. It was humbling for me to come to understand that this is the age of bacteria, that on Earth it's always been the age of bacteria, and that it's likely to be so until the sun blows. Unless, that is, it becomes the age of machines. But let them tell their story:
That machines apparently depend on us for their construction and maintenance does not seem to be a serious argument against their viability. We depend on our organelles, such as mitochondria and chromosomes, for our life, yet no one ever argues that human beings are not really living. Are we simply containers for our living organelles? In the future humans may program machines to program and reproduce themselves more independently from human beings. (p. 259) Margulis and Sagan share Moravec’s view of the coming dominance of machines, and are, sacre dieu, even willing to consider including them among the living. But they do leave a ray of hope.
The most helpful note for human survival may be the fact that we are presently as necessary for the reproduction of our machines as mitochondria are for the reproduction of ourselves. But since economic forces will pressure
machines to improve at everything, including the manufacture of machines with a minimum of human work, no one can say how long this hopeful note will last. (p. 259)

They even speculate on our roles should this ray of hope be realized.
Simply by extrapolating biospheric patterns, we may predict that humans will survive, if at all recognizably, as support systems connected to those forms of living organization with the greatest potential for perception and expansion, namely machines. The descendants of Prochloron, the chloroplasts, retained a much higher rate of growth inside plant cells than did Prochloron, their free-living green bacterial relatives patchily distributed in the Pacific Ocean. Analogously, human beings in association with machines already have a great selective advantage over those alienated from machines. (p. 260)

Viewed globally, domesticated cattle and sheep must be considered biologically successful. Their numbers are large and stable. They've a symbiotic relationship with humans by which they are provided food, reproductive assistance, protection from predators, and medical care. Nondomesticated cattle and sheep don't do so well by any of these measures. Perhaps we humans, as a species domesticated by our mind children, will do equally well in a world populated, perhaps dominated, by artificial minds, "silicon philosophers and planetary computers."

Thus our last and most speculative tour stop comes to a close. Before saying goodbye to the tour, let's recall some of what we've seen, and how the sights have supported the views of mind mentioned in the itinerary, if indeed they have.
NOTES

1. My friend John Caulfield tells me that special-purpose quantum computers have been built, but I haven't yet been able to gather the specifics.
2. Personal communication.
3. From a message posted on Psyche-D, a discussion group on the Internet, dated July 8, 1994.
4. A technical definition of fractal requires concepts that would take us too far afield, for instance, fractional Hausdorff dimension (Mandelbrot 1983; Barnsley 1988).
5. Moravec holds a diametrically opposite view, taking mind to be independent of body. Since this biased tour was designed to support a particular paradigm of mind, I won't take you to visit his speculations about transferring minds from body to body. But don't let my biases prevent you from making a side trip of your own to visit these wonders.
Index

A Cyborg Manifesto (Haraway), 3
Abilities, behavior-specific analysis of human, 154
Actions
  call, 149
  domains of behavior-specific, 154-55
  normal, 149
  regular, 148-49
  regular vs. behavior-specific, 151-55
Adaptation, mutual, 222-26
Adept expert system, 221
AI (artificial intelligence), 31
  on the battlefield, 48-49
  contrasted with fuzzy systems, 277
  programming with, 51-76
  research capabilities of parallel computers, 73
  researchers preserving an image, 49-50
  searching for, 51-76
  Twig software, 218
AIM (Automated Mathematician), 107-9
Analog computing, 288
Analyses, means-end, 24
Archimedes, 113
Argnoter software, 177-78
Artificial minds, 290
Artificial Minds (Franklin), 8
Asynchronous communication, asynchronous thought, 201-2
Authority on authorities, 131-33
Bacon (computer program), 113
  believing in, 117
  looking for linearity, 114
Bassham, J. A., 19
Bawden, David, 5, 79-100
Behavior-specific actions, domains of, 154-55
Bell Communications Research, 185
Bobrow, Daniel, 37
Brain, The (Restak), 35
Brains; See also Minds; Cortexes (human)
  activation of neurons, 58
  capacity for encoding memories, 58
  cerebral cortexes, 71
  discoveries about the human, 52-61
  duplicating processing power of, 67-69
  evolution of, 70-71
  lack of reliability of, 70
  limbic system of, 71
  processing power of, 55-57
  as related to neural networks, 277
  sizes of human memory, 61
  and unconscious information, 59
Brown, John Seely, 179-80
Browsing related information organization, 92-93
BSAs (behavior-specific acts), 150-51
Burgess, Anthony, 147
Bush robot, 292-93
Cactus software, 218
Calvin, M., 19
Cancer zappers, 284-85
Capture Lab, 182-83
Carden, K. J., 261-72
Cells, pyramidal, 54
Cerebral cortexes, 71
Change, adaptive spirals of, 223-26
Chaplin, Charlie, 150
Chauffeurs, automatic, 284
Chess
  fewest replies heuristic, 17-18
  master's memories, 115
  players winning over computers, 44
  problem solving in, 13-18
Chess Life, 17
Children, learning by programming, 47-48
Chinese Room, 154, 156
Classrooms, computers in, 45-47
Cognition, webs of, 275-85
  brain sucking, 278-79
  fuzzy Kalman filter, 279
  neuro-fuzzy in Japan, 280-81
  orca calls, 280
  structure and number, 277-78
Cognitive maps, 281-85
Cognitive authority, 121-44
Cognitive elements
  communication about Notes, 234-36
  training on Notes, 237-38
Cognoter software, 177
Colab software, 176-77, 181-82, 184
Collaboration
  building, 186
  computer-augmented, 174-86
Collaborative environments, 184-85
Collaborative modeling, 171
Collaborative models, 172
Collaborative technology, 175
Collaborative tools, 167-86
  blackboards, whiteboards, mental models, 171
  what are, 167-74
Collins, Harry, 6
Combinations (chess) defined, 14
Communications
  asynchronous, 201-2
  channels, 95
  forms of, 193
  technology, 193
  transactional models of, 172
Competent systems, 44
Computer-augmented collaboration, 174-86
Computer-based communications
  computer conferencing, 188
  electronic mail, 188
  systems, 187-208
    asynchronous communication, asynchronous thought, 201-2
    divergence and convergence, 202-3
    emphasis on the group, 197-99
    informality, 203-4
    interpersonal interaction, 195-97
    patterns, 195-206
    proliferating, 191
    resources, 207
    technology, 188-95
    technology as participant, 204-6
    users with producers, 199-200
    valuing of chance, 200-201
Computer-based knowledge synthesis, 206
Computers
  AI research capabilities, 73
  avoiding nature's mistakes, 69-72
  chess players winning over, 44
  in classrooms, 45-47
  conferencing, 188-90, 196, 198
  coping with change, 38-39
  difficulty recognizing images, 40
  economies of scale, 64
  fierce industry competition, 63-64
  fifth generation of, 62-67
  graphics, 205
  making inferences, 40
  mediums of collaboration, 176
  not first-rate teachers, 46
  not recognizing emotions, 40
  programs
    ELIZA, 37
    INTERNIST-1, 43
    LOGO, 47
    SHRDLU, 37-38, 72-73
    XCON, 43
  thinking like people, 31-50
    acquiring human know-how, 32-33
    competent stage, 33
    novice stage, 33
    AI on the battlefield, 48-49
    AI researchers preserving an image, 49-50
    can children learn by programming, 47-48
    computers in classroom, 45-47
    computers, coping with change, 38-39
    intuition of experts, 33-36
    is intelligence based on facts?, 36-37
    just how expert are expert systems?, 43-44
    microworlds versus the real world, 37-38
    minds like holograms, 41-42
    thinking with images, not words, 39-41
  von Neumann, 65-66
Computing, analog, 288
CONCH (client-oriented normative control hierarchy) approach, 263
Conferencing, computer, 196, 198
CONFIG software, 224-26
Connection Machine, 67
Construct computer-aided system, 220
Conversations, parallel voices in, 180
Cook, Sandra, 44
Corporate culture, 260
Cortexes (human); See also Brains
  as circuit board, 52-61
  described, 53
  similarities with the printed circuit board, 53-54
Creativity
  information provision for, 86
    general, 86-96
    organization and retrieval of information, 92-96
    type of information required, 88-92
  nature of, 81-86
    definitions, 81-82
    specific aspects, 82-83
    stimulation techniques, 84-85
  stimulation of, 79-100
    background, 80-81
  stimulation techniques, 95-96
    brainstorming, 85
    lateral thinking, 85
    morphological analysis, 85
    synectics, 85
Crevier, Daniel, 5, 51-76
Crick, Francis, 169-71
CRUISER defined, 185
Culture, corporate, 260
DARPA (Defense Advanced Research Projects Agency), DOD, 48, 64, 73
Data
  equal access to, 180
  representation, 59-60
deJongh, P. J., 261-72
Didactic libraries, 137-38
Discoveries
  based on problem-solving skills, 116
  the light of, 101-17
  on machines, 116-17
  not always purely inductive, 114
Discrimination
  processes, 26
  trees, 25-26
Displacement, law of, 113
Disturbing the Universe (Dyson), 48
DNA
  learning from experience, 110-12
  structures, 170
DOD (Department of Defense), U.S., 31, 148
Double helix, finding the, 169-70
Double Helix (Watson), 170
Drexler, Eric, 285
Dreyfus, Hubert, 5, 31-50, 37, 49-50, 151-53
Dreyfus, Stuart, 5, 31-50, 151-53
Dykstra, John, 168
Dyson, Freeman, 48
Einstein bottlenecks, 64
Electronic contexts, sense and nonsense in, 247-60
Electronic mail, computer-based communications, 188
Electronic sidewalks, 200
ELIZA program, 37
Embodied knowledge, 146
Embrained knowledge, 146
Encultured knowledge, 147-49
Engelbart, Douglas, 172
Entities, hypothetical, 13
Environments, collaborative, 184-85
EPAM (information-processing theory), 25-26
Eureka, 113
Eurisko (computer program), 101-2, 111
  concept of, 103
  creating concepts itself, 104
  name explained, 113
  purpose, 105
  similarity between Darwinian evolution, 110
  upstaging Automated Mathematician (AM), 109
Experience, learning from, 110-12
Expert systems
  Adept, 221
  just how expert are?, 43-44
Experts, intuition of, 33-36
Face recognition, 40
Familiarization processes, 26
FCMs (fuzzy cognitive maps), 281-85
Feynman, Richard, 48, 288
Fifth generation of computers, 62-67
Franklin, Stan, 8, 287-95
Freiberger, Paul, 275-85
Future
  into the, 287-95
    future supercosm, 294-95
    mind children, 289-94
    quantum connections, 287-89
  knowledge-based system
    background, 262
    development methodology, 262-64
    knowledge acquisition process, 264-71
    results, 271
    for threat assessment, 261-72
  supercosm, 294-95
Fuzzy cognitive maps, 281-85
Fuzzy Kalman filters, 279
Fuzzy systems, adaptive, 275
Gallwey, Timothy, 48
Garrulous theories, 18-19
Graphics, computer, 205
Group think syndrome, 198-99
Groupware implementation, 231-46
  background to the Notes acquisition, 233-34
  discussion, 243-45
  research results, 233-43
    cognitive elements, 234-38
    structural elements, 238-43
  research site and methods, 232-33
Hamlet (Shakespeare), 147
Haraway, Donna, 3
Herbert, Simon, 11-29
Hewitt, Carl, 72-73
Holograms, mind being like, 41-42
Humans
  abilities of, 154
  become more charitable to machines, 159
  become more like the image of machines, 160
  machines and the structure of knowledge, 145-63
  machines and the Turing test, 155-57
  memory sizes of, 57-61
  reaching equivalence, 61-72
  starting to behave more like machines, 159-60
Hypothetical entities, 13
IBM task forces, 183-84
Images, thinking with, 39-41
Information
  elementary processes, 19-23
    in chess theory, 21-22
    miscellaneous, 20-21
    in serial pattern recognition, 22-23
    symbols, lists and descriptions, 20
  intelligence and manipulation of, 52
  interdisciplinary, 88-90
  peripheral, 90
  processing in computer and man, 11-29
    garrulous theories, 18-19
    levels of explanation, 12-13
    parsimonious theories, 18-19
  retrieval, 121-44
  science and questions of quality, 140
  systems
    and creativity, 79-100
    and overload, 259
  unconscious, 59
Inner Tennis (Gallwey), 48
Intelligence
  based on facts, 36-37
  and manipulation of information, 52
Interaction, interpersonal, 195-97
INTERNIST-1 program, 43
Intuition of experts, 33-36
Johansen, Robert, 187-208
Johnson, George, 5, 101-17
Knowledge
  acquisition process
    example of a naval grouping, 269-71
    example of a tactical threat, 268-69
    system, 267-68
  codification, 6
  concepts, 187
  difficulties of transferring, 148
  embodied, 146
  embrained, 146
  encultured, 147
  exceptions and inconsistencies in existing, 91-92
  generation, 5-6
  management tools, 1-8
  structure of, 145-63
  symbol-type, 147
  synthesis, 187-208
  and technology, 4-5
  transfer, 6-7
Kosko, Bart, 275, 277, 279, 282-85
Lamarckianism defined, 112
Learning from experience, 110-12
Lenat, Douglas B., 101-7, 109-10, 112
Leonard-Barton, Dorothy, 7, 211-29
Librarians
  answering reference questions, 133-35
  as delegates, 134-37
  and quality of texts provided, 127
  role in didactic libraries, 138
  special status as cognitive authorities, 136-37
  taking positions on open questions, 141
Libraries
  authority on authorities, 131-33
  didactic, 137-38
  increased prominence for, 130
  liberal, 138-39
  use of, 126
Limbic systems, 71
Listening and editing, 259-60
Listening more and talking less, 259
Logic, non-interest in, 106
LOGO program, 47
McCarthy, John, 49-50
Machinery of the Mind (Johnson), 5
Machines
  get better at mimicking us, 158-59
  humans become more charitable to, 159
  humans become more like the image of, 160
  humans start to behave more like, 159-60
McNeill, Daniel, 275-85
Manufacturers introducing new products, 63
Maps, fuzzy cognitive, 281-85
Means-end analyses, 24
Media, classes of, 192
Memories, sizes of human, 57-61
Memories, of chess masters, 115
Microworlds versus the real world, 37-38
Minds; See also Brains; Cortexes (human)
  artificial, 290
  being like holograms, 41-42
Minsky, Marvin, 36-37, 50, 72-73
Misinformation systems, 125-29
Modeling, collaborative, 171
Modern Times, 150
Monitor software, 219
Moravec, Hans, 55-57, 63, 289-93
Mutant heuristic, 109
Mutual adaptation defined, 222-26
Nanotechnology, 66, 285
Nature, avoiding mistakes of, 69-72
Nearly extreme heuristic defined, 109
Neural networks, 275-77
  contrasted with fuzzy systems, 277
  have volatile memories, 278
  unstructured, 277
Neural Networks and Fuzzy Systems (Kosko), 277
Neuro-fuzzy in Japan, 280-81
Neurons (of brains), 53-54, 58
Newell, Allen, 36-37
NIBH (not invented by human) syndrome, 5
1984 (Orwell), 154
Notes
  background to the Notes acquisition, 233-34
  cognitive elements, 234-38
  discussion, 243-45
  learning from, 231-46
  research results, 233-43
  research site and methods, 232-33
  structural elements, 238-43
Organizations, problem-solving, 25
Orlikowski, Wanda J., 7, 231-46
Orwell, George, 154
Overload and information systems, 259
Papert, Seymour, 37, 47-48, 50
Parallel processing, 66
Parallel processors, 40
PARC (Palo Alto Research Center), Xerox, 175, 185
Parsimonious theories, 18-19
Pattern recognition, unarticulated, 280
Pauling, Linus, 170
People
  listening and editing, 259-60
  listening more and talking less, 259
Perceptrons defined, 276
Perceptual processes, 15
Peripheral information, 90
Photosynthesis of Carbon Compounds, The (Calvin and Bassham), 19
Planck's radiation law, 116
PLANNER programming language, 72-73
POD defined, 184
Price curves of new manufactured products, 63
Printed words, authority of, 121-42
Problem solving
  in chess, 13-18
    statement of the theory, 14
    strategies, 14
  organization, 25
Processes
  in chess theory, 21-22
  discrimination, 26
  elementary, 19-23
  familiarization, 26
  miscellaneous, 20-21
  planning, 24-25
  in serial pattern recognition, 22-23
  symbol-manipulating, 15
  symbols, lists and descriptions, 20
  view of thinking, 23-26
    discrimination trees, 25-26
    means-end analyses, 24
    planning processes, 24-25
    problem-solving organization, 25
Processing, parallel, 66
Processors, parallel, 40
Programming
  with AI, 51-76
  automatic, 106-7
  can children learn by, 47-48
Programs
  ELIZA, 37
  INTERNIST-1, 43
  LOGO, 47
  SHRDLU, 37-38, 72-73
  XCON, 43
Prototyping, rapid, 171
Pyramidal cells, 54
Pyrrhonian skeptics, 141-42
Quantum computing, 289
Quantum connections, 287-89
Quantum consciousness, 289
Quantum mechanics, 287
Quantum theory, 288
Radiation law, Planck’s, 116 Rapid prototyping, 171 Receptors defined, 5 1 Recognition, face, 4 0 Restak, Richard, 35 Reticular formations, 71 Retinas, layout of, 71 Retrieval techniques, 93-95 Robots hush, 292-93 sex, 284 Rogers, N.A., 261-72 Ruggles , Rudy L., 111, 1-8 Rules system of, 39 Wittgensteinian problem of, IS1
Schrage, Michael, 7, 167-86
Schwartz, Jacob T., 55-56
Sciences of the Artificial, The (Simon), 115
SCP (Strategic Computing Plan), 48
Searle, John, 50, 154, 156
Second-Hand Knowledge (Wilson), 6
Sense making
  away from terminals, 248
    affiliating, 249-50
    consolidating, 250-52
    deliberating, 250
    effectuating, 248-49
    triangulating, 249
  in front of terminals, 252-57
    action deficiencies, 252-53
    affiliation deficiencies, 255-56
    comparison deficiencies, 253-55
    consolidation deficiencies, 257
    deliberation deficiencies, 256
  ways to improve, 257-60
Sex robots, 284
Shakespeare, William, 147
Shared Minds (Schrage), 7
Shared spaces, 173-74, 178, 181
SHRDLU programs, 37-38, 72-73
Sidewalks, electronic, 200
Sight, perception of, 51
Simon, Herbert, 4, 36-37, 114-17, 257
Situation assessments, 267
Software
  Argnoter, 177-78
  Cactus, 218
  Cognoter, 177
  Colab, 176-77, 181-82, 184
  CONFIG, 224-26
  Monitor, 219
  struggle to keep up, 72-74
  Twig, 218
Spaces, shared, 173-74, 178, 181
Specific gravity, concept of, 113
Speculation, 90-91
Spirals of change, small to large, 223-26
Star Wars, 168
Stefik, Mark, 175, 179-81
Stereolithography defined, 186
Structural elements
  firm culture and work norms, 241-43
  policies and procedures, 240-41
  reward systems, 239-40
Supercosm, future, 294-95
Symbol-manipulating processes, 15
Symbol-type knowledge, 147
Syndromes
  group think, 198-99
  NIBH (not invented by human), 5
Synthesis, concepts of, 187
Systems, competent, 44
Taylor series, 116
Teachers, computers not first-rate, 46
Technical processes and tools, new
  implementation as innovation, 212
  implementing and integrating, 211-29
  mutual adaptation, 222-26
  pacing and celebration, 226-27
  user involvement, 212-22
Technology
  accessible resources, 194
  communication, 193
  record-keeping processes, 193-94
  usage and complexity, 194-95
Texts
  acquiring cognitive authority, 123-24
  cognitive authority of, 122-23
  evaluations by specialists, 129
  evaluations of, 127
  publication history, 123
  quality control on, 125
  subjectively satisfactory, 128
Theories
  garrulous, 18-19
  parsimonious, 18-19
Thinking
  with images, not words, 39-41
  processes, 23-26
Threat assessment, 261-72
  example of a tactical, 268-69
Tools for knowledge management, 1-8
Traveller (war game), 101
  exploring, 103
  fleets and mathematical concepts, 110
  playing, 109
Traveller, 102
Turing
  sociological predictions of, 158-60
    humans become more charitable to machines, 159
    humans become more like the image of machines, 160
    humans start to behave more like machines, 159-60
    machines get better at mimicking humans, 158-59
  tests
    humans, machines, and the, 155-57
    simplified, 157
Turing, Alan, 158
Twig AI programs, 218
Unconscious information, 59
Universe defined, 38
User involvement
  apprenticeship mode, 221-22
  codevelopment, 219-21
  consultancy mode, 219
  creating buy-in, 213
  delivery mode, or over-the-wall, 216-18
  differing forms of expertise, 214-15
  embodying knowledge, 213-14
  merits of, 214
  modes of, 216
  representativeness, 215
  user selection, 214
  user willingness, 215-16
Vian, Kathleen, 187-208
Voice recognition, 284
von Neumann computer, 65-66
Waltz, David, 74
Watson, James, 169-71
Weick, Karl E., 7, 247-60
Wellsprings of Knowledge, The (Leonard-Barton), 7
What Computers Can't Do (Dreyfus), 37
Wilson, Patrick, 6, 121-44
Winograd, Terry, 37-38
Winston, Patrick, 73, 107
Wittgensteinian problem of rules, 151
Words, authority of the printed; See also Texts
  authority on authorities, 131-33
  demand for evaluation, 129-31
  didactic libraries, 137-38
  liberal libraries, 138-39
  librarians as delegates, 134-37
  misinformation systems, 125-29
  skeptical librarians, 139-42
Work, coordination of intellectual, 179
World defined, 38
WYSIWIS (what you see is what I see), 176, 180
XCON program, 43
Xerox PARC (Palo Alto Research Center), 175, 185